The Poetry of Logarithms

by Ted Andromidas

Note: For this pedagogical discussion, you will need Appendices I and X to {The Science of Christian Economy}, {So You Wish to Learn All About Economics}, and the April 12, 2002 issue of {Executive Intelligence Review}.

“You have no idea how much poetry there is in a table of logarithms.” — Karl Friedrich Gauss to His Students

Developing a function for the distribution of the prime numbers has been one of the great challenges of mathematics. An exact solution to this problem, of how many of the numbers between 1 and any given number, N, are actually prime, has not yet been discovered, though there is a general notion of a succession of manifolds as determining any solution.

One of the most stunning demonstrations of the generation of number by an orderable succession of multiply-connected manifolds is Karl Friedrich Gauss' discovery of the "Prime Number Theorem." The wonderfully paradoxical nature of Gauss' approach, in contradistinction to that of Euler, is that we must move to geometries associated with the physics of higher-order forms of curvature, such as the non-constant curvature of catenary functions, and those forms of physical action associated with living processes, for a first approximation solution.

To understand the importance, and the elegance, of this discovery, we must first investigate a class of numbers called logarithms. Hopefully, it will also demonstrate the inherent differences between a "constructive" approach to the generation of such numbers as logarithms, as against the formalisms of the textbook. I have included, as an addendum at the end of this discussion, a short rendering on the subject of logarithms, modeled on that of a typical textbook, so the reader might better appreciate the conceptual gulf separating the constructive approach from that of classroom formalisms.

"It is more or less known that the scientific work of Cusa, Pacioli, Leonardo, Kepler, Leibniz, Monge, Gauss, and Riemann, among others, is situated within the methods of what is called synthetic geometry, as opposed to the axiomatic-deductive methods commonly popular among professionals today. The method of Gauss and Riemann, in which elementary physical least action is represented by the conic form of self-similar-spiral action, is merely a further perfection of the synthetic method based upon circular least action, employed by Cusa, Leonardo, Kepler, and so forth." [fn. 1]

It is in this domain, physical least action associated with the self-similar spiral characteristic of living processes, that we search for a solution to the ordering principle which, in fact, might generate the prime numbers. Gauss' approach involves understanding the idea behind the notion of a logarithm.

Logarithms are numbers which are intimately involved in the algebraic representations of self-similar conic action. In previous discussions, we saw that number measures more than just position or quantity; number can also measure action. We discovered that numbers in one manifold measure distinctly different qualities than numbers in another manifold, and that what and how you count can sometimes leave "footprints" of a succession of higher-ordered manifolds.

All descriptions of logarithmic spiral action, and the rotational action associated with them, are of two types of projection:

1) The 3 dimensional spiral on the cone: we understand that each increase in the radial length of the 2 dimensional, self-similar spiral on the plane is a projection from the 3 dimensional manifold of the conic spiral. The projection of the line along the side of a cone, which intersects and divides the spiral, is called "the ray" of the cone. [See {The Science of Christian Economy}, APPENDIX I]

2) The 3 dimensional helical spiral action from the cylinder: the rotation of the three dimensional manifold of the cylindrical spiral (helix) projects onto the two dimensional plane as a circle. Nonetheless, some action is taking place, and that action is represented, therefore, by a "circle of rotation," as simple cyclical action; i.e., we "count" each completed, or partially completed, cycle of rotation of the spiral.

Turn to the April 12, 2002 issue of EIR, page 16 (see figure), "The Principle of Squaring"; review the caption associated with that figure ["The general principle of 'squaring' can be carried out on a circle. z^2 is produced from z by doubling the angle x and squaring the distance from the center of the circle to z."] and construct the relevant diagonal to a unit square. The side of the square is one; the diagonal of that square equals the square root of two. Use that diagonal, the square root of two, as the side of a new square; the diagonal of that new square, whose area is 2, will be a length equal to two. We are generating a series of diagonals, each, in this case, a distinct power of the square root of two. In this case, it is a spiral which increases from 1 to 16 after the first complete rotation, from 16 to 256 after the second rotation, etc. As we will soon see, each of the successive diagonals, beginning with the first square's side of 1, is also part of a set of "roots" of 16.

Each diagonal is 45 degrees of rotation from the previous diagonal; this should be obvious, since the diagonal divides the 90 degree right angle of the square in half. Therefore, each time we create a new diagonal and a new square, in turn generating another diagonal and another square, we generate a series of diagonals, each 45 degrees apart. It should also be obvious that 45 degrees is equivalent to 1/8 of 360 degrees of rotation, or 1/8 of a completed rotation of the spiral.

Let us now review a few fundamental elements of this action: we can now associate, in our spiral of squares, a distinct amount of rotation with a distinct diagonal value. In this case the diagonal values are powers of the square root of two or some geometric mean between these powers.

Table 1
Rotation   Diagonal Value
0          1 or (√2)^0
1/8        √2 or (√2)^1
2/8        2 or (√2)^2
3/8        √8 or (√2)^3
4/8        4 or (√2)^4
5/8        √32 or (√2)^5
6/8        8 or (√2)^6
7/8        √128 or (√2)^7
8/8        16 or (√2)^8
9/8        √512 or (√2)^9
10/8       32 or (√2)^10
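The correspondence in Table 1 can be checked numerically. The sketch below is my own, not from the text; it simply computes each diagonal as a power of the square root of 2, paired with its rotation in eighths:

```python
import math

# Each diagonal of the "spiral of squares" is the previous one
# multiplied by sqrt(2), and each new diagonal adds 1/8 of a rotation.
for k in range(11):
    diagonal = math.sqrt(2) ** k
    print(f"rotation {k}/8 -> diagonal (sqrt 2)^{k} = {diagonal:.4f}")
```

After one full rotation (8/8) the diagonal is (√2)^8 = 16, which is why this spiral is base 16.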

The diagonals of this "spiral of squares" function much like the rays [fn. 2] (or radii) of a logarithmic, self-similar spiral. We can imagine an infinite number of self-similar spirals increasing from 1 to any number N after one complete rotation. Each successive complete, whole rotation will then function as a power of N [Table 2]:

Table 2
Rotation   Power
0          N^0 or 1
1          N^1 or N
2          N^2
3          N^3
4          N^4

Each rotation of the logarithmic spiral increases the length of the ray (the growth of the spiral) by some factor, which we can identify as the "base" of the spiral. In other words, the base of the spiral which increases from 1 to 2 in the first rotation (and doubles with each successive rotation) is identified as base 2; the base of the spiral which increases from 1 to 3, as base 3; from 1 to 4, as base 4; … from 1 to N, as base N, etc. The spiral, base N, will after one complete rotation beginning with ray length 1, generate a ray whose length is N^1; after 2 rotations, the spiral will generate a ray whose length is N^2; after 3 complete rotations, the ray length will equal N^3, etc.
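The growth rule just stated can be put in a few lines of code. The function name here is my own invention, used only for illustration:

```python
def ray_length(base, rotations):
    """Length of the ray of a self-similar spiral of the given base,
    starting from length 1, after `rotations` turns
    (fractional rotations give roots and geometric means)."""
    return base ** rotations

# Base 2 spiral doubles each rotation: 1 -> 2 -> 4 -> 8.
print(ray_length(2, 3))    # 8
# Base 16 "spiral of squares": 1 -> 16 -> 256.
print(ray_length(16, 2))   # 256
# Half a rotation of the base 2 spiral gives the square root of 2.
print(ray_length(2, 0.5))
```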

To measure or count rotation, we now define a "unit circle of rotation." We can map a point of intersection of a spiral and a ray, whose length is equal to or greater than one, onto a point on a unit circle. In this way it seems that a point on our circle of rotation can map onto, potentially, an unlimited number of successive points of intersection of a spiral and any given ray. But, when we look at our circle of rotation, we are looking at the projection of a cylindrical spiral. We can therefore "count," as cycles or partial cycles, the amount of rotation required to reach the point at which a ray maps onto the unit circle and the spiral at the same time.

Look again at the musical spiral of the equal-tempered scale (see Figure 1, page 50, {So You Wish to Learn All About Economics}). Here, I am looking not at successive ROTATIONS of the spiral, but at DIVISIONS of, in this case, one rotation of the octave, or base 2, spiral.

When I divide the rotation of the spiral in half (6/12ths), I get F#, or the square root of 2 [see Chart 2]. When I divide the rotation of the spiral by 3 (4/12ths), the first division is the G#, or the cube root of 2. So each successive rotation is a power of N, i.e., N^1, N^2, N^3, etc. Each successive DIVISION represents a root of N, i.e., √N, ³√N, ⁴√N, ⁵√N, etc.

Chart 2
Division   Root of Two        Musical Note
0          1 or 2^0           C
1/12       12th root of 2     B
2/12       6th root of 2      A#
3/12       4th root of 2      A
4/12       3rd root of 2      G#
5/12       2^(5/12)           G
6/12       square root of 2   F#
1          2                  C
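The division-to-root relationship for the octave (base 2) spiral can be verified directly. The sketch below computes the ratio for each twelfth of a rotation; I have left out the note names, since those depend on how the chart is read:

```python
# Each 1/12 division of one rotation of the base 2 (octave) spiral
# corresponds to a 12th root of 2, the equal-tempered semitone ratio.
for k in range(13):
    ratio = 2 ** (k / 12)
    print(f"{k}/12 of a rotation -> 2^({k}/12) = {ratio:.5f}")

# Half a rotation (6/12) is the square root of 2 (the F# of the chart);
# a third of a rotation (4/12) is the cube root of 2.
assert abs(2 ** (6 / 12) - 2 ** 0.5) < 1e-12
assert abs(2 ** (4 / 12) - 2 ** (1 / 3)) < 1e-12
```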

As we have now discovered, given any spiral base N, we can associate a distinct amount of rotation with a distinct power or root of N. Each successive complete rotation can be associated with a power of N; each division or partial rotation can be associated with some root of N, or a mean between N and another number. This distinct amount of rotation to a point on the “circle of rotation”, which can then be associated with a distinct rotation of a self-similar cylindrical spiral, is the logarithm of the number generated as a ray intersecting the spiral at a particular point.

For example, take our spiral of the squares; that spiral is base 16. The logarithm of 16 is one, written as Logv16(16) = 1. [fn. 3] Using our Table 1, we can create a short "Table of Logarithms" for base 16. Turn once again to the April 12, 2002 issue of EIR, pages 16 and 17; as Bruce indicates, if I double the rotation, I square the length. Let us try various operations with the table of logarithms below.

Table of Logarithms, Base 16
Logarithm   Unit value of diagonal or "ray"
0           1 or (√2)^0
1/8         √2 or (√2)^1
2/8         2 or (√2)^2
3/8         √8 or (√2)^3
4/8         4 or (√2)^4
5/8         √32 or (√2)^5
6/8         8 or (√2)^6
7/8         √128 or (√2)^7
8/8         16 or (√2)^8
9/8         √512 or (√2)^9
10/8        32 or (√2)^10

Add the logarithm of 2 to the logarithm of 4, base 16. What is the result? (2/8 + 4/8 = 6/8, the logarithm of 8, base 16.) If I add the logarithm of 2, base 16, to the logarithm of 4, base 16, the two ADDED rotations give me the logarithm of 8, base 16; and 8 is the product of 2 x 4.

Now subtract the logarithm of 4, base 16 (i.e., 4/8) from the logarithm of 8, base 16 (i.e., 6/8), and the remainder will be the logarithm of 2, base 16, or 2/8. Now take any of the logarithms from our table, base 16; add or subtract the logarithms of any pair of numbers, and see if the results correlate with the multiplication or division of those same numbers. In other words: adding or subtracting the logarithms of numbers (i.e., the amount of rotation) correlates with multiplication or division of those numbers.
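These operations on the base 16 table can be checked mechanically with Python's `math.log`, which accepts an explicit base; a minimal sketch:

```python
import math

def log16(x):
    """Logarithm base 16, i.e., rotations of the spiral of squares."""
    return math.log(x, 16)

# Adding rotations multiplies the rays: 2/8 + 4/8 = 6/8.
assert math.isclose(log16(2) + log16(4), log16(2 * 4))
# Subtracting rotations divides them: 6/8 - 4/8 = 2/8.
assert math.isclose(log16(8) - log16(4), log16(8 / 4))
print(log16(2), log16(4), log16(8))
```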

When I am looking at the number we call a logarithm, I am actually looking at the measure of two distinct forms of action in the complex domain of triply extended magnitudes: the cyclical nature of helical action, combined with the continuous manifold of the logarithmic spiral. That is precisely why Gauss understood "…how much poetry there is in a table of logarithms." We will look at this relationship in another way next time, when we investigate why: "It's Really Primarily Work."



2) The ray of a cone is a line perpendicular to the axis of the cone, intersecting the spiral arm. [It can also be constructed as a straight line from the apex of the cone to an intersection with the spiral. Both project onto the plane as the same length.] When we project from the 3 dimensional cone to the two dimensions of the plane, we assume that the apex angle of the cone is 45 degrees, so that the ray of the cone and the axis are of equal length.

3) LogvN(N) = 1 is the equivalent of saying "the logarithm (Log), in base N (vN), of N equals 1." In the above case, we are saying that the logarithm of 16 in base 16 is 1.

ADDENDUM I: “What is a logarithm?” according to the book.

"… a logarithm is a number associated with a positive number, being the power to which a third number, called the base, must be raised in order to obtain the given positive number."

Presuming we understand the concept of "the power to which a number is raised," definitions of "exponent" and "base" might be necessary at this time. An exponent "… is a symbol written above and to the right of a mathematical expression to indicate the operation of raising to a power." In other words, in the simple function 2^2 = 4, the exponent is 2; in the function 2^3 = 8, the exponent is 3; etc. The definition of a "base" is a little more complicated.

When we write our numbers we use the digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Since we use these 10 digits and each digit in the number stands for that digit times a power of 10, this is called “base ten”. For example, 6325 means:

6 thousands + 3 hundreds + 2 tens + 5 ones.

Each place in the number represents a power of ten:

(6 x 10^3) + (3 x 10^2) + (2 x 10^1) + (5 x 10^0), or 6325

We could also use base 2, 3, 5, or any other that would seem most appropriate to our requirements.

Let us look at base 2, the mathematics of the computer. There are 2 digits in base 2, 0 and 1; as with base ten, each digit represents a power of the base number, in this case 2. For example the number 1101, base 2, is: (1 x 2^3) + (1 x 2^2) + (0 x 2^1) + (1 x 2^0) or 13, base 10.
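The place-value rule works the same way in any base. The helper below is my own, for illustration; it evaluates a digit string in a given base exactly as described above:

```python
def to_decimal(digits, base):
    """Evaluate a digit string in the given base:
    each place stands for a power of the base."""
    value = 0
    for d in digits:
        value = value * base + int(d)
    return value

print(to_decimal("1101", 2))    # 13, as in the example above
print(to_decimal("6325", 10))   # 6325
```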

Base 10 is called "the common base," and was most widely used in developing logarithmic tables. Let us take an example: the logarithm of 100 in base 10, which is 2. To say it another way, in base 10, 10^2 (^ denotes exponent or power) = 100, and the exponent, in this case, is 2. We will note this relationship in the following way: v denotes a subscript, followed by the base number, such that, in mathematical shorthand, the logarithm of 100 in base 10 will be written Logv10(100) = 2.

The logarithm of 10 base 10 or Logv10(10) = 1, Logv10(100) = 2, Log v10(1000) = 3, etc. Therefore, if I add:

Logv10(10) + Logv10(100) = 3

I get a logarithm of 1000 in base 10, which is also the exponent of 10^3, or 1000.

If I subtract:

Logv10(10,000) – Logv10(100) = 2

I get 2, which is the logarithm of 100 base 10, which is also the exponent of 10^2, or 100.

In other words, adding the logarithm of any number, N, to the logarithm of any other number in that base number system, N1, generates the logarithm of the product of those numbers:

Log(N) + Log(N1) = Log(N x N1)

Subtracting the logarithm of N from the logarithm of N1 generates the logarithm of the quotient of those numbers:

Log(N1) – Log(N) = Log(N1/ N)
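Both identities are easy to spot-check with common (base 10) logarithms; a minimal sketch:

```python
import math

log10 = math.log10

# Product rule: Log(N) + Log(N1) = Log(N x N1)
assert math.isclose(log10(10) + log10(100), log10(10 * 100))
# Quotient rule: Log(N1) - Log(N) = Log(N1 / N)
assert math.isclose(log10(10_000) - log10(100), log10(10_000 / 100))
print("product and quotient rules verified")
```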

Consequently, tedious calculations, such as multiplication and division, especially of large numbers, can be replaced by the simpler processes of adding or subtracting the corresponding logarithms. Before the age of computers and rapid calculating machines, books of tables of logarithms were indispensable to engineers, astronomers, and anyone else who needed to calculate with large numbers.

I think the preceding discussion has been a relatively accurate one page “textbook” introduction to logarithms and their use. If it seems somewhat confusing, one solution is that described by a typical professor of mathematics identified as “Dr. Ken”, who, using the Pavlov/Thorndike approach to arithmetical learning, suggests that:

“The way you think about it is this: the log to the base x of y is the number you can raise x to get y. The log is the exponent. That’s how I remembered logs the first time I saw them. I just kept repeating ‘the log is the exponent, the log is the exponent, the log is the exponent, the log is the exponent,…’ “

A singular problem arises when we use the Pavlov/Thorndike approach, replacing the name of one number with that of another ("x is y," or "the log is the exponent") and then simply memorizing it. If we do not know the characteristic of action generating the exponent, then what is the logarithm anyway? If this simple equivalency were all there was to the matter, then we would have no concept of the characteristic action that corresponds to this class of numbers.

Can There Be Any Linearity At All?

by Phil Rubinstein

It is often the case that mathematicians, scientists, and their followers are able to see anomalies, paradoxes, and singularities, but maintain appearances by limiting such incongruities to the moment or the instant or position of their occurrence, only to return immediately to whatever predisposition existed in their prior beliefs, mathematics, and assumptions. It is precisely this error that allows linearization in the small, in the typical case through reducing said singularities to an infinite series. In fact, in even the simplest cases, as we shall see, the singularity, anomaly, or paradox requires every term in the pre-existing system to change, never to return to its prior form.

There is nothing complex or difficult in this. Let us take the simplest example. Construct or imagine a circle with the two simple folds we have used before. Now, construct the diameter and its perpendicular bisector, giving us four quadrants. Now, take the upper left hand quadrant and connect the two perpendicular radii by a chord at their endpoints. If we consider the radius of the circle to be 1, we have a simple unit isosceles right triangle. Thus, from previous demonstrations, the chord connecting the two legs of the right triangle is the incommensurable square root of 2. Now, rotate the chord or hypotenuse until it lies flat on the diameter, or, alternatively, fold the circle to the same effect. The anomaly here is quite simple. Not only is the ratio of the chord to the diameter of the circle incommensurable, but the question arises: where does the end point of the chord touch the diameter? How do we identify it? From the standpoint of integral numbers and their ratios, this position cannot be located, nor can it be named within that system. This, despite the fact that if we take all the ratios of whole numbers between any two whole numbers, or ratios of whole numbers, we have a continuity, that is, between any two, there are an infinite number more. What, then, is the location? Is there a hole there, or break? While this has often been the description, this is clearly no hole! By the simplest of constructions, we have the location, exactly. Our chord does not "fall through"; its end does not "fall into a hole"!

Now, we find the typical effort is to say, yes, there is a strangeness here, but we can make it as small as we like. By constructing a series of approximations, we get a series of ratios that get closer and closer. Fine, one might say, but still, what is the description or number by which we designate the location? Well, comes the answer, the infinite series description can be substituted for the place or number, and everything in this description is itself a number, or ratio of numbers. Thus, we have reduced the problem in fact and located the continuum on our diameter. One may reflect that, as simplified as this is, it is essentially the point made by Cauchy, etc., although in a different context.
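The "series of approximations" referred to here can be made concrete with the classical rational approximations to the square root of 2, the continued-fraction convergents. Each term is a ratio of whole numbers, each is closer than the last, yet none is equal to √2, which is precisely the point at issue:

```python
from fractions import Fraction

# Convergents of the continued fraction for sqrt(2):
# 1/1, 3/2, 7/5, 17/12, 41/29, ...
p, q = 1, 1
for _ in range(8):
    approx = Fraction(p, q)
    # No ratio of whole numbers ever satisfies p^2 = 2 q^2:
    assert p * p != 2 * q * q
    print(approx, "->", float(approx))
    p, q = p + 2 * q, p + q
```

The approximations close in on √2 as fast as one likes, yet every term remains a ratio of whole numbers, and therefore is provably not the position at the end of the chord.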

In the calculus of Leibniz, the differential or limit exists as the area of change which determines the path of physical action. Cauchy reduces that physical reality to a mere calculation, by substituting an infinite approximation, or series for the limit, or area of change. What is lost is simply that reality which determines the physical action, and thus the ability to generate the idea of lawful change as a matter of physics.

But, does the anomaly go away? Clearly, it does not. To identify the actual position, which exists by construction, with a series that is infinite, endless, and made up precisely of components proven NOT to be at that position, does not solve the anomaly. The position exists, is different, and remains singular.

In fact, much more follows. Label the left end of the diameter A and the location where the chord and diameter meet B. We will label the intersection of the diameters O. We can now ask what happens if we move back along the two lines, the chord and diameter. Let us say we move from B towards O, the center of the circle. Since the end point B of the chord is incommensurable with the diameter, if one subtracts any rational distance towards O, the position reached is still not commensurable, and this is so for ANY rational distance from B all the way to A. So, every position so attained is likewise incommensurable, as many as there are rational numbers. If I attempt to subtract an incommensurable amount (e.g., by constructing an hypotenuse and folding it), one has not solved the problem but merely used a position unlocatable by integral numbers or ratios of them. In fact, we now have a new infinity of these unlocatable positions back on the diameter.

This process can be looked at in the following manner. Is the position at the end of the chord greater than, less than, or equal to a given position back on the diameter? If we take also any position obtained by subtraction as above, do we attain a position greater than, less than, or equal to a rational number on the diameter? In fact, it is impossible to express the answer to these questions! One may attempt to say that an infinite series is as close to, but always less than, some arbitrary distance, but unless one knows beforehand the position, one can never know whether we have passed the position, or are not there yet. The concept of predecessors or successors or equivalence is inoperable, inclusive of whole number cases.

Since this occurs as has been shown, everywhere on the two lines, the only solution is to change the conception of number, measure, or position for every position on the diameter and chord. To simply add “irrationals” will not do, since this will leave us with inconsistency everywhere: in effect, a line made up of locations that cannot be compared.

The problem expands to a critical point with the addition of the relation of the diameter to the circumference. We must change the concept of number for every position. In this case, integers, rationals become a case of a changed number concept or metric. Properly understood, rather than attempting to linearize the discontinuity, we should say every position on the line has “curvature.” This becomes more transparent if we think of Cusa’s infinite circle as in fact the ontological reality of the so-called straight line. Only such a “straight” line could contain the positions cited above, could be everywhere curved, and yet a line.

How did this occur? An anomaly was shown to exist. To incorporate that anomaly’s existence requires a full shift in hypothesis. More especially, any linear construction is not an actual hypothesis, since it is unbounded and open ended, its extension is always arbitrary. To exist, an hypothesis requires, conceptually, “curvature,” that is, change which identifies its non-arbitrary character. That is its hypothesis. That is, what exists in the anomaly in the small is a reflection of its characteristic actions, its hypothesis. There are no holes, no arbitrary leaps. Now, of course, this leaves open the question — what other changes, hypotheses may be reflected requiring further hypothesis. It is no mystery that any line, or segment of a line existing in a universe of such action will manifest those actions down to its smallest parts, and do so for each such action.

Heraclides of Pontus Was No Baby Boomer

By Robert Trout

It is, today, a commonly believed myth that before the time of Columbus, everyone thought that the earth was flat, and located in the center of the universe, with the rest of the universe orbiting around it. In fact, over 2000 years ago, Greek scientists, using only the most simple instruments, developed an advanced conception of the universe that could have explained the ordering of the solar system, and how this ordering determined the seasonal cycles on the earth. They had even discovered the precession of the equinoxes to begin comprehending the longer astronomical cycles. Today, we will examine Greek discoveries in astronomy through Heraclides of Pontus, who refuted the world view of the baby boomer generation, more than 2000 years before the first boomer was born.

Greek astronomy was based on a scientific method which was in opposition to the methods used in ancient Babylon. The astronomy of the ancient Babylonians is an excellent example of how an oligarchical society does not develop science. The Babylonian oligarchy used a pantheon of cults to control the population. The priest caste studied the heavens for the purposes of omen astrology and for the improvement of their calendar, which was a lunar one, unlike the superior Egyptian solar calendar.

The Babylonians left behind thousands of cuneiform tablets pertaining to astronomy. However, in the Babylonian approach to astronomy, not even a trace of a geometrical model is visible. Instead, they developed numerical methods using arithmetic progressions, in a fashion that would remind one of Euler. Using these methods, they were able to predict certain phenomena with the moon, within an accuracy of a few minutes. Although they compiled almost complete lists of eclipses going all the way back to 747 B.C., the Babylonians collected almost no reliable data on the motion of the planets. They never developed accurate methods for measuring the location of celestial objects, and never showed any interest in developing a unified conception of the cosmos.

Greek science developed as part of a cultural current which rejected the domination of an oligarchy. In the Homeric epics, man was presented matching his wits against the oligarchical Greek gods. In Aeschylus's play, "Prometheus Bound," the character Prometheus, whose name is the Greek word for forethought, gives science to mankind, to free them from the pagan gods.

Greek culture was, itself, split between a pro-republican and an oligarchical view, which is brought into sharpest relief by the opposing outlooks of Plato and Aristotle. Plato supplied the scientific method which has guided science ever since. He launched a research project to find "what are the uniform and ordered movements by the assumption of which the apparent movements of the planets can be accounted for."

Around 150 A.D., under the Roman Empire, the fraudulent astronomy of Ptolemy was imposed, which was based on the ideology of Aristotle. The writings of the Greeks, with few exceptions, were not preserved, so the only records that exist are usually descriptions by later commentators. Therefore, we must reconstruct these discoveries, based on knowing how the mind functions.

Unlike the Babylonians, the ancient Greek astronomers sought a geometrical ordering principle behind the phenomena which are visible in the heavens. An early Greek astronomer would have seen that the motion of the objects in the sky appeared to follow regular cycles. As well, the cycles of the sun, moon, stars, and planets did not exactly correspond, giving rise to longer subsuming cycles.

Each day he would see the sun appear to rise in the east, cross the sky, and set in the west. The moon also rose in the east, crossed the sky and set. However, the moon seemed to travel slower than the sun, with the sun going through a complete extra rotation in approximately 29 1/2 days. The appearance of the moon also changed, going through a complete cycle of phases approximately every 29 1/2 days.

At night, he would see stars, most of which appeared to maintain a fixed relationship to each other. The Greeks developed a conception of a celestial sphere to explain the fixed relationships of these stars. The “fixed stars” rotated as a group throughout the night, around a point in the northern sky which appeared to not move. Also, the position of the “fixed stars” appeared to shift slightly, from day to day, with the same east to west rotation. This slight shift, from day to day, in the fixed stars appeared to go through a complete cycle each year, corresponding to the cycle of the seasons. A number of other cycles corresponded to the year. The sun’s path across the sky changed each day following a yearly cycle.

In addition to the “fixed stars” of the celestial sphere there were a few objects, which they named planets or wanderers, because, although they appeared very similar to stars, they did not remain in the same position in relation to the celestial sphere, but were constantly moving with respect to the rest of the stars.

One of the first known Greek astronomers, Thales (ca 624 to 547 B.C.), is reported to have measured the angular size of the sun and moon at approximately 1/2 degree. Thales developed basic relations of similar triangles, such as demonstrating that the ratio of two sides is the same for similar triangles, and used this principle to measure relations in the cosmos.

Pythagoras (ca 572-? B.C.) is credited with discovering that the earth is approximately a sphere, and that the “morning star” and the “evening star” were the same, what we, today, call the planet Venus. He is also credited with discovering that the musical intervals are determined by number, and recognizing that the universe was governed by the same laws of harmony as those which govern music.

Since no writings from Pythagoras or his followers have survived, we can only speculate how he discovered that the earth is spherical. He might have concluded this based on conceptualizing the cause of eclipses. The discovery of the cause of eclipses is attributed to Anaxagoras (500-428 B.C.), who hypothesized that the sun was a red hot stone and the moon made of earth, for which he was accused of impiety. He recognized that the source of the moon’s light is the reflection of sunlight. He is credited with discovering that an eclipse of the moon is caused by the earth blocking the sun’s light from shining on the moon, and that an eclipse of the sun is caused by the moon blocking the sun’s light from reaching the earth.

Eclipses of the moon give evidence that the earth is spherical. The shadow that the earth makes on the moon during an eclipse is always circular, regardless of the direction from which the sun is shining. This is only true of a sphere, in the geometry that the Greeks were then developing.

Pythagoras could have discovered that the earth is spherical because he conceptualized the idea of curvature that Eratosthenes later grasped, enabling Eratosthenes to design his famous experiment to measure the circumference of the earth. Finally, Pythagoras could have concluded that this must be true because he recognized that the universe is ordered by geometry, and he thought that "the sphere is the most beautiful of solid figures."
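Eratosthenes' experiment, mentioned above, turned that conception of curvature into a measurement. Using the traditionally reported figures (a 7.2 degree shadow angle at Alexandria at noon when the sun stood directly overhead at Syene, and 5,000 stadia between the two cities), the circumference follows by simple proportion:

```python
# Traditionally reported figures for Eratosthenes' measurement:
shadow_angle_deg = 7.2   # sun's shadow angle at Alexandria at noon
arc_stadia = 5000        # distance from Syene to Alexandria

# 7.2 degrees is 1/50 of a full circle, so the whole circumference
# is 50 times the measured arc: roughly 250,000 stadia.
circumference = arc_stadia * (360 / shadow_angle_deg)
print(round(circumference))
```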

The “morning star” and the “evening star” are the two brightest objects in the night sky after the moon. The two phenomena each go through visible regular cycles, which Pythagoras was able to see were reflections of a subsuming cycle which ordered the two visible cycles.

The "evening star" first appears slightly above the western horizon shortly after the sun sets. Over a period of months, it will appear each evening, when the sun sets, in a slightly higher position above the western horizon, travelling westward each night apparently in tandem with the rotation of the celestial sphere. Eventually it will appear, when the sun sets, at a position approximately 1/2 of a right angle above the western horizon. It will then start to appear, each night, at a slightly lower position above the western horizon, until it does not appear at all in the evening sky. However, shortly thereafter, the morning star becomes visible.

The "morning star" will first appear on the eastern horizon immediately before the sun rises. Each night, it will rise slightly earlier, and travel westward apparently in tandem with the rotation of the celestial sphere. Its height above the eastern horizon, when the sun rises, will increase each night, reaching a maximum of slightly more than 1/2 of a right angle. It will then begin rising later each night, until it rises so late that its appearance is hidden by the daylight. However, shortly after the morning star disappears, the "evening star" will reappear on the western horizon.

Conceptualize how Pythagoras could have approached this problem, without all the knowledge of the solar system that you think that you know. For Pythagoras to have hypothesized that these two stars were the same, required that he approach the universe with the understanding that it was lawfully ordered, and that its lawfulness was comprehensible by human reason. Only then could he discover that the appearances of the two visible phenomena could be lawfully explained as the result of a process which could be comprehended by the mind but not seen by the senses. His hypothesis could have been that the morning and evening stars were the visible evidence of an object, which accompanied the sun in the sun’s apparent daily rotation around the earth, while oscillating back and forth over a period of approximately 20 months, half the time preceding the sun and half the time following it.

Pythagoras’s discovery, that these two visible phenomena in the night sky were the same, may seem trivial. However, his discovery set the stage for Heraclides of Pontus, approximately 200 years later, to overthrow the baby boomer conception of the universe, as we shall see below.

Philolaus (second half of the 5th century B.C.), a member of the Pythagorean school, introduced conceptions of motion to an earth which had previously been thought of as largely static. Philolaus is credited with removing the earth from the center of the universe, and replacing it with a central fire, around which the rest of the universe, including the earth, rotated. This hypothesis was gradually rejected, because the existence of a central fire was never verified.

Plato (ca 427-347 B.C.) developed the scientific method, which was inherent in the work of the Greek scientists who preceded him, and was mastered by all scientists who followed him. In the Republic, Plato described how, when the senses give the mind contrary perceptions, the mind is forced to conceptualize an idea which is intelligible rather than visible. Astronomy compels the soul to look upward, not in a physical sense, but towards the realm of ideas. The study of astronomy required that man discover the true motions of the heavens, rather than merely their motion, as it appeared. “These sparks that paint the sky, since they are decorations on a visible surface, we must regard, to be sure, as the fairest and most exact of material things, but we must recognize that they fall far short of the truth, the movements, namely, of real speed and real slowness in true number and in all true figures both in relation to one another and as vehicles of the things they carry and contain. These can be apprehended only by reason and thought, but not by sight, or do you think otherwise?” Further on Plato adds, “It is by means of problems, then, said I, as in the study of geometry, that we will pursue astronomy too, and we will let be the things in the heavens, if we are to have a part in the true science of astronomy and so convert to right use from uselessness that natural indwelling intelligence of the soul.”

Plato rejected the world view of the oligarchy, who projected their own evil caprice onto God, and asserted that the universe was “controlled by a power that is irrational and blind and by mere chance.” On the contrary, Plato stated that he followed “our predecessors in saying that it (the universe) is governed by reason and a wondrous regulating intelligence.” The creator made a universe which is ordered harmonically, by mind that produces order and arranges each individual thing in the way that achieves what is best for each and what is the universal good. Therefore, man can comprehend the universe through reason.

Plutarch wrote of Plato, “… that Plato in his later years regretted that he had given the earth the middle place in the universe, which was not appropriate.” Plato laid out a research project for his students to find “what are the uniform and ordered movements by the assumption of which the apparent movements of the planets can be accounted for.”

Heraclides of Pontus (ca 388-315 B.C.) was a student of Plato at the Academy in Athens. Born more than 2000 years before the advent of today’s baby boomer culture, he made a crucial discovery which all too few baby boomers today have replicated. He discovered that the entire universe was not rotating around the earth (and around him, standing upon it), as would appear to be the case to one who believes in sense certainty. Rather, the cause of the rest of the universe appearing to revolve around the earth was that the earth is, itself, rotating around its axis. He also discovered that the cause of the apparently erratic motion of Venus and Mercury is that they are revolving around the sun. While Heraclides still believed that the Sun revolved around the Earth, his discovery that Venus and Mercury revolved around the Sun set the stage for the later discovery that the Earth and all the other planets also revolve around the Sun.

Although he wrote numerous dialogues including two discussing astronomy, only a few remarks by commentators have survived the dark age, initiated by the Roman Empire, on how he made this remarkable discovery. We must reconstruct how he could have done it. What he must have done is conceptualize an idea of the nature of the Universe, and comprehend that his idea was more real than sense certainty.

The commentator Aetius reports that Heraclides thought that each of the innumerable stars in the sky was also a world surrounded by an atmosphere and an aether. Others, at the time, thought that the stars were attached to some sort of dome or rings. For example, Aristotle argued that the stars and sun were objects carried on rings around the earth at such a high rate of speed that the friction between the stars and the air caused the sun and stars to give off heat and light.

Obviously, Heraclides could not have arrived at his hypothesis based on his senses. (Even in the last few years, when astronomers have developed experiments to try to determine if other stars have planets orbiting them, they have still not “seen” any planets. Instead, they are designing experiments to measure certain phenomena, such as the distribution of heavy elements in the vicinity of distant stars, and, then, interpreting the results of their experiments as proving their hypothesis.) Heraclides must have thought: “If all the innumerable stars are each a world like our own, and they are at so immense a distance that these worlds appear only as small specks of light in the night sky, why should all of them, and the immense universe in which they are located, orbit around the one world where I happen to be located?” Instead, he recognized that the impression which he received from his senses, that the heavens were rotating around the earth, could be explained by conceptualizing that the earth was, instead, rotating on an axis.

One significant anomaly that led Heraclides to the discovery that Mercury and Venus revolved around the sun, was that the brightness of the planet Venus, and the rate of its change in location from night to night, vary dramatically throughout its cycle. It takes Venus, during the “evening star” part of its cycle, approximately 7 months to rise to its highest position above the western horizon, and only about 2 months for its descent. At the beginning of this cycle, it is dim. It becomes progressively brighter, until near the end of its cycle, it is, by far, the brightest object in the night sky, besides the moon. During the “morning star” part of the cycle, Venus rises rapidly to its highest position above the eastern horizon in about 2 months, and then decreases in position each night very gradually, taking about 7 months until it disappears entirely into the light of the rising sun. During the “morning star” part of its cycle, Venus starts out very bright and becomes progressively dimmer.

Heraclides hypothesized that his observations were a reflection of how an object revolving around the sun would appear to an observer located on the earth, which is rotating on its axis. This is more easily understood from the following diagram: Draw a circle with a radius of 3 inches, to represent the orbit of Venus. The center of this circle represents the sun. Then draw a point to represent the earth, approximately 4 1/8 inches from the center of the circle. (For purposes of the diagram, make this dot below the circle.) Heraclides also placed the planet Mercury rotating around the sun in a much smaller circle. The cycle of Mercury appears similar to that of Venus, to an observer on earth. However, Mercury is usually much fainter than Venus, and reaches a maximum altitude in the sky only around 1/3 that of Venus.

In the diagram, the motion of Venus would be represented as counterclockwise around the circle. (Remember that Kepler’s discovery of elliptical orbits came almost 2000 years later.) The earth is rotating, daily, on its axis (counterclockwise in our diagram). The clearly visible differences in Venus’s brightness are explained by the dramatic differences in its distance from the earth at different places in its orbit.

Draw 2 lines from the earth, that are tangent to the orbit of Venus. At the points of tangency with the circle, the angle between Venus and the sun is greatest, and Venus will appear the highest in the night sky to an observer on earth. Draw a line through the sun and the earth which bisects the orbit of Venus.

Now, conceptualize what an observer standing on the earth, which is rotating counterclockwise, will see. In the left half of the orbit, Venus appears as the “evening star,” and in the right half it appears as the “morning star.” Venus travels a far longer distance in rising to its highest position in the evening sky than in descending, making its ascent take a far longer time than its descent. The opposite is true for Venus’s appearance in the morning sky.
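The construction can also be checked with a little trigonometry. As a sketch (in Python, using the 3-inch orbit and the 4 1/8-inch earth-sun distance from the diagram above), the greatest angle between Venus and the sun, as seen from the earth, occurs at the points of tangency:

```python
import math

# Scaled diagram from the text: Venus's orbit is a circle of radius 3,
# with the earth a point 4 1/8 units from the sun at the center.
venus_orbit_radius = 3.0
earth_sun_distance = 4.125

# At a point of tangency, the sun-Venus-earth angle is a right angle,
# so the greatest elongation of Venus from the sun, seen from earth,
# is arcsin(radius / distance).
max_elongation = math.degrees(math.asin(venus_orbit_radius / earth_sun_distance))
print(max_elongation)  # about 46.7 degrees
```

This agrees with the observation reported earlier, that Venus climbs to slightly more than 1/2 of a right angle (45 degrees) above the horizon.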

Heraclides of Pontus’s discovery advanced Plato’s research project of discovering “what are the uniform and ordered movements by the assumption of which the apparent movements of the planets can be accounted for.” He set the stage for Aristarchus of Samos’s “Copernican Revolution,” which located the sun at the center of the solar system, less than 100 years later. Heraclides of Pontus, by rejecting sense certainty which leads to the baby boomers’ illusion that the Universe revolves around them, and locating his identity in agape, or the passion for discovering the truth in Platonic ideas, secured a place for himself in the simultaneity of Temporal Eternity.

An Exploration of the Relationship Among Number, Space, and Mind

By Larry Hecht

I can conceive in the mind of six objects, whose relationship to one another I wish to investigate. Their character as real objects does not interest me, but only that quality which makes them distinct, thinkable. They are, thus, objects in thought. I will label them with the number designations 1 to 6, though I might equally denote them by letters, or any other symbols which allowed me to keep them distinct in my mind. I am interested in discovering the number of different ways these six distinct objects can be formed into pairs. Their representation by numbers allows a convenient means of investigating this. I first list all the pairs of 1 with the other 5, then all the pairs of 2, and so forth. The result is summarized in the table:


12

13 23

14 24 34

15 25 35 45

16 26 36 46 56

== == == == ==

5 4 3 2 1

Counting the number of pairs in each column and summing them, produces 5 + 4 + 3 + 2 + 1 = 15 pairs.

In another form of representation, I can imagine the six objects as points on a circle, and portray their pairing as the straight lines connecting any two. Drawing them produces a hexagon, and all the straight lines that may be drawn between its points. Counting all the connecting lines, we find 15, the same as the number of pairs above! The mind rejoices in the discovery of the equivalence of the two representations.

Closer examination of the second form of representation, now reveals also a difference with the first. In the first, nothing distinguished one pair from the next, except the symbols used to designate them. In the second, we discover three distinct species of relationships among pairs, each characterized by a different length of connecting line. We have (i) the six lines forming the sides of the hexagon; (ii) the six somewhat longer lines connecting every other vertex (i.e., 13, 24, etc.); (iii) the three longest lines connecting diametrically opposite vertices (14, 25, 36).

Where, before, the mind celebrated the sameness, it now rejoices at the difference of the two forms of representation, and is impelled to look for its cause. We hypothesize that the difference must reside in a property of the spatial mode of representation. We may reflect that, from the manifold ways we might have chosen to arrange our six points in space, we chose to place them on the circumference of a circle, equally spaced. An arbitrary arrangement of six points in a plane would have produced another, less-ordered relationship among the pairs. Another arrangement, a spiral perhaps, would have produced a richer ordering.

Thus, from the positing of relationship among things in the mind, we moved to two modes of representation of that relationship, then to their sameness and difference, then to the causes of that difference. Having hypothesized that the latter is the result of the spatial form of representation, we are next led to explore the variety of such representations.

Of the great variety of possibilities, we choose now to rise above the plane, in order to examine the relationship among six points in three-dimensional space, the familiar backdrop for our visual imagination. Just as the circle aided us in ordering the points in the plane, here its counterpart, the sphere, comes to our aid. Six points, spaced evenly around the surface of a sphere, form the vertices of the Platonic solid known as the octahedron. We can picture two of its six points at the north and south poles of a globe, and four more forming a square inscribed in the circle of the equator. Connecting each point to its nearest neighbor, we find the 12 lines which form the 8 equilateral triangles, which are the octahedron’s faces. But we have not yet connected the six points in all the ways which space allows. Each point can yet be connected to its opposite, forming 3 more lines, which are diameters of the circumscribing sphere. Behold, again, the 15 paired relationships of six objects, now clothed in a new ordering, this time of two species!
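The two species can be verified by direct computation. As a sketch, place the octahedron's six vertices as described, two poles and a square on the equator of a unit sphere, and classify every pair by the length of its connecting line:

```python
from itertools import combinations
from math import dist

# Six points evenly spaced on a sphere: two poles, and a square
# inscribed in the circle of the equator.
points = [(0, 0, 1), (0, 0, -1), (1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]

# Group all 15 pairs by the length of the connecting line.
species = {}
for p, q in combinations(points, 2):
    d = round(dist(p, q), 6)
    species[d] = species.get(d, 0) + 1

# Two species, as in the text: 12 edges (length sqrt 2)
# and 3 diameters (length 2).
print(species)
```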

We may now compare the three modes of representation our mind has invented to investigate these pairings:

1) By number, which produced the series 1 + 2 + 3 + 4 + 5 = 15.

2) In planar space, using the circle, which produced the three species of lines connecting the points of the hexagon.

3) In space, using the sphere, which produced the two species of lines connecting the vertices of the octahedron.

In turn, each of these modes of representation suggests new investigations. For example, with respect to the first (i.e., number), we may inquire into the pairwise combinations of other numbers of things, from which we soon discover that, in general, for “n” things, the number of pairs that can be formed is equal to n(n-1)/2, and we may next inquire, what is the expression for combinations three-wise, four-wise, or n-wise?
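These counts are easy to confirm with a few lines of Python, using the standard library's combinatorics:

```python
from itertools import combinations
from math import comb

objects = [1, 2, 3, 4, 5, 6]

# All pairwise combinations of six things, as in the table above.
pairs = list(combinations(objects, 2))
assert len(pairs) == 15           # 5 + 4 + 3 + 2 + 1
assert len(pairs) == 6 * 5 // 2   # n(n-1)/2 with n = 6

# The same question asked three-wise, four-wise, ..., n-wise is
# answered by the binomial coefficient, n-choose-k.
for k in range(2, 7):
    print(k, comb(6, k))
```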

With respect to the second (the distribution of points on a circle and their combinations), we discover that there exist species beyond the regular polygons, which are known as the star (or Poinsot) polygons. These cannot be generated out of any arbitrary number of points, but only when the number of points, and the order in which we take them, are relatively prime to each other (that is, have no common divisor). The first of the star, or Poinsot, polygons, appears when we take five points on a circle, and connect every second one until the figure closes (that is, 1 to 3, 3 to 5, 5 to 2, 2 to 4, and 4 to 1). The result is the star pentagon, or pentagram, which is conveniently described as 5/2. We can then discover the 7/2 and 7/3, the 8/3, the 9/2 and 9/4, and so forth.
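The closure rule can be sketched in code. Here `star_polygon` is a hypothetical helper, not from the text, that walks every k-th of n points exactly as the 5/2 pentagram is traced above, and reports failure when n and k share a divisor:

```python
from math import gcd

def star_polygon(n, k):
    """Visit every k-th of n points on a circle. The figure closes
    through all n points only when n and k are relatively prime."""
    if gcd(n, k) != 1:
        return None  # the path closes early, skipping some points
    sequence, v = [], 0
    for _ in range(n):
        sequence.append(v + 1)  # label the points 1..n, as in the text
        v = (v + k) % n
    return sequence

# The pentagram 5/2: 1 to 3, 3 to 5, 5 to 2, 2 to 4, and 4 to 1.
print(star_polygon(5, 2))  # [1, 3, 5, 2, 4]
print(star_polygon(8, 3))  # [1, 4, 7, 2, 5, 8, 3, 6]
print(star_polygon(6, 2))  # None: 6 and 2 have a common divisor
```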

With respect to our third mode of representation of the pairwise combination of things (the distribution of points on a sphere), a new ordering principle arises: that a perfectly even distribution is only possible in the cases of 4, 6, 8, 12, and 20 points. When we investigate these, we find species of pairwise combinations called edges, diagonals, diameters, and some others, the greatest variety of species occurring in the 20-point figure.

Now, let us reflect on the higher ordering principle: All of the representations we have given, even the spatial, are creations of mind, products of the arithmetic or visual imagination. Yet, so real do these creations of the mind seem to us, we may be tempted to marvel at them as if they had some existence outside of the mind. (“But Platonic solids are {real}. I can build them!” you say. Perhaps you never have. Anyone who has tried, soon discovers a sometimes gooey massiness where massless points are supposed to be, a very finite thickness to the infinitely thin lines of the edges, and a sometimes wrinkly bulk to the massless surfaces. Even three-dimensional space, the forgiving medium of all our constructions, which seems so certain, so real, is only the ingenious work of the mind, the visual imagination. All are products of the mind.)

But when, in nature, the mind discovers forms just like these we have just created (thought), put there not by us, but by something like to us in mind, yet much vaster, then may we truly marvel, and reflect: What makes nature makes us. What we make in mind, think, is then nature — and may be so in a higher form than what we perceive outside us. (The proof of this truth, well-known to readers of this publication, need not be repeated here.) So in the ordering, number, space, and mind, the mind stands at both ends of the series, as both creator of its own images, and perceiver of others; the one is called imagination, the other, reality. Yet they are both real, as we just showed, and even both imagined, in so far as the perceived external is {known} only through the images of mind.

With such considerations, true science begins.

Leibniz And Dynamics: A Dialogue

by Phil Valenti

{Xena,} a young student.
{Academos,} a middle-aged professional.
{A Philosopher.}

{Xena:} Greetings to you, Philosopher! I’m so glad you came along just now. Academos here is trying to convince me of his latest opinions about science, which have me awfully confused.

{Academos:} That’s right, Sir Philosopher! I’ve been reading about Leibniz, of whom you think so highly. Even you can’t deny that his idea of the “living force,” which is somehow implanted by God into matter, is nothing more than medieval metaphysical nonsense.

{Philosopher:} Well, my young friends, I can’t deny that his idea is metaphysical, but it is far from nonsense. In fact, all of modern technology is based on it.

{Xena:} But Academos gave me an example, which is hard to refute, although I’d like to.

{Philosopher:} Let’s hear this example.

{Academos:} Well, it concerns the issue of how to measure force. Descartes says that force equals the mass of an object multiplied by its velocity, whereas Leibniz constantly insists that force is proportional to the mass of an object multiplied by the SQUARE of the velocity. But this is just a phony dispute over words.

{Philosopher:} Please explain.

{Academos:} For example, it doesn’t matter whether you measure distances in miles or kilometers, as long as you’re consistent. Alexandria, Egypt is twice as far from Rome as it is from Athens, whether you measure it in miles or kilometers. You see, it depends upon what yardstick you use, and that’s just a matter of personal preference. In the same way, some people may choose to measure force by mass times velocity, and others by mass times velocity squared.

{Xena:} I still like to think that there must be some way of discovering what is true and what isn’t.

{Philosopher:} I’m happy to have this discussion with both of you, but I warn you, it will take much concentration. The rewards, however, will be great. Believe me when I say, that the results of this investigation will have the most profound impact on your entire view of the nature of the Universe and of the future of Mankind. It may even transform your conception of the value and purpose of your own life.

{Academos:} There you go, exaggerating again!

{Xena:} It sounds exciting, but I can’t see how that’s possible.

{Philosopher:} Let us start out by assuming that the force of a moving body equals its mass multiplied by its velocity. This means that a body weighing ten pounds and moving at one mile per hour, will have the same force as a body weighing one pound and moving at ten miles per hour, correct?

{Academos:} If you choose to measure force that way, that is the result you will get.

{Philosopher:} This also means that a body weighing 100 pounds and moving at 1/10th of a mile per hour, will have the same force as a body weighing 1/10th of a pound and moving at 100 miles per hour.

{Academos:} That’s right, as long as the ratios are the same.

{Philosopher:} In other words, a body weighing 1000 pounds and moving at 1/100th of a mile per hour, will have the same force as a body weighing 1/100th of a pound and moving at 1000 miles per hour?

{Academos:} Absolutely.

{Philosopher:} Are you sure? Think what would happen if you were hit by those objects.

{Xena:} Wait, I see what you mean. I probably would hardly feel a thousand-pound object moving so slowly, but I can’t imagine what a small object moving so fast would do to me. It probably would have the force of a bullet. Hey, I might be blown away by something moving that fast!
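Xena's point can be tallied directly. As a sketch (in Python, with the pounds and miles-per-hour figures from the Philosopher's series above), compare Descartes' measure, mass times velocity, with Leibniz's, mass times the square of the velocity:

```python
# Bodies from the Philosopher's series: each pair is (pounds, miles per
# hour), chosen so that mass times velocity is the same for all of them.
bodies = [(10, 1), (1, 10), (100, 0.1), (0.1, 100), (1000, 0.01), (0.01, 1000)]

for mass, velocity in bodies:
    print(mass * velocity, mass * velocity ** 2)

# Descartes' measure, mass * velocity, is 10 for every body; Leibniz's
# measure, mass * velocity squared, runs from a negligible value for
# the slow thousand-pound body up to 10,000 for the light, fast,
# bullet-like one -- which is what Xena's senses tell her.
```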

{Philosopher:} This is Leibniz’s point, when he says that the “active or living force appears in impact.” Remember, Leibniz also talks about “calculating the force through the effect produced in using itself up. For I here refer not to any effect,” he says, “but to one produced by a force which completely expends itself and may therefore be called violent.”

{Xena:} I can see how the effect would be violent for sure.

{Academos:} Let’s not jump to conclusions. There are other authorities besides Leibniz. What about Archimedes? In his study of the lever, he showed that a body weighing 10 pounds, which is one foot from a fulcrum, will balance a body weighing one pound, which is ten feet from the fulcrum. In other words, the weight and distance from the fulcrum are reciprocally proportional. Why shouldn’t the same relationship hold for mass and velocity?

{Xena:} Now I’m confused again.

{Philosopher:} Archimedes was certainly a great thinker, and this is a good point, because it will help us see the difference between Mechanics, a perfectly valid science which pertains to the ancient machines like the lever, pulley, inclined plane, wheel and screw, and Leibniz’s new science of Dynamics, which brought about the steam engine, and every technological advance after that as well!

Leibniz refers to Galileo, who, Leibniz says, “paradoxically called the IMPACT OF PERCUSSION an infinitely large force as compared to the simple tendency of gravitational force.”

Think about a pile driver. If you lay the pile driver gently on top of a post stuck in the ground, the weight of the pile driver will push the post a little further into the ground. You might conclude that the total force of the pile driver has been expended. But, if you drop the pile driver down onto the post from a distance above it, it will drive the post further into the ground. In other words, the pile driver has somehow accumulated more force by virtue of its motion. If you drop it again, it will drive the post a little further down. It has once again accumulated more force.

{Xena:} This is awesome.

{Academos:} This is ridiculous! How can motion change an object? You make it sound as if there’s something mysterious inside things which “comes alive” when bodies move, whereas an authority as great as Descartes shows that bodies are just passive things that exist in empty space.

Descartes says that “the nature of matter or of body in its universal aspect, does not consist in its being hard, or heavy, or colored, or one that affects our senses in some other way, but solely in the fact that it is a substance extended in length, breadth and depth.” In other words, “that the nature of body consists … in extension alone.” That’s why you can multiply and divide material bodies just as if they were purely geometrical or mathematical entities.

For example, if a freight car weighing 1000 pounds and moving at 10 miles per hour, collides and couples with another freight car weighing 1000 pounds which is stationary, the two of them will move in the direction of the first freight car, at a velocity of five miles per hour. This is because 1000 X 10 plus 1000 X 0 equals 10,000, which is the total mass X velocity before they couple, and also 2000 X 5 equals 10,000, which is the total mass X velocity after they couple, not counting friction. Haven’t you ever heard of the conservation of momentum?
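Academos's arithmetic can be checked in a few lines. A sketch (in Python, with the weights and speeds as given in the dialogue, friction ignored) tallies both contested measures for the coupling cars:

```python
# Academos's example: a 1000-pound car at 10 mph strikes and couples
# with an identical car at rest.
m1, v1 = 1000, 10
m2, v2 = 1000, 0

# Descartes' measure, mass times velocity (momentum), is conserved:
momentum = m1 * v1 + m2 * v2
v_after = momentum / (m1 + m2)
print(v_after)  # 5.0, as Academos calculates

# Leibniz's measure, mass times velocity squared, is NOT conserved
# when the cars couple; half of it is expended in the impact:
mv2_before = m1 * v1 ** 2 + m2 * v2 ** 2
mv2_after = (m1 + m2) * v_after ** 2
print(mv2_before, mv2_after)  # 100000 50000.0
```

Notice that Leibniz's quantity is halved by the coupling: a hint of the force “which completely expends itself,” which the Philosopher cited earlier.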

{Xena:} I know I never heard of it.

{Philosopher:} Don’t worry, Xena. Let us keep up our discussion, and, with Leibniz’s help, you will see that you are much smarter than Descartes, despite his fame. Let us assume that the freight cars will behave approximately in the way Academos describes. The question is: why don’t both freight cars move at TEN miles per hour after they collide and couple?

{Academos:} I already did the calculation.

{Philosopher:} But I am asking WHY? Or if a small body collides with a large body at rest, why doesn’t the small body carry the large one along with it, without losing any velocity? In other words, is there anything in the concept of body as mere passive “extension,” to account for INERTIA?

{Academos:} But inertia is simply a physical law, which says that a body at rest will tend to stay at rest, unless acted upon by an outside force. Similarly, a body moving at a constant velocity will tend to continue its motion, unless an outside force acts upon it.

{Philosopher:} In other words, Academos, there is a certain “resistance” inherent in material things. It takes an effort, or work, to move something, or to change its velocity, whether it’s a freight car, or anything else. But if objects were just mathematical or purely geometrical entities, they would be purely indifferent to motion, wouldn’t they?

{Xena:} Well, I for one can’t imagine circles, squares and triangles “resisting” geometric constructions, or numbers “resisting” being added!

{Philosopher:} Now you see the paradox!

“If the essence of a body consisted in extension,” Leibniz writes, “this extension alone should suffice to account for all the properties of the body. But that is not the case. We observe in matter a quality which some have called natural inertia, through which the body resists motion in some manner, in such wise that some force must be applied to set it into motion (not even taking into account the weight), so that it is more difficult to budge a large body than a small one. For example, if the body A in motion meets the body B at rest, it is clear that if B were indifferent to motion or rest, it would let itself be pushed by A without resisting it, and without diminishing the speed or changing the direction of A; and after the impact, A would continue its path, and B would accompany it ahead. But it is not so in nature. The larger the body B, the more it will diminish the speed of A, until A is forced to rebound from B, if B is very much larger than A….

“All of this shows that there is in matter something else than the purely Geometrical, that is, than just extension and bare change. And in considering the matter closely, we perceive that we must add to them some higher or metaphysical notion, namely, that of substance, action, and force.”

{Academos:} I’m amazed at how you insist on explaining everyday things with abstract metaphysical constructs!

{Xena:} This is exciting! I want to hear more.

{Philosopher:} Yes, we have a bit more work to accomplish before reaching our objective, which is to demonstrate why Leibniz is right, and Descartes wrong, and that the force, or power, of a body in motion is proportional to the mass and the square of the velocity.

{Academos:} If you can show how such a practical result follows from all of this metaphysical mumbo-jumbo, I will be very surprised.

{Philosopher:} You would have to rethink all of your assumptions about the world, which is a good thing. Let us begin by analyzing what happens to a heavy body in free fall.

{Xena:} What do you mean by “free fall”?

{Academos:} He just means a body falling under the influence of gravity.

{Xena:} Oh. This sounds like the case of the pile driver we discussed before.

{Philosopher:} That’s right. Take, for example, this paperweight, which I place on the ground in front of me. Now, I pick it up, and lift it about four feet above the spot where it was lying. This involved some effort, or work, on my part, which I have, so to speak, transferred to the paperweight, with the result that the paperweight has been raised four feet above the ground.

{Xena:} I follow you so far.

{Philosopher:} Would you agree that raising the paperweight eight feet off the ground, would take twice as much effort, or work, as raising it four feet off the ground? And raising it sixteen feet, would take four times the effort as raising it four feet?

{Xena:} I’ll accept that.

{Academos:} This is just elementary physics.

{Philosopher:} Then let us return to the paperweight raised four feet off the ground. In this position, the paperweight has zero velocity, relative to the Earth, correct?

{Academos:} Obviously.

{Philosopher:} Now, when I let it drop, it seems to pick up speed as it falls, and hits the ground with a thud. The paperweight seems to have its greatest velocity at the instant it hits the ground, correct?

{Academos:} And you claim that the force of the paperweight at that point is proportional to the square of its velocity. I still don’t see it.

{Philosopher:} Let us continue our analysis. Notice that the paperweight is back to the exact same position from which it started. This means that the work that I transferred to the paperweight in lifting it, was completely expended, so to speak, when it fell. The net result is “zero”: no change.

{Xena:} Wait a minute. You’re saying that the work required to lift the paperweight, somehow equals the force of the paperweight when it hits the ground?

{Philosopher:} Exactly!

{Academos:} But this is nothing new. Every physics textbook explains how potential energy is converted to kinetic energy.

{Philosopher:} However, my dear Academos, all of these concepts originate in Leibniz’s work on Dynamics.

{Academos:} They do?

{Philosopher:} This is how Leibniz puts it: “Thus there appears a new twofold distinction of forces; viz., one– which I call inert or inactive force” (or what you call “potential energy,” Academos) “refers primarily to the element of force while the motion itself does not yet exist in it but only the tendency to motion, as, for example, the stone in a sling which tries to fly off in the direction of the tangent, even if it is pulled back by the chain which holds it securely. On the other hand, the other force, which I call living or active force” (which is your “kinetic energy,” Academos) “is the usual one which appears in actual motion. An example of inert force is centrifugal force, or gravitational or centripetal force, or also the force which tries to restore a stretched elastic body to its original state. However, active or living force appears in impact–e.g., the force or impact of a heavy body that has been falling for a certain time, or that of a stretched bow which gradually resumes its earlier position–and such an active force arises from an infinite number of constantly continued influences of inactive forces.”

{Academos:} All right, I’ve heard enough of Leibniz. How about the issue of measuring the force by the square of the velocity?

{Philosopher:} Think back to the paperweight in free fall. Do you agree that the paperweight gradually picks up speed as it falls?

{Academos:} Everyone knows that there is a constant rate of acceleration of a body falling in a gravitational field.

{Philosopher:} Do you mean that a body weighing one pound dropped from a height of 10 feet, will hit the ground at the same time as a body of 100 TONS dropped from 10 feet?

{Academos:} If they are both dropped at the same instant, yes.

{Xena:} I’d like to see some proof of that!

{Academos:} But everyone has heard of the famous story of Galileo, who simultaneously dropped a ball of lead and a ball of feathers from the top of the Leaning Tower of Pisa. They both hit the ground at the same time. Moreover, now we know that the acceleration of a body in the Earth’s gravitational field is equal to 32 feet per second squared.

{Xena:} What does it mean to square a second?

{Academos:} In other words, Xena, the velocity of a falling body increases by 32 feet/second every second that it falls. After one second, the velocity of the body will be 32 feet/second. After two seconds, the velocity will be 64 feet/second, etc. Naturally, this is approximate, since it doesn’t take the resistance of the air into consideration.

{Philosopher:} Thank you, Academos, for you have provided us the knowledge we need to reproduce Leibniz’s discovery. First of all, didn’t we agree that there is a certain inertia which is inherent in things?

{Academos:} I think that we all agreed with my definition of inertia.

{Philosopher:} Then how is it that a falling body constantly INCREASES its velocity? Didn’t we agree that a body will tend to preserve its velocity, unless acted upon by some outside force, and that it takes an effort, or work, to CHANGE its velocity?

{Academos:} Obviously, gravitational force acts upon the body and causes the velocity to increase.

{Philosopher:} Aha! Doesn’t this imply that an accelerating body is accumulating force, so to speak, at a non-linear, geometric rate? Consider that, at each instant, the body is moving with a certain velocity, V. Then, an outside force, like a little “shock,” or an “impetus,” which you call “gravity,” is required simply to overcome INERTIA, that is, to overcome the tendency of the body to remain at the original velocity, V. But, once inertia is overcome, doesn’t it require more force to actually INCREASE the body’s velocity? And isn’t this twofold process, of impetus and increasing velocity, occurring at every instant of the body’s motion?

{Xena:} Wait a minute. This reminds me of what actually happens when two freight cars couple, like the example we were talking about before. When the one in motion collides with the one at rest, it seems to stop for a moment with a violent shake, while making all kinds of noise, as if it were working to overcome inertia first, before they both start to move together down the track. I have seen this happen!

{Academos:} Now I suppose that our Philosopher has a quote from Leibniz that purports to explain the implications of all this?

{Philosopher:} In fact, I do. Leibniz writes “that God created matter in such a way that it contains a certain repugnance to motion, and, in a word, a certain resistance, by which a body opposes motion per se. And so, a body at rest resists every motion, and motion, indeed, resists greater motion, even in the same direction, so that it weakens the force of the thing that impels it. Therefore, since matter resists motion per se by means of a general passive force of resistance, but is put into motion through a special force of action, that is, through the special force of an entelechy, it follows that inertia also resists through the enduring motion of the entelechy, that is, through a perpetual motive force. From this I showed that a unified force is stronger, that is, that the force is twice as great if two degrees of speed are united in a one-pound body as it would be if the two degrees of speed were divided between two one-pound bodies, and thus that the force of a one-pound body moving with two degrees of velocity, is twice as great as the force of two one-pound bodies moving with a single degree of velocity, since, although there is the same amount of velocity in both cases, in the one pound body inertia hinders it only half as much.”

You can see for yourselves, my friends, how this analysis implies that force is proportional to the square of the velocity.

Furthermore, I think that this is what Leibniz has in mind when he writes that “the true quantity of motion over a period of time is ascertained as the integral of the individual impetuses,” or that “the calculation of the motion which extends over a definite time-interval is achieved by the summation of infinitely many impetuses.”

{Academos:} Now I feel as if you’re trying to brainwash us with convoluted metaphysical babbling! My mind dissolves into confusion just listening to you! These kinds of elaborate abstractions may impress Xena here, but they are not going to convince any educated person.

{Philosopher:} Don’t give up so easily, Academos! Just try to work through the idea. In any case, perhaps we should proceed, as Leibniz did, to calculate forces through a different method, a posteriori, namely, by calculating the force through the effect produced in using itself up, and see if we achieve the same result. Let us suppose, Academos, that a body weighing one pound, falls for one second before it hits the ground. According to your calculation, its velocity when it hits the ground will be 32 feet/second, correct?

{Academos:} That is correct.

{Philosopher:} Now, answer this question for me: How far did that body fall?

{Academos:} At last, we’re discussing practical science! I would calculate it thusly. Since it started with a velocity of zero, and ended with a velocity of 32, its average velocity would be 32 plus zero, divided by 2, which is 16 feet/second. Similarly, if we consider the velocity at one-fourth of a second, which would be 8, and at three-fourths of a second, which would be 24, the average, again, is 16, and we can make the same calculation for every instant of the body’s fall. Therefore, the falling body would cover the same distance in one second, as a body travelling for one second at a constant velocity of 16 feet/second. In other words, the distance travelled would be 16 feet.

{Philosopher:} Very good! Now, Academos, what about a body weighing one pound, which falls for two seconds before it hits the ground? According to your calculation, its velocity when it hits the ground will be 64 feet/second. Now, how far did that body fall?

{Academos:} Well, since it started with a velocity of zero, and ended with a velocity of 64, its average velocity would be 64 plus zero, divided by 2, which is 32 feet/second, and so on for every instant of its fall. Therefore, it would cover the same distance as a body having a constant velocity of 32 feet/second, which travels for two seconds. In this case, the distance travelled would be 64 feet.
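Academos’ reckoning can be summarized in a short computational sketch (units in feet and seconds, using the dialogue’s value of 32 feet/second² for the acceleration):

```python
# Free-fall velocity and distance, as Academos computes them:
# velocity grows by 32 feet/second each second, and the distance
# fallen is the average velocity multiplied by the elapsed time.
G = 32  # acceleration, in feet/second^2

def fall(t):
    """Return (final velocity, distance fallen) after t seconds of free fall."""
    v_final = G * t                # 32 ft/s after one second, 64 ft/s after two
    v_average = (0 + v_final) / 2  # average of starting and final velocities
    distance = v_average * t
    return v_final, distance

print(fall(1))  # (32, 16.0): falls 16 feet in one second
print(fall(2))  # (64, 64.0): falls 64 feet in two seconds
```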

{Philosopher:} Excellent, my dear Academos! Now YOU have proven that Leibniz is correct, and Descartes wrong.

{Academos:} What are you talking about?

{Philosopher:} The body dropping 64 feet has twice the velocity as the body dropping 16 feet, correct?

{Academos:} Yes, but what does that prove?

{Xena:} I see it! It takes four times the work to raise a body 64 feet, than to raise it 16 feet, but only twice the velocity!

{Philosopher:} Xena has the idea. We showed that the force of a body in free fall from a certain height, is equal to the work required to raise it to that height. This means that the force of the body dropping 64 feet, is FOUR TIMES the force of that body dropping 16 feet. But the velocity of the body dropping 64 feet, is only TWICE the velocity of the body dropping 16 feet. This means that force is proportional to the SQUARE of the velocity, because twice the velocity leads to four times the force.
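The same conclusion can be checked numerically: the impact velocity from a given height is the square root of twice the acceleration times the height, so quadrupling the height only doubles the velocity. A sketch, again with the dialogue’s 32 feet/second²:

```python
import math

G = 32  # feet/second^2

def impact_velocity(height):
    # From distance = (average velocity) x time and velocity = G x time,
    # the impact velocity after falling a given height is sqrt(2 * G * height).
    return math.sqrt(2 * G * height)

v16 = impact_velocity(16)  # 32.0 feet/second
v64 = impact_velocity(64)  # 64.0 feet/second
# Four times the height (i.e., the work, the "force" in Leibniz's sense),
# but only twice the velocity: force is as the square of the velocity.
print(64 / 16, v64 / v16)  # 4.0 2.0
```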

{Academos:} You mean if Descartes were right, the velocity of the body dropping 64 feet, would have to be FOUR TIMES the velocity of the body dropping 16 feet?

{Philosopher:} Exactly.

{Academos:} I’m stunned! Now there seems to be no way out of this conclusion! Philosopher, I can see why they compare you to a sting ray! And to think that I did the calculation myself!

{Philosopher:} But now, my friends, you can see more clearly how a tiny body weighing 1/100th of a pound and moving at 1000 miles per hour, if harnessed by technology, can accomplish ONE HUNDRED THOUSAND TIMES as much useful work for Mankind as can a huge body weighing 1000 pounds and moving at only 1/100th of a mile per hour– even though the famous Monsieur Descartes says they are equivalent!
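Descartes’ measure of force (mass times velocity) and Leibniz’s (mass times velocity squared) can be compared directly for these two bodies; this sketch simply carries out the arithmetic:

```python
# Two bodies: a tiny one at high speed, a huge one at low speed.
m_small, v_small = 1 / 100, 1000   # pounds, miles/hour
m_large, v_large = 1000, 1 / 100

# Descartes' measure, mass x velocity: the two come out equal.
descartes_small = m_small * v_small
descartes_large = m_large * v_large

# Leibniz's measure, mass x velocity squared: the tiny fast body
# carries one hundred thousand times the force of the huge slow one.
leibniz_ratio = (m_small * v_small**2) / (m_large * v_large**2)
print(descartes_small, descartes_large, leibniz_ratio)
```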

Think, now, about all the tiny droplets of water in high-pressure steam, which move at such high velocity. Think about the force of explosions of gunpowder or gasoline, with such small particles moving so swiftly. Then, consider the microcosm, beyond our senses, containing those “worlds within worlds” spoken of by Leibniz, that infinitesimal realm of “non-linearity in the small.” Think about those infinitesimal worlds in motion, at speeds almost beyond our imagination. All of these wonders await our discovery. They are there, waiting to be harnessed by Man.

As Leibniz puts it, so poetically, “there always remain in the depths of things, slumbering parts which must yet be awakened and become greater and better, and, in a word, attain a better culture. And hence, PROGRESS NEVER COMES TO AN END.”

What a joyful thought! What great reason for optimism!

Let us live our lives accordingly.

Demonstrate the Principle that Measurement is Hypothesis

By Larry Hecht

Just as Cusa’s principle of {weighing}, the balance, was the basis for progress in chemistry, leading to the 1869 discovery of the Periodic Table, Gauss’s 1832 development of the magnetometer is the basis of all later discoveries in physics. This is so in a twofold sense. First, because the determination of the absolute intensity of the Earth’s magnetic force, was the prerequisite for Weber’s experimental proof and advancement of Ampere’s nuclear-atomic hypothesis. Second, because the magnetometer embodies the principle of measurement-hypothesis.

Contrary to the claims of ignorant empiricists and positivists, no fundamental experimental truth is ever arrived at by simple observation. Hypothesis itself is the subject of all measurement in science. Building a magnetometer to measure the Earth’s magnetic strength, will establish this truth for us.

Everywhere we go on the surface of the earth, a compass needle, or suspended magnet, will oscillate to the east and west of true north, at a frequency which will vary from place to place, and over time, and will also depend on the magnetic strength of the needle. If we could take the same magnetic needle simultaneously to all places on Earth, and if we could be sure its magnetic strength never weakened over time, we could have an absolute standard, against which we could measure the variations in the Earth’s magnetic force. As this is not possible, a means must be devised by which a measurement with different apparatus, anywhere on the Earth’s surface, at any time, can be reduced to an absolute standard. The apparatus and method for doing this, was invented by Carl Friedrich Gauss in 1832. A brief description of its construction and operation, is found on pp. 35-37 of the Fall, 1996, issue of {21st Century Science}.

It is first to be noted, that the “object” to be measured, the horizontal intensity of the Earth’s magnetic strength, which Gauss designates as T, has no tangible substance. It is a “mere” idea, and, furthermore, one whose cause is, to this day, not fully understood. There is no way to walk up to it and throw a tape-measure around its waist, or to place it on a scale, or next to a measuring stick.

It is an hypothesis. But, none the less, it is measured. How? How else, but by other hypotheses! A rich lattice-work of them must be employed, and each examined carefully for its soundness and accuracy. Once this is done, it is possible to use them, even when we know them to be flawed in some ways, for we take this into consideration, too. These include assumptions about what magnetism is, its distribution in an iron bar, the motion of a pendulum, and also, even more fundamental assumptions about seemingly obvious things, such as measurement of length, time, and mass.

When we have thought all this through, we are ready to carry out our experiment to measure hypothesis with hypothesis. First, we assess the mechanical properties of our iron bar, by suspending it, and counting its frequency of oscillation, when tapped. It is suspended by a silk thread, which has a certain resistance to twisting, which must also be measured. Then, we magnetize the bar, by stroking it with an already-magnetized object. We don’t know the magnetic strength to which we bring the bar by this process, so we shall designate this magnetic strength by the unknown M. Now, we suspend the magnetized bar by the silk thread again, and count the oscillations caused by its interaction with the Earth’s magnetic force. Our theorem-lattice tells us, that the frequency of these oscillations is proportional to the product of the two unknowns, M x T. By another sequence of measurement-hypothesis (described in the {21st Century}), we can determine the proportion M / T. And, by dividing one measurement by the other, we obtain the square of T, whose square root yields the value of T itself.
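The closing arithmetic can be sketched in a few lines. The two numbers below are purely hypothetical stand-ins for the measured quantities, not Gauss’s data; the point is only how T (and, as a bonus, M) is extracted from the product M x T and the ratio M / T:

```python
import math

# Hypothetical results of the two experiments, in consistent absolute units:
product_MT = 3.5         # assumed result of the oscillation experiment: M x T
ratio_M_over_T = 1400.0  # assumed result of the deflection experiment: M / T

# (M x T) / (M / T) = T^2, so both unknowns are recovered:
T = math.sqrt(product_MT / ratio_M_over_T)  # earth's horizontal intensity
M = math.sqrt(product_MT * ratio_M_over_T)  # bar's magnetic moment
print(T, M)
```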

The apparatus for making this determination is remarkably simple, by today’s standards of machining tolerances, and could certainly be built in a home workshop, or perhaps found in the discard-pile of a university laboratory.

From 1836 to 1843, the {Magnetische Verein} (Magnetic Union), founded and run by Gauss and Weber, promoted the construction and installation of such instruments at magnetic observatories around the world. By 1840, on the basis of data so obtained, Gauss was able to calculate the probable location of the North and South magnetic Poles of the Earth. The former had been found a few years earlier by the Englishman, Captain Ross. The latter was unknown. It was a great triumph for Gauss’s theory, and for the U.S.-German republican collaboration, when, in 1841, Captain Charles Wilkes, USN, located the magnetic South Pole at a point in the Antarctic Ocean, within a few degrees of the latitude and longitude Gauss had predicted. The six-ship Wilkes Expedition (1838-42), funded by the U.S. Congress, and directed by the American Philosophical Society under Alexander Dallas Bache, outdid the rival British and French expeditions in other ways as well, and the whole affair surprised and stunned the imperial powers, probably in somewhat the same way as the Soviet Sputnik achievement of the late 1950’s, shocked the U.S.A.

So let us have a new Magnetic Union, this time dedicated to the establishment of a pedagogical principle: that measurement is hypothesis. A point which, once established firmly in the mind of the experimenter, is of truly revolutionary power.

Why Kepler Thought Well of Copernicus

by Robert A. Robinson

The achievement of Nicholas Copernicus, whom Johannes Kepler so much admired, is often misrepresented in astronomy textbooks, as the “discovery of the heliocentric system.”

Copernicus never claimed to be the originator of the heliocentric system, that is, the system of placing the sun, rather than the earth, at the center of the universe. Copernicus himself stated that the idea of placing the sun at the center of the universe originated, as reported by Archimedes in his work, “The Sand Reckoner,” with the ancient Greek astronomer Aristarchus of Samos, around 200 B.C.! Nor was the rediscovery of the ancient heliocentric hypothesis, in itself, what Kepler appreciated in Copernicus’ work. Indeed, Kepler almost rejected Copernicus’ theory, because it assumed the stars to be infinitely distant from the sun, and therefore that the sun is the absolute center of the knowable universe. If, as Kepler instead maintained, the distance to the stars were immense, yet implicitly measurable, why might not the sun be but one of those luminous bodies we call stars, and therefore, not be the center of the whole universe, but only the center of local planetary motion?

What, then, did Kepler find so beautifully significant in the work of Copernicus? In a word, it was Copernicus’ discovery, based on the work of Aristarchus, of a wonderful harmony, or congruence of measurement, within the domain of the solar system itself. This is a subject of elementary, yet profound, importance for the future development of science, as Kepler clearly realized. It is a crime that it has been so obscured in so many astronomy textbooks, apart from those “textbooks,” like “Mysterium Cosmographicum,” and “The Epitome of Copernican Astronomy,” written by Johannes Kepler himself.

Copernicus’ Breakthrough

Copernicus discovered that the heliocentric hypothesis supplies the “One” to unite a “Many.” As Copernicus himself wrote in his posthumously published masterwork, “The Revolutions of the Heavenly Spheres,” “Therefore, in this (heliocentric-RAR) ordering, we find that the world has a wonderful commensurability and that there is a sure bond of harmony for the movement and magnitude of the orbital circles such as cannot be found in any other way.” (pp. 528-529 in Great Books, Vol. 16).

Let us divide Copernicus’ breakthrough into three parts.

First, look at Mercury and Venus, the planets which never, in the evening or morning sky, deviate far from the sun. They never appear “opposite” the sun in the sky, like all the other stars and planets do periodically (including even the moon, every time it is full.)

Venus is best to look at for our purposes of demonstration. Venus never deviates much more than about 45 degrees from the sun. If you track Venus each day when in its full glory as an evening or morning “star,” you will notice that it moves out to a position of maximum divergence, or elongation, from the sun, hovers around there for a few days, then starts back toward the sun, with what appears as variably accelerating motion. (See last week’s pedagogical.)

Consider how the heliocentric hypothesis provides us with a simple method to measure the distance of Venus from the sun, using the earth-sun distance as an “astronomical unit” of measurement. Construct a circle on a piece of paper, with center S, and place a point E some distance outside of and below the circle’s circumference. Draw the two straight lines from E, that are tangent to the circle’s circumference, meeting the circle at V on the left and V’ on the right. Now, in coherence with the heliocentric hypothesis, let S represent the sun, E the earth, the circle Venus’ approximate orbit, and V and V’ Venus at points of successive tangency (assuming counterclockwise motion) between its orbit and lines of sight from the earth.

Note that a right angle is subtended at those points, V and V’, between, respectively, lines VE and V’E, and lines VS and V’S. Now, just focus on the left side of the diagram, and the triangle VSE, that has a right angle subtended by VS and VE. V not only is the vertex of the right angle in VSE. V also forms a point on the left hand side of the circle (V’ being the corresponding one on the right hand side) of maximum divergence, or elongation, of Venus from the sun as seen from the earth, that is, divergence between a line of sight linking the earth and the sun, ES, and a line of sight from earth to Venus, VE. That angle of maximum divergence, as seen from earth, between the line of sight from the earth to the sun, and the line of sight from the earth to Venus, is measurable, with a sextant, to within a certain (not to be ignored) variability, at around 45 degrees. Consequently, we know one angle of triangle EVS is a right angle, another is around 45 degrees, and therefore, assuming the space between the earth, the sun, and Venus is as flat as our piece of paper, we know all three angles of the triangle, EVS, formed by the earth, sun, and Venus. Copernicus takes the average earth-sun distance (determined by the earth’s orbit around the sun, which Copernicus assumed was approximately circular) as a unit, or one. It therefore becomes an easy matter, applying basic Pythagorean rules of measurement to right triangle EVS, to measure Venus’ distance from the sun, as a proportion of earth’s distance from the sun. That proportion turns out to be about one to the square root of two.
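In the right triangle EVS, the Venus-sun distance VS is simply the sine of the maximum elongation angle, taking the earth-sun distance ES as one. A short computational sketch (the 46.3-degree figure is an illustrative modern value of Venus’ maximum elongation, not from the text):

```python
import math

# Venus-sun distance, in earth-sun units, from the maximum elongation
# angle measured at E in the right triangle EVS (right angle at V).
def inner_planet_distance(max_elongation_degrees):
    return math.sin(math.radians(max_elongation_degrees))

print(inner_planet_distance(45))    # ~0.707, one over the square root of two
print(inner_planet_distance(46.3))  # ~0.723, near Venus' modern distance in AU
```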

Aristarchus, in ancient times, had used a remarkably similar method of measurement to determine (to a first degree of approximation) the relative distances and sizes of the earth, moon, and sun, as the crux of his argument for heliocentricity. Aristarchus had based his measurement on the idea that, when there is a “half moon,” the moon forms the vertex of the right angle of a right triangle formed by the earth, the sun, and the moon. Indeed, if, in repeating Copernicus’ measurement, you happen, just for fun, to view Venus through a telescope when it is at its maximum extension from the sun, Venus will appear to you as a tiny “half moon”! Of course, Copernicus, in the 1500s, had no such telescope with which to view Venus. Ironically enough, therefore, Copernicus did not suffer the observational drawback of Aristarchus, who had had the nasty job of attempting to determine just when a “half moon” occurs, to complete his measurement, and so ended up being off by an order of magnitude in his earth-sun distance. Though Copernicus had no immediate means of re-evaluating Aristarchus’ earth-sun distance, he did determine a remarkably accurate relative measurement for the earth-sun and Venus-sun distances. He was able to do that because his measurement in no way depended on the observation of a “half Venus,” because a “half Venus,” as viewed from earth, occurs axiomatically, as one may see from our diagram, at its point of maximum divergence from the sun. The latter, as we have said, is a magnitude not so difficult to measure with a sextant.

The point of maximum observable divergence of Venus from the sun will vary, depending on what we now know to be the (very slight) eccentricity of Venus’ elliptical orbit, as well as on the (somewhat larger) eccentricity of the earth’s own elliptical orbit, on the inclination of the plane of Venus’ orbit to the plane of the earth’s orbit, and on other parameters of orbital motion. These parameters must all be integrated in order to accurately predict (as Gauss finally did in 1801) future positions of planets, asteroids, and comets, but their future comprehension would not have been possible without Copernicus first finding a method to know, at least approximately, the Venus-sun distance, and the other planetary distances from the sun, relative to the earth-sun distance.

Similar methods can be used to determine the distance of Mercury from the sun, relative to the earth-sun distance, if you are ever fortunate enough to see that fleet-footed rascal!

The Outer Planets

Copernicus used precisely the same “Aristarchian” method, only “in reverse,” which we have just identified for determining the Venus-sun distance (and Mercury-sun distance), to determine the Mars-sun, Jupiter-sun, and Saturn-sun distances, all relative, just as in the case of the inner planets, to an earth-sun “astronomical unit” of distance.

The phenomenon of “retrograde motion,” that is, when planets appear to move backward in their orbits for a time, as seen against the starry background, had long been a stumbling block for astronomers. The ancient Greek astronomer and student of Plato, Eudoxus, had, for example, built an ingenious, geocentric, model of spherical rotation on top of spherical rotation to account for it.

Then later, around 100 A.D., someone less ingenious than Eudoxus, but also a believer in a geocentric universe, the famous Ptolemy, developed a different theory to account for retrograde motion of the outer planets. Basing himself on the Aristotelean dictum that “nature abhors a vacuum,” Ptolemy made planetary distances just big enough to “fit” retrograde motion, in the form of “epicycles,” in between planetary orbits.

Copernicus, on the other hand, thinking in terms of the heliocentric (sun centered) hypothesis, saw in retrograde motion, a reflection of the earth’s own motion, and therefore, saw a way to measure the distances from the sun (and the earth, for that matter) to the outer planets. To illustrate his method, we shall employ exactly the same diagram as before, only with different labelling!

Construct a circle on a piece of paper, again with center S, and place once again a point, but this time labelled C, under and at some distance from the circumference of the circle. From C draw two tangents to the circle, as before, but now label the point of tangency of the straight line from C to the left side of the circle E, and the one to the right side of the circle E’. Extend EC and E’C past their point of intersection at C, to point towards a general region outside our diagram which we shall label the “starry background.” These labels reflect just two differences between this diagram and the previous one. First, the circle now represents the orbit of the earth, not Venus, as in the previous diagram, and thus E and E’ represent successive positions, moving counterclockwise, of the earth in its orbit around the sun. Second, the outer planet, at C, which represents in this diagram any of the outer planets (such as Mars, Jupiter, or Saturn) which Copernicus might observe moving night to night against the “starry background,” takes the place of the earth at E in the previous diagram. The sun remains in the same position in both diagrams.

Notice, first, that the progress of the earth from E to E’ (and discounting for a moment the outer planet C’s own “real” — but slower — counterclockwise motion along its own orbital path) accounts for the apparent clockwise, or retrograde, motion of the outer planet C against the starry background, as the line of sight from the earth to the outer planet moves from EC to E’C. Second, note that the maximum angular extension of that apparent retrograde motion of C exactly reflects the progress of the earth’s motion around the sun, from E to E’, in the diagram.

Divide that angular span of retrograde motion in half, drawing a line from the sun, S, to the outer planet, C, and forming the triangles SEC and SE’C on the left and right hand sides of the diagram. Focus on the triangle SEC. We already know the angular (degree) measure of the total span of retrograde motion against the starry background, and we know angle SCE will be half that. But we also know that the earth’s own orbital motion must begin to create the appearance of retrograde motion in the outer planet when the earth, at E, forms a right angle between a line of sight from the earth to the sun, and a line of sight from the earth to the outer planet C. So we know angle SEC must be a right angle. Therefore, we can determine all 3 angles of right triangle SEC in the diagram. Taking the earth-sun distance as “one,” the distance of the outer planet becomes measurable by the Pythagorean relationships in triangle SEC.
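Under the diagram’s simplified geometry (right angle at E, half the retrograde span at C, and SE taken as one astronomical unit), the distance SC follows from the sine relation in right triangle SEC. This is a sketch of that relation only, not a model adequate to real planetary observations:

```python
import math

# In right triangle SEC: right angle at E, half the retrograde span at C,
# and SE = 1 astronomical unit, so SC = 1 / sin(half the span).
def outer_planet_distance(retrograde_span_degrees):
    half_span = math.radians(retrograde_span_degrees / 2)
    return 1 / math.sin(half_span)

print(outer_planet_distance(60))   # a half-span of 30 degrees gives 2 AU
print(outer_planet_distance(180))  # a half-span of 90 degrees gives 1 AU
```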

Well, almost. We still have to account for the outer planet’s own motion.

Integrating Modular Motions

Leaving aside for a moment Kepler’s future discovery of elliptical motion, it is not difficult to integrate the earth’s and the outer planet’s approximate respective contributions to the apparent retrograde motion of the outer planet as seen from the earth.

Suppose a planet moves ahead 10 degrees, then back 5 degrees, then ahead 10 degrees, then back 5 degrees, etc., forming cycles with two parts, plus 10 and minus 5. Divide the difference between plus 10 degrees and minus 5 degrees in half, which turns out to be 7 and one half degrees. The contribution of the earth’s own motion in each half of the total cycle of motion is thus plus or minus 7 and one half degrees, to the total apparent motion of the outer planet as seen from earth. That leaves 2 and one half degrees for the outer planet’s own contribution to its apparent motion. In the first half of the cycle, (from E’ to E in our second diagram) add 7 and one half (the earth’s motion in a contrary direction to the outer planet’s motion), and 2 and one half (the outer planet’s own real motion), to give 10 degrees of apparent forward motion, as seen from the earth, to the outer planet. In the second half of the cycle, (when the earth is moving from E to E’ in our diagram) subtract 7 and one half degrees of earth motion from the outer planet’s 2 and one half degrees of motion, to give a net minus 5 degrees for the outer planet’s motion as seen from earth. (In modular arithmetic terminology, 2 and one half is the residue of real motion left to the outer planet, congruent with its apparent motions of plus 10 and minus 5, in terms of a modulus supplied by the earth’s periodic motion of 7 and one half.) This is only approximate, because more time is spent by the earth, going counterclockwise from E’ to E, than from E to E’, but it is sufficient to determine relative distances, and thus to make further refinement of measurement of motion possible.
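The little decomposition above can be checked in a few lines:

```python
# The apparent cycle: +10 degrees forward, then -5 degrees retrograde.
forward, backward = 10, -5

# Half the total swing is the earth's contribution, alternating in sign;
# what remains is the outer planet's own steady forward motion.
earth_part = (forward - backward) / 2  # 7.5 degrees
planet_part = forward - earth_part     # 2.5 degrees

print(earth_part, planet_part)                             # 7.5 2.5
print(planet_part + earth_part, planet_part - earth_part)  # 10.0 -5.0
```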

Now, finally, open Kepler’s “Mysterium Cosmographicum” to its opening pages. What do you see? You see Kepler’s direct juxtaposition of Ptolemy’s non-measurement or non-congruence with Copernicus’ beautifully congruent measurements of planetary distances!

From Copernicus, Kepler saw, first, that the planets in reality move more slowly the further they are from the sun, in some regular progression related to their distances. (By what law?) Second, their spacing is somehow harmonic, which Kepler found to be congruent with the Platonic Solids. Third, there are all sorts of “little anomalies,” as we have noted, like variations in the apparent distances of the planets from the sun as seen from earth, leading to later triumphs like Kepler’s determination of the elliptical form of planetary orbits, and Carl Gauss’ 1801 determination of the orbit of the first asteroid to be discovered, Ceres.

Copernicus’ genius “in the small” determined an entire “curvature” for future astronomy. Enough to inspire Kepler, or you or me!

How Aristarchus Measured the Universe

By Robert Trout

“But Aristarchus of Samos brought out a book consisting of certain hypotheses, in which the premises lead to the conclusion that the universe is many times greater than that now so called. His hypotheses are that the fixed stars and the sun remain motionless, that the earth revolves about the sun in the circumference of a circle, the sun lying in the middle of the orbit, and that the sphere of the fixed stars, situated about the center of the sun is so great that the circle in which he supposes the earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface.”

Archimedes, in “Sand-reckoner”

In 1543, Copernicus published his book, “On the Revolutions of the Heavenly Spheres,” which located the sun and not the earth at the center of the solar system. He was severely censured for this by Aristotelian circles. In his handwritten and autographed copy, Copernicus had written “… Philolaus perceived the mobility of the earth, which also some say was the opinion of Aristarchus of Samos….” This sentence was not included in the printed version. Perhaps he judged its publication simply too risky, because it would have exposed how the system of Ptolemy, which had been imposed on the West for 1300 years, had been a fraud.

Aristarchus (ca. 310-230 B.C.) is credited by his younger contemporary Archimedes, and many other commentators, with establishing that the earth rotates around the sun and not the other way around. In his one still extant treatise, “On the Sizes and Distances of the Sun and Moon,” he demonstrated his method for calculating the sizes and distances of the sun and moon, which dramatically changed man’s estimate of the size of the solar system. We will examine how Aristarchus’s discoveries are a culmination of a project launched by Plato to explain the apparently erratic cycles of the universe with “uniform and ordered motions.” Aristarchus created a paradox, and then resolved it by creating a Platonic idea which explained the most basic cycles of the earth’s relationship with the sun, moon, and the Universe.

Aristarchus begins his treatise by demonstrating that an observer on the earth can determine when the sun, moon, and earth are situated so that their relationship to each other is described by a right triangle, and can measure the angles of that triangle. From this, he was able to calculate an estimate of the ratio of the distance from the earth to the sun, relative to the distance from the earth to the moon.

Aristarchus demonstrated that the phases of the moon are caused by the sun shining on the moon from different directions. Aristarchus knew the basic principles of eclipses from Anaxagoras. He also knew, probably from the previous work of Pythagoras and Anaxagoras, that the moon is a sphere. Only the side of the moon facing the sun is illuminated, and only that side is visible to an observer on earth. When the moon appears to be near the sun in the sky, the sun is actually further away and behind the moon. The sunlight then falls on the back side of the moon, so an observer on earth can see only a small sliver of the moon, if anything. When the sun and the moon are opposite each other in the heavens, the sun lights up the side of the sphere which faces the observer on earth. The moon then appears full. You can demonstrate this by shining a flashlight on a small ball from different directions, and observing how the lit portion of the ball appears.

He demonstrated that when the moon appeared to be exactly half full, the angle from the sun to the moon to the earth, was then very close to a right angle. (This requires that the distance between the sun and moon is large, relative to the diameter of the sun, as it is.)

Make a drawing to demonstrate what Aristarchus was doing. (Figure 1) Near the edge of a sheet of paper, draw a small circle, representing the moon, and label its center, M. Below M, draw a second circle, representing the earth. Draw a line through the centers of the two circles. Label the point where this line intersects the top of the circle representing the earth, E, which represents the position of the observer on the earth. Finally, draw a line, which is perpendicular to the first line, through the point M. The sun will be represented by a third circle, whose center is labelled S. If the center of the sun, S, is on this perpendicular line, Angle SME will be 90 degrees, and the moon will appear, to the observer on the earth at point E, to be almost exactly half full.

Now place the sun, with its center, S, on the perpendicular line, at different distances from the moon, and measure the angle which an observer, standing at point E, will see between the sun and the moon (angle SEM). When the sun is close to the moon, Angle SEM will be small. As you move the sun further away from the moon, Angle SEM will become larger.

Aristarchus, in studying the relationship between the actual sun, moon, and earth, calculated that when the moon was half full, indicating that Angle SME was approximately a right angle, Angle SEM was 87 degrees. He had now determined two of the angles of the triangle SEM. From this, he was able to use the geometry which the Greeks had discovered by that time to calculate SE/ME, or the ratio of (the distance from the earth to the sun)/(the distance from the earth to the moon).

There are a number of ways to do this. Neither trigonometry nor pocket calculators had yet been developed. Instead, Aristarchus solved the problem by using the knowledge which the Greeks had developed of relationships between triangles, to come up with an approximation for the ratio of the two distances. It can be seen from the diagram that, as Angle SEM approaches a right angle, the ratio of the sides (SE/ME) becomes large. He calculated that the distance from the earth to the sun was approximately 18 to 20 times the distance from the earth to the moon.

Since Aristarchus estimated that the sun was 18-20 times further away than the moon, and the two appear to be around the same size in the sky, as is demonstrated by eclipses of the sun, he used the principles of similar triangles, which had been developed by Thales, to conclude that the diameter of the sun was approximately 18-20 times the diameter of the moon.

The actual value of the ratio SE/ME is around 389, which is around 20 times greater than Aristarchus’ estimate of approximately 18-20. The actual value for the angle SEM is around 89.9 degrees. The error in his measurement of this angle probably resulted from the difficulty of determining when the moon appears exactly half full, and not his inability to accurately measure angles. Try reproducing his experiment. You will see for yourself that the angle SEM is clearly near 90 degrees, although it is difficult to determine this angle with greater precision than Aristarchus did.
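In modern trigonometric notation, which Aristarchus of course did not have, the half-moon construction reduces to SE/ME = 1/cos(angle SEM). The following short Python sketch (the function name is my own) checks both his 87-degree figure and a value near the true angle:

```python
import math

# Half-moon method: when angle SME = 90 degrees, triangle SEM is a right
# triangle, so the ratio of sun distance to moon distance is
# SE/ME = 1 / cos(angle SEM).

def sun_moon_distance_ratio(angle_sem_degrees):
    """Ratio of the earth-sun distance to the earth-moon distance."""
    return 1.0 / math.cos(math.radians(angle_sem_degrees))

ratio_aristarchus = sun_moon_distance_ratio(87.0)   # his measured angle
ratio_modern = sun_moon_distance_ratio(89.85)       # near the true angle

print(round(ratio_aristarchus, 1))  # about 19, inside his 18-20 estimate
print(round(ratio_modern))          # about 382, near the actual value of ~389
```

Note how steeply the ratio grows as the angle nears 90 degrees: a measurement error of under 3 degrees accounts for the factor-of-20 discrepancy described above.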

Aristarchus also measured the distance to the moon, using the moon’s diameter as his unit of measurement. An observer on the earth will see the moon as a small disk in the sky. Thales is reported to have measured the angular size of the moon at 1/2 degree, approximately 300 years earlier.

To demonstrate Aristarchus’ method, draw a circle approximately 2 inches in diameter representing the earth, and label a point on this circle, P, representing the position of an observer on the earth. (Figure 2) Approximately 5-6 inches away from point P, draw a circle of around 1/2 inch in diameter, representing the moon. (This is not a scale model.) Now draw lines from the observer’s point, P, tangent to the two sides of the circle representing the moon. Draw a line connecting the two points of tangency on opposite sides of the circle representing the moon. (Aristarchus demonstrated that the length of this line is very close to the diameter of the circle representing the moon.) Since the two lines from the observer at point P to the two points of tangency are the same length, this creates a long, slender isosceles triangle.

Using principles of geometry, which were then known to the Greeks, Aristarchus was able to calculate the ratio of the length of one of the long sides of the triangle, to the length of the short side. This ratio represented the earth-moon distance, measured in moon diameters. Aristarchus calculated that the distance to the moon was approximately 26 times the diameter of the moon. Strangely, Aristarchus used an angular displacement for the sun and moon of 2 degrees in this treatise. This is 4 times larger than the 1/2 degree which was reported by Archimedes to be known to Aristarchus. By using 2 degrees as the size of the angle at P, he decreased his estimate of the distance to the moon to approximately 1/4 of what he would have calculated, had he used 1/2 degree.
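In modern terms, the slender isosceles triangle amounts to this: a disk of angular size theta lies at a distance of roughly 1/(2 tan(theta/2)) of its own diameters. A Python sketch (the function name is mine) comparing the 2-degree figure used in the treatise with the 1/2-degree figure reported by Archimedes:

```python
import math

# Distance to a disk, measured in its own diameters, from its apparent
# angular size: the long side of the isosceles triangle is very nearly
# 1 / (2 * tan(theta / 2)) diameters.

def distance_in_diameters(angular_size_degrees):
    half_angle = math.radians(angular_size_degrees / 2.0)
    return 1.0 / (2.0 * math.tan(half_angle))

d_two_degrees = distance_in_diameters(2.0)   # the value used in the treatise
d_half_degree = distance_in_diameters(0.5)   # the value Archimedes reports

print(round(d_two_degrees, 1))   # ~28.6 moon diameters (Aristarchus: ~26)
print(round(d_half_degree, 1))   # ~114.6 moon diameters
```

As the text observes, the 2-degree figure yields almost exactly one quarter of the distance that the 1/2-degree figure would have given.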

Aristarchus then combined these discoveries with another piece of experimental evidence, which was known from studying eclipses of the moon, to calculate the ratio of the size of the earth to that of the sun and moon. The Greeks had estimated, by measuring the amount of time that it took the moon to travel across the earth’s shadow during an eclipse of the moon, that the shadow, which the earth made on the moon, was approximately twice the size of the moon. Knowing this, he was able to use the geometrical relationships which existed between the sun, earth, and moon, during an eclipse of the moon, to calculate the size of the earth relative to the sun and moon.

Make a drawing which is a simplified version of Aristarchus’ calculations in his treatise. (Figure 3) This drawing will not be to the correct scale for the sizes and distances in either the actual solar system or Aristarchus’ estimates of them. To do that, you would need to know the answers that you are seeking. (It would also require either a very long piece of paper, or else drawing the sun, moon, and earth so small that you would not be able to see the geometrical relations clearly.) Near the right edge of the paper, draw a circle, with a radius of about 1 1/2 inches, representing the sun. Label its center, S. Draw a circle to represent the earth, with its center, E, 5 inches to the left of S. Make the radius of this circle about 1/2 inch. Draw a line through the points S and E, and extend it 3 to 4 inches to the left of E. Next, draw a line, tangent to the top of the two circles, and a line, tangent to the bottom of the two circles. These lines will intersect at a point which is located on the first line, if your drawing is reasonably accurate. Label that point, A. Also, label, as U, the point where the upper tangent line intersects the circle representing the sun, and label, as F, the point where the upper tangent line intersects the circle representing the earth. Draw lines connecting U to S and F to E.

The earth’s shadow forms a cone on the side away from the sun. When the moon travels through this cone, there is an eclipse of the moon. The two lines that are tangent to the sun and earth represent the boundary of this shadow cone. The Greeks had estimated that, at the distance where the moon traveled through this shadow cone, the radius of the shadow cone was twice the radius of the moon.

Finally, draw a point M on the line AS approximately 1 1/2 inches to the left of E. This will represent the center of the moon, during an eclipse. At point, M, draw a line, perpendicular to the line AS, so that it intersects the line UA. Label the point where it intersects the line UA, N. The line NM represents the radius of the shadow cone of the earth at the distance where the moon travels during an eclipse of the moon.

From these relationships, Aristarchus constructed 3 similar triangles, AMN, AFE, and AUS. His goal was to find the ratio of the earth’s radius to the radii of the sun and moon. He had the following at his disposal. He knew that the radius of the earth’s shadow on the moon, NM, is approximately twice the radius of the moon. If MN is twice the radius of the moon, and the radius of the sun is 18-20 times the radius of the moon, this establishes a ratio of US/MN of between 9 and 10. He had estimated the distance from the sun to the earth, ES, at 18-20 times the distance from the earth to the moon, EM. From these ratios, he was able to use the relations between the three similar triangles to calculate an estimate of the ratio of the earth’s radius to the radii of the sun and moon.

I will not go through Aristarchus’ calculations, which are available in his treatise. They are made even more complicated by the relatively undeveloped state of Greek mathematics. Archimedes’ estimation of the value of pi, with only the mathematics available at the time, was once described as the equivalent of running the hurdles while wearing weights. Aristarchus calculated that the diameter of the sun was approximately 6.8 times the diameter of the earth. He also calculated that the diameter of the moon was around 0.36 times the diameter of the earth. He had now established values for the diameters of the sun and moon, using the diameter of the earth as his measuring stick.

Aristarchus now had values for the distances to the sun and moon using the earth’s diameter as his measuring stick. Since he had calculated the distance from the earth to the moon, EM, at 26 moon diameters, and the distance from the earth to the sun, ES, at 18 to 20 times EM, he arrived at values of approximately 9.5 earth diameters for EM and 180 earth diameters for ES.
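Aristarchus’ own reasoning proceeded by chains of geometric inequalities; the following Python sketch reaches his approximate figures by modern algebra instead, using the constant taper of the shadow cone across the three similar triangles. The variable names, and the mid-range value n = 19 for the sun/moon distance ratio, are my own choices:

```python
# Eclipse geometry, worked in moon radii. The shadow cone tapers at a
# constant rate, so (R_sun - R_earth)/ES = (R_earth - R_shadow)/EM.
# With ES = n * EM, and the sun and moon of equal angular size (so the
# sun's radius is n moon radii), solving the taper equation gives
# R_earth = (R_sun + n * R_shadow) / (n + 1).

n = 19.0         # mid-range of Aristarchus' 18-20 estimate for ES/EM
r_sun = n        # sun's radius in moon radii (equal angular sizes)
r_shadow = 2.0   # radius of the earth's shadow at the moon, in moon radii

r_earth = (r_sun + n * r_shadow) / (n + 1.0)

moon_over_earth = 1.0 / r_earth    # moon diameter / earth diameter
sun_over_earth = r_sun / r_earth   # sun diameter / earth diameter

print(round(moon_over_earth, 2))   # ~0.35 (Aristarchus: ~0.36)
print(round(sun_over_earth, 1))    # ~6.7  (Aristarchus: ~6.8)

# Distances in earth diameters, from EM = 26 moon diameters:
em_earth_diameters = 26.0 * moon_over_earth
es_earth_diameters = n * em_earth_diameters
print(round(em_earth_diameters, 1))  # ~9.1  (his figure: ~9.5)
print(round(es_earth_diameters))     # ~173  (his figure: ~180)
```

The small differences from Aristarchus’ published figures reflect the particular fractions he carried through his inequalities, not any disagreement in the geometry.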

Eratosthenes of Cyrene (ca 276-194 B.C.) was, like Archimedes, from the generation after Aristarchus. He made a remarkably accurate estimate of the diameter of the earth. With Aristarchus’s and, then, Eratosthenes’s discovery, the Greeks had demonstrated that the sun and moon could be measured with the same units that were used to measure the earth.

Aristarchus’s estimate of, especially, the distance to the sun dramatically expanded the size of the universe over previous conceptions. For example, Anaximander, a younger friend of Thales, had estimated the distance to the moon at 19 earth diameters and the distance to the sun at 28 earth diameters. Anaximander had also thought that the planets and fixed stars were closer than the sun and moon.

Although there was a large error in Aristarchus’s measurements, his discoveries were a crucial experiment which demonstrated that the sun, which appeared to sense certainty to be only a relatively small disk in the sky, was dramatically larger than the earth. Aristarchus had created a paradox which he could resolve only by overthrowing the earth centered conception of the universe that was accepted at the time, as we shall see in Part II.

How Aristarchus Measured the Universe, Part II

By Robert Trout

In Part I, we saw that Aristarchus discovered how to use the geometrical relationship that exists between the sun, earth, and moon when the moon is half full, to calculate the relative distances from the earth to the sun and moon. He then calculated the ratio of the distance to the moon relative to the diameter of the moon, using his knowledge of the relationships contained in an isosceles triangle. Finally, he used the geometrical relationships that exist between the sun, earth, and moon during an eclipse of the moon, to calculate the size of the sun and moon relative to the earth, showing that the sun was dramatically larger than the earth or moon. He was then able to measure the sizes of the sun and moon, and their distances from the earth, using the earth’s diameter as his measuring stick.

Aristarchus’ methods of measurement were crude. However, his experiment demonstrated that the sun was not just larger than the earth and the moon, but very much larger. The ratio of the volumes of two spheres is the cube of the ratio of their diameters. He calculated that the volume of the sun was around 6860 times larger than the volume of the moon. Likewise, he calculated that the volume of the sun was around 315 times larger than the volume of the earth.
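Since volume scales as the cube of the diameter, these figures follow directly from his diameter ratios. A quick check in Python, using the mid-range value of 19 for the sun/moon ratio:

```python
# Volumes of spheres scale as the cube of the diameter.
sun_over_moon_diameter = 19.0   # mid-range of his 18-20 estimate
sun_over_earth_diameter = 6.8   # his calculated value

print(round(sun_over_moon_diameter ** 3))   # 6859, his "around 6860"
print(round(sun_over_earth_diameter ** 3))  # 314, his "around 315"
```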

Aristarchus did not state his hypothesis that the earth orbits the sun in his treatise, “On the Sizes and Distances of the Sun and Moon.” The treatise, that Archimedes quoted, in which Aristarchus stated this hypothesis, has been lost. Its disappearance is undoubtedly a result of the suppression of true science, that occurred with the imposition of the hoax of Ptolemy. However, we can reconstruct how Aristarchus’s discovery of the relative sizes of the sun, earth and moon represented a paradox, which could only be resolved by developing a new hypothesis of the universe, with the earth orbiting the sun and not the other way around.

To sense certainty, the earth appears to be 99% of what one can see. All around, one can see the earth. The sun and moon appear to be very small discs in the sky, with an angular diameter of 1/2 degree, that rise, cross the sky, and set. The stars are only tiny specks in the night sky. The earth feels rather solid and unmoved, with earthquakes a rare exception.

A good example of a cosmology which was based simply on assertions of sense certainty is the work of Ptolemy. Approximately 400 years after Aristarchus’s work (and after the Roman Empire had driven the region into a dark age), Ptolemy established a fraudulent cosmology, with the earth again at the center of the universe. Ptolemy stated that “the fact that the earth occupies the middle place in the universe, and that all weights move towards it, is made so patent by the observed phenomena themselves.” His “proof” that the earth was the center of the universe consisted in arguing that all objects “which have weight” fall towards the earth. It is a dramatic demonstration of the dark age into which Europe had descended, that the ideology of Ptolemy, who argued that the nature of the Universe could be determined by what he “saw” objects doing within a few hundred feet of the earth, was enforced on Europe as the only acceptable view of cosmology, or the study of the heavens, for 1300 years. Nicholas of Cusa, with his “On Learned Ignorance,” again overthrew the earth centered system of cosmology.

Aristarchus’ experiments had rudely overthrown sense certainty. Aristarchus had constructed, in his mind, geometrical relationships which allowed him to determine the relationship between the sun, moon, and earth which his eyes were unable to see. Aristarchus had demonstrated through reason that the sun, that small disc in the sky which appeared to circle the earth, was far larger than the earth, estimating its volume to be around 315 times that of the earth. The idea that the earth was the center of the universe, while the sun, which orbited it, dwarfed the earth in size, created a paradox. Aristarchus, whose method was based on rejecting the fetters of sense certainty, was able to construct a new hypothesis, which placed the larger body in the center. His new hypothesis also solved a host of other paradoxes, which were inherent in an earth centered conception of the Universe.

Aristarchus had, as a precedent, the discovery of Heraclides of Pontus, who had demonstrated, less than 100 years earlier, that two planets orbited the sun. His hypothesis that Mercury and Venus orbited the sun explained their seemingly highly erratic movements with a “uniform and ordered movement” of circular motion. Aristarchus’ hypothesis, that the earth was also revolving around the sun, would resolve the paradox which he had created, and explain what appeared to be otherwise erratic and arbitrary motions of the outer planets and stars, and the cycles of the earth, with “uniform and ordered movements.”

This was, of course, a tremendous leap at that time. Plutarch described the reaction to Aristarchus’s hypothesis: “Only do not, my good fellow, enter an action against me for impiety in the style of Cleanthes, who thought it was the duty of the Greeks to indict Aristarchus of Samos on the charge of impiety for putting in motion the Hearth of the Universe, this being the effect of his attempt to save the phenomena by supposing the heaven to remain at rest, and the earth to revolve in an oblique circle, while it rotates, at the same time, about its own axis.”

Make a diagram to represent Aristarchus’s hypothesis. (Figure 4) Draw 2 concentric circles. The center of the two circles represents the sun. The inner circle represents the path of the earth’s orbit. (That the orbits of the planets are actually ellipses was, of course, a discovery by Kepler, not to be examined here.) According to the view of the universe prevalent at that time, the stars were located on a celestial sphere. (Many of the leading thinkers of the time, undoubtedly including Aristarchus, rejected the idea that the stars were located on a physical sphere.) The outer circle represents the celestial sphere. You can draw little constellations around it, if you like.

Now, let’s look at how Aristarchus’s hypothesis corresponds to Plato’s research project of finding “what are the uniform and ordered movements by the assumption of which the apparent movements of the planets can be accounted for.” His hypothesis of the motion of the earth combined 2 rotations. First, the rotation of the earth about its axis would explain the daily cycle of the sun and the nightly rotation of the fixed stars. Second, the yearly orbit of the earth around the sun would explain the apparent shifting of the “fixed stars” from night to night. Each night, the position of the “fixed stars” appears to have rotated slightly less than one degree, making almost one complete rotation in a year. (Although not exactly a complete rotation: Hipparchus discovered, a century later, that these two cycles were subsumed by another, much longer cycle.) It should be clear from studying this diagram that, as the earth travels around its yearly orbit about the sun, the view that an observer located on the earth will have of the celestial sphere will shift from night to night.

The path of the sun across the sky also varies on a yearly cycle, corresponding to the yearly cycle of the seasons. Aristarchus’s hypothesis in which the earth revolves around the sun “in an oblique circle,” or slanting or sloping circle, can, potentially, explain this.

Conceptualize Aristarchus’s hypothesis that the earth’s orbit is an “oblique circle.” The earth is travelling, yearly, around the sun in a circular orbit, while rotating, daily, on its axis. Imagine the northern direction of the earth’s axis as pointing “straight up,” while imagining that the plane of the earth’s orbit around the sun is sloped at an angle relative to the earth’s axis. (Although globes are constructed to have the north pole pointing upward, there is no basis in the physical universe for this assumption. Since you are assuming that the north pole is pointed upward, the rotation of the earth on its axis and its orbit around the sun are both counterclockwise.) During the part of the earth’s orbit around the sun that you are imagining to be the “higher part,” the southern hemisphere is more directly exposed to the sun. During the part of the earth’s orbit that you are imagining to be the “lower part,” the northern hemisphere is more directly exposed to the sun. This explains the yearly variation in the path that the sun makes across the sky each day, as seen by an observer on the earth, and the resulting variations in the seasons.

Eratosthenes also studied the question of the tilt of the earth’s axis relative to the plane of the earth’s orbit around the sun, approximately 25 to 50 years later. Eratosthenes measured the angle between the earth’s axis and the plane of its orbit around the sun with a remarkable accuracy, being in error by only around 0.1 degree from the currently accepted value.

Finally, Heraclides of Pontus had demonstrated that the apparently erratic motion of the inner planets, Mercury and Venus, could be explained as circular motion around the sun. Aristarchus’s hypothesis would allow one to comprehend that the apparently erratic motion of the outer planets also corresponded closely to rotation around the sun, as seen from an earth which is itself also rotating around the sun.

As a result of Aristarchus’s hypothesis, numerous movements in the universe, including a number of the most important conditions of human existence, such as the ordering of the seasons, which according to sense certainty simply “are that way,” could be determined as the consequence of “the uniform and ordered movements by the assumption of which the apparent movements of the planets can be accounted for.”

In closing, let’s examine the last part of Archimedes’ statement that Aristarchus hypothesized “that the sphere of the fixed stars, situated about the center of the sun is so great that the circle in which he supposes the earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface.” To restate this, Archimedes attributed to Aristarchus the idea that the ratio (circle of the earth’s orbit)/(distance from the earth to the fixed stars) equals the ratio (the center of the sphere)/(its surface). Archimedes criticized Aristarchus for this, stating: “Now it is obvious that this is impossible; for since the centre of a sphere has no magnitude, it cannot be conceived to bear any ratio to the surface of the sphere.” However, Aristarchus was probably developing a metaphor to illustrate that the distance to the stars was so large that the circle of the earth’s orbit around the sun would appear to be a point in comparison.

Since this work, which Archimedes quoted, has been lost, we must reconstruct how Aristarchus could have estimated the distance to the fixed stars. The geometrical methods which he used to measure the distance to the moon and sun would work well to solve this problem.

Aristarchus had now established the hypothesis that the earth orbits the sun, following a circular orbit with a radius estimated at 180 times the earth’s diameter. Aristarchus had already expanded the size of the universe tremendously over the prevailing view.

Go back to your drawing of the two concentric circles representing the earth’s orbit and the celestial sphere. As the earth travels around its orbit during the year, it will be closer to the part of the celestial sphere which is directly overhead at midnight. A star will be closer to the earth when it is directly overhead at midnight, and approximately the diameter of the earth’s orbit around the sun further away when it is very near the sun in the sky.

The angle between two adjacent stars should be larger when the earth is close to them. Aristarchus, who was very adept at this type of measurement, could have calculated the size of the celestial sphere, based on measuring the change in the size of the angle between two stars, when the earth is near them, versus when the earth is on the opposite side of its orbit from them.

Draw two dots, representing stars, on the celestial sphere, maybe 10 degrees apart. Pick a point on the circle of the earth’s orbit which is near these two dots, and draw 2 lines from this point to the two stars. The angle between these two lines represents the angle that an observer on the earth would see between the two stars. Next, pick a point on the opposite side of the earth’s orbit, and draw 2 lines from this point to the two stars. (Don’t pick a point directly across, or the sun will block the view of the stars.) The angle between those two lines will be less than the first angle that you constructed, at the other point on the earth’s orbit, which was nearer to the two stars.

The size of the celestial sphere’s diameter relative to the size of the diameter of the earth’s orbit, can be estimated by comparing the difference in the size of these two angles. If the celestial sphere is only a little larger than the circle of the earth’s orbit, the difference between the 2 angles will be large. If the celestial sphere is much larger than the circle of the earth’s orbit, then the 2 angles will be closer to the same size.
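As a sketch of this comparison (the 10-degree separation and the sphere sizes below are illustrative values of my own, not Aristarchus’), one can compute the apparent angle between the two stars from both sides of the orbit:

```python
import math

# Two stars a fixed arc apart on a celestial sphere of radius R, viewed by
# an observer displaced along the axis midway between them (positive offset
# = toward the stars, negative = the far side of the orbit).

def apparent_separation(orbit_offset, sphere_radius, sep_degrees=10.0):
    """Angle in degrees between the two stars as seen by the observer."""
    half = math.radians(sep_degrees / 2.0)
    x = sphere_radius * math.cos(half) - orbit_offset
    y = sphere_radius * math.sin(half)
    return 2.0 * math.degrees(math.atan2(y, x))

near = apparent_separation(+1.0, 5.0)   # small sphere, near side: ~12.5 deg
far = apparent_separation(-1.0, 5.0)    # small sphere, far side:  ~8.3 deg

near_huge = apparent_separation(+1.0, 1e6)   # vast sphere, near side
far_huge = apparent_separation(-1.0, 1e6)    # vast sphere, far side
# For a vast sphere, both views give ~10 degrees: no measurable difference,
# which is just what Aristarchus would have found.
```

When the sphere is only a few times larger than the orbit, the two angles differ by several degrees, easily measurable; when it is vastly larger, the difference vanishes below any instrument of the period.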

However, Aristarchus must have found that the angle between two adjacent stars did not appear to change, regardless of whether the earth was near to or far from those stars. He could only conclude that the distance to the stars must be so large that it could not be measured using this method. There was probably no other method that could have measured the distance to the stars with the means available at that time. This would have led him to try to communicate his discovery of the immense size of the universe, by comparing the distance to the stars, relative to the diameter of the earth’s orbit, to a “ratio between a sphere and a point.” Aristarchus had expanded man’s conception of the Universe beyond the wildest imagination of the majority of men, who still had their minds stuck down in the mud of sense certainty!

On Archytus

By Bob Robinson

“If I were at the outside, say at the heavens of the fixed stars, could I stretch my hand or my stick outward or not? To suppose that I could not is absurd; and if I can stretch it out, that which is outside must be either body or space (it makes no difference which it is, as we shall see). We may then get to the outside of that again, and so on, asking at our arrival at each new limit the same question; and if there is always a new place to which the stick may be held out, this clearly involves extension without limit. If now what so extends is body, the proposition is proved; but even if it is space, then, since space is that in which body is or can be, and in the case of eternal things we must treat that which potentially is as being, it follows equally that there must be body and space without limit.” Archytus, circa 400-365 B.C.

I have invented a pedagogical (that is, teachable) model of the ancient Greek Archytus’ geometric solution to the classical problem of finding two mean proportionals between two extreme magnitudes, often also called the problem of “the duplication or doubling of the cube.” (If unfamiliar with the term “mean proportionals,” think, for a first approximation, in terms of numbers. What are the two mean proportionals between 1 and 8, 1 and 27, and 1 and 125?) Archytus’ solution requires no numbers; instead, the smaller of the two extreme magnitudes is any chord of a circle, while the larger of the two extremes is the same circle’s diameter. His solution is three dimensional, involving the intersection in a point of three “solid” surfaces: a cylinder, a torus (doughnut shape), and a cone. But it is not just three dimensional, and to call it “three dimensional” is, as we shall see, somewhat misleading for a truthful understanding of the problem, its solution, and Plato’s friend Archytus himself. To situate Archytus’ solution to the problem of the doubling of the cube, consider the following account by Theon of Smyrna, quoting Eratosthenes (circa 200 B.C.), who developed his own solution to that problem.
“Eratosthenes in his work entitled Platonicus relates that, when the god proclaimed to the Delians by the oracle that, if they would get rid of a plague, they should construct an altar double of the existing one, their craftsmen fell into great perplexity in their efforts to discover how a solid could be made double of a similar solid; they therefore went to ask Plato about it, and he replied that the oracle meant, not that the god wanted an altar of double the size, but that he wished, in setting them the task, to shame the Greeks for their neglect of mathematics and their contempt for geometry.”

Archytus’ intention was not merely to move from the two dimensional realm (doubling the square) to the three dimensional realm (doubling the cube), but to move outside the realm of three abstract dimensions into the realm of physical geometry. Three abstract dimensions appear in Archytus’ construction as a sphere. The determination of two mean proportionals lies outside that sphere, on the surface of a torus and a cylinder which surround the sphere in Archytus’ construction. One must, in effect, poke a stick through the three dimensional sphere of Archytus’ construction, to see where the stick intersects the cylinder and torus. Compare this with Archytus’ astronomical notion of “stretching my hand or my stick outwards” at “the heavens of the fixed stars.” Consider the analogous case, of how it is necessary to move into three dimensions to double the square. The diagonal of a square, which forms the side of a square with double the area of the original one, is formed by folding the original square in half, that is, by rotation outside the plane of the original square!
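For the numerical question posed above: two mean proportionals x and y between extremes a and b satisfy a:x = x:y = y:b, which gives x cubed = a squared times b. Doubling the cube is simply the case a = 1, b = 2. A short Python sketch (the function name is mine):

```python
# Two mean proportionals x, y between extremes a and b satisfy
# a:x = x:y = y:b, so x = a**(2/3) * b**(1/3) and y = a**(1/3) * b**(2/3).

def two_mean_proportionals(a, b):
    x = a ** (2 / 3) * b ** (1 / 3)
    y = a ** (1 / 3) * b ** (2 / 3)
    return x, y

x, y = two_mean_proportionals(1, 8)
print(round(x, 6), round(y, 6))   # 2.0 and 4.0, since 1:2 = 2:4 = 4:8

# Doubling the cube: the first mean proportional between 1 and 2 is the
# side of a cube with exactly twice the volume of the unit cube.
side, _ = two_mean_proportionals(1, 2)
print(round(side ** 3, 6))        # 2.0
```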

The Construction

Situate a transparent sphere (such as a Lenart Sphere) in a cardboard box, such that the sphere sits snugly in a hole in the top of the box and the equator of the sphere is level with the top of the box. The hole in the top of the box should be put tangent to one side (edge) of the box. Trace with a spherical compass (also included with the Lenart Sphere) a number of concentric circles of latitude on the sphere which, unlike the circles of latitude on the earth, are all made perpendicular to the equator of the sphere. That is, the compass forming the circles is pivoted on a point on the equator of the transparent sphere. Be sure to use erasable ink of a certain color, say red. Next, with the point of tangency of the hole in the top of the box and the side of the box as center, trace with an ordinary plane compass a circle on the top of the box, such that the circle has twice the diameter of the hole in the top of the box. The diameter of the sphere and hole form the radius of the new circle, so that the new circle will be just tangent to the hole at a point directly opposite where the hole is tangent to the side of the box. This new circle, which we can only represent a portion of in our model, forms the outer perimeter of the torus in Archytus’ construction. The “hole” in the middle of the torus is of null diameter, and is represented by the center around which we pivoted the new circle, that is, the point of tangency of the hole in the top of the box and the side (edge) of the box. Obtain from an arts and crafts store some wooden hoops with about the same diameter as the sphere, and some clear acetate. Cut some of the wooden hoops in a continuously growing array of arc lengths, up to and including a couple that are semicircles. Wrap the acetate around one of the semicircular hoops, such that it forms a cylinder, and with the sphere put into its hole in the top of the box, snuggle the acetate half cylinder in between the sphere and the hole.
Next, position the smaller hoop lengths in order of ascending length, starting from near the point of tangency of the larger circle traced on the top of the box and the hole in the top of the box. The base of each hoop length should be on the larger circle, and its top should rest on the acetate wall of the cylinder. The effect should be that, were the hoops to continue past the barrier of the acetate cylinder, they would all converge on the point of tangency of the hole and the side of the box, or, what is the same thing, the center of the circle on the circumference of which their bases rest. Silicone should be used to secure their positions on the top of the box and the acetate wall of the cylinder. When the construction is completed, the wooden hoops should approximate a partial (quarter) torus, which if completed would wrap around the sphere and cylinder. Trace with a dry erase marker the line of intersection, so formed, between the cylinder and the torus. When the sphere is placed in the hole in the box, maneuver it so that the center of the concentric latitude circles traced on its surface coincides with the point of tangency of the hole in the top of the box and the side of the box. Obtain a laser pointer. Shine it through the sphere, from the point where the hole in the top of the box is tangent to the side of the box, so that it hits the side of the acetate cylinder. Play around with it a bit. Trace along the curved line of intersection of the torus and cylinder, so that the laser crosses in succession all the lines of latitude traced on the sphere. Next, perform the inverse operation, by tracing along each circle of latitude to see where it intersects the line of intersection of the torus and cylinder traced on the clear acetate cylinder. As you do so, note how the laser’s motion along each circle of latitude forms a distinct cone. The integral of the bases of all such possible cones is nothing but the sphere itself! 
That is, the sphere is the only truthful representation of all possible cones formed by rotating all possible chords of a circle emanating from a single point on that circle’s circumference.

Archytas’ Creation of Two Mean Proportionals

Archytas wishes to find the two mean proportionals between any chord of the circle, contained in the great circle of the equator of the sphere in our construction, and the diameter of that same circle, which is also the diameter of the sphere, the cylinder, and the “tube” of the torus in Archytas’ construction. Place the chord with its origin at the point of tangency of the hole and the side of the box. (From now on we will simply call this point, which by our construction is on the equator of the sphere and also is the center “hole of null dimension” of the torus, and is the point from which we will direct the laser pointer, the origin.) For any such chord, there is implicitly a circle of latitude on the sphere, which is everywhere equidistant from the origin, forming, as we have indicated, a distinct cone. Next, trace with the laser along one such circle of latitude on the sphere, until the laser beam crosses the line of intersection of the torus and the cylinder. There will now be three points of light associated with the laser beam, one at the origin, one on the surface of the sphere, and one on the curved line of intersection of the torus and the cylinder. Looking down from directly on top of the model, imagine a plane including the three aforesaid points of light, and perpendicular to the top of the box, slicing through the sphere, the cylinder, and the torus. That plane, being perpendicular to both the top of the box and to the equator of the sphere, will form a circle cut exactly in half by the equator of the sphere, just as the circles of latitude were cut in half by the plane of the equator. This cut is most efficiently represented by a semicircle drawn on a clear overlay (provided with the Lenart Sphere), and placed on the sphere so that it coincides with the origin and the point on the sphere through which the laser shines. It should be of a different color than the (red) concentric circles of latitude on the sphere, say green. 
Now draw a diagram of a cross section of the portion of Archytas’ model cut by the perpendicular plane. The diagram will not be precise, but simply representative of certain geometric relations that exist in the three dimensional model. Construct two tangent circles, one inside the other. At the point of tangency, mark O. The smaller circle represents the green circle in our three dimensional model, while the larger one represents a cross section of the torus. Through O, draw a straight line that forms a diameter OB of the smaller circle, and OD of the larger circle. This equatorial line represents a cross section of the plane forming the top of the cardboard box in our three dimensional model. In the diagram, through B, draw a line perpendicular to OB, that forms a tangent to the smaller (green) circle, and intersects the larger circle at C. Draw OC and CD. Where OC intersects the smaller circle, mark A. Triangle OAB, being inscribed in the smaller semicircle of the diagram, will have a right angle at A. In triangle OBC, diameter OB and tangent BC will be perpendicular to each other, so there will be a right angle at B. Triangle OCD, being inscribed in the larger semicircle of the diagram, will have a right angle at C. All three triangles share angle DOC (identical to angle BOA). So, by similar right triangles OAB, OBC, and OCD, the continued proportions OA/OB=OB/OC=OC/OD are produced. OA represents the distance from O to the surface of the sphere at A, and is equivalent in length to the original chord forming the lesser extremity of Archytas’ demonstration. OD of the diagram represents the cross sectional diameter of the torus “tube” in the three dimensional model. By the way the model was constructed, the diameter of the tube of the torus is the same as the diameter of the sphere. And the diameter of the sphere is the same as the diameter of the equatorial great circle of the sphere, which formed the larger extreme in Archytas’ demonstration. 
Straight line BC, perpendicular to OB in the diagram, is a cross section of the cylinder, which is perpendicular to the equatorial plane in Archytas’ three dimensional model. Thus, by reference to the schematic diagram, it is easily seen that Archytas’ model really does create two mean proportionals OB and OC between two extremes OA and OD. Well, who needs the three dimensional model if a two dimensional drawing gives us the required solution? But the diagram is only schematic, and does not give us true values for OB and OC. Only the three dimensional model does. (Indeed, even Eratosthenes’ later mechanical method for finding two mean proportionals, which is two dimensional, only minimizes the problems of approximation inherent in any two dimensional model that uses only straight lines.) The more interesting question is: is Archytas’ model simply three dimensional, or of a higher power? Consider the fact, that in Archytas’ demonstration, we must go outside, not only the circle, but the three dimensional sphere, to find the two mean proportionals OB and OC. On the surface of the sphere itself, this is reflected in the different orientations of the red and the green circles.
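The continued proportion derived from the diagram can be checked arithmetically. From OA/OB = OB/OC = OC/OD it follows that the cube of the first mean equals the square of the lesser extreme times the greater one; taking the extremes as 1 and 2 then yields the side of the doubled cube. A short Python sketch (the names and numerical values are illustrative, not Archytas’ own):

```python
# Archytas' continued proportion a/x = x/y = y/d gives, arithmetically,
# x**3 = a*a*d and y**3 = a*d*d. With d = 2*a, x is the side of the
# doubled cube. The values below are illustrative.
def mean_proportionals(a, d):
    """Two mean proportionals (x, y) between the extremes a and d."""
    x = (a * a * d) ** (1.0 / 3.0)
    y = (a * d * d) ** (1.0 / 3.0)
    return x, y

a, d = 1.0, 2.0
x, y = mean_proportionals(a, d)
assert abs(a / x - x / y) < 1e-12     # the continued proportion holds
assert abs(x / y - y / d) < 1e-12
print(round(x, 4))  # 1.2599, the cube root of 2: the doubled cube's side
```

The arithmetic only confirms what the model constructs; as the text stresses, the true values are produced by the three dimensional model, not by any schematic drawing.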

Predictions Are Always Wrong

by Phil Rubinstein

Of late in dealing with the outlook of the population, we often have to face the impact of linearity most directly with respect to the sense of time. This occurs in the form of “can you predict…?”, “can you tell us when…?”, or “your prediction was wrong, it didn’t happen”, etc. All of this reflects a view of space-time that is one of a linear extension, with space as a filled-up box and in effect no concept of time, since time can exist only as change, action, becoming. It is precisely this linearity that simplifies language to dumbness, reduces music to noise and makes all science and geometry of the post-Kepler period incomprehensible.

It is no accident that one can find a nearly completely modern expression of this in Aristotle’s “On Interpretation” — he says first in section III “… verbs by themselves, then, are nouns, and they stand for or signify something…. [T]hey indicate nothing themselves but imply a copulation or synthesis, which we can hardly conceive of apart from the things thus combined.” And then, “we call propositions those only that have truth or falsity in them.” Were this only the ancient outlook of a discredited Aristotle, no problem would ensue, but in fact this is the root of the thoroughly modern outlook of Russell, Frege, Carnap, etc. In fact, On Interpretation could be a handbook for information theory. While Aristotle, like his modern followers, recognized that the thoroughly deterministic outlook that follows from this contradicts the actual choices made by human beings, his resolution is to introduce mere contingency, a kind of randomness, which is allowed to the empty future.

The reality is best grasped by taking an approach rooted in physical economic planning. Begin with a moment in history defined by a resource level determined by an existing science and technology. A horizon can be hypothesized at which the social cost of resources usable at that level of technology would lead to a critical degeneration, or inability to maintain capital or labor. That crisis defines the necessary present deployment of advanced technologies to create new scientific breakthroughs. This, however, requires greater density of use of resources, labor, and so on; thus, the horizon is changed. Take the example of fossil fuel, nuclear fission, then nuclear fusion. Our present resources may be stretched to extend the horizon, but that merely worsens the crisis. If we choose to accelerate the use of fission energy, the demand on existing resources USES UP those resources more rapidly. If we plan to achieve fusion, the rate of usage increases.

Thus, the future is changed for present action at each step. The problem then becomes to determine the actual activity required in the present. As this occurs, the relationship between now and the future is constantly altering: that also alters all other activity, allocation of resources, labor, and so on. In this way, the present is itself an incommensurable. It is a perfect example of non-constantly changing action. It is this subjectivity that lies at the root of understanding physical space-time as constantly changing activity of a multiply connected type, in the sense of Leibniz.

From this standpoint, one can see not only that the future is causing the present, but that implicit in any hypothesis of this type is an inversion that is asymmetric. As the forecast is made, it immediately brings us to a new concept of the path of action itself. The relationship of past, present, and future is altered.

This also has implications for language, such as the fundamental role of the subjunctive, and in physics, such as non-relativistic relativity and non-statistical quantum theory. These could be raised for future discussion, but at least never let us be caught in Aristotelean conceptions of the future.

The Refraction Of Light And The Circle

By Larry Hecht

The law for the reflection of a ray of light has been known since ancient times. Imagine a plane mirror, resting on a table-top. A beam of light, directed at the mirror, forms an angle with the mirror’s surface called the “angle of incidence.” About 2,000 years ago, scientists knew that the beam, after striking the mirror, would reflect off in the opposite direction, the reflected ray making the same angle with the mirror’s surface, as the incident ray.

A related phenomenon is the refraction of light: A ray of light, passing from one medium, such as air, to another, such as glass or water, is bent (refracted) as it crosses the interface between the two media. Imagine the smooth surface of water contained in a home aquarium tank. A ray of light strikes the surface, where we measure the angle of incidence. The light ray continues on, below the surface of the water, but its path has changed direction! It is bent, or refracted, such that the angle it makes with the surface of the water, measured downward from that surface — called {the angle of refraction} — is greater than the angle of incidence.

As we increase or decrease the angle of incidence, the angle of refraction also increases or decreases. But in what proportion? The most skilled investigators of the laws of optics from the Hellenic age, to the Islamic Renaissance, on to the early European Renaissance, could not discover the lawful relationship of angle of incidence, to angle of refraction. The answer was found by the Dutch republican scientist Willebrord Snell, a student of the famous Simon Stevin, in 1620. Perhaps the reason no one had found it earlier, is that the proportion is a transcendental one; that is, it expresses the relationship of a circular arc to a straight line, or chord of the circle. In geometry, this relationship is called the “sine.” It is the same proportionality discussed by Nicholas of Cusa, in the {De Docta Ignorantia}, where he demonstrates the incommensurability of straightness and curvature.

Precisely this incommensurable proportion, defines the lawful relationship between the angle of incidence and angle of refraction of a ray of light. Snell’s beautiful discovery, was to show, that no matter how the angle of incidence may vary, the ratio of the sine of this angle, to the sine of the refracted angle (both angles measured from the normal, that is, from the perpendicular to the refracting surface), remains the same. This is Snell’s Law of Refraction, also called the Law of Sines. The beauty and simplicity of it, and its relationship to Cusa’s crucial breakthrough, are unfortunately disguised by the poor teaching of geometry today, in which the trigonometric functions (sine, cosine, and tangent), are usually seen only as linear ratios; that is, as ratios of sides of a right triangle.

To see clearly, what a sine actually is, and also to better understand Snell’s Law, let us look at the description of the law given by Snell’s countryman, Christiaan Huyghens, in the closing chapter of his {Treatise on Light}, written in 1678. [The reader will have to draw this simple diagram. — ed.] In a circle whose center is O, draw a horizontal diameter CD. Let the circle represent the cross-sectional view of the air-water interface, such that the area above the diameter CD is air, and the area below, is water. Now designate a point, A, at about the two o’clock position on the circumference, from which a ray of light originates, and proceeds to the circle’s center, O. Here it encounters the surface of the water, where it is bent downward, so that its direction is toward a point B, at about the seven o’clock position on the circle. Angle AOD is the angle of incidence. Angle BOC is the (larger) angle of refraction. But also notice, that what we call angle AOD, is a measure of circular rotation: the arc AD. And, similarly, angle BOC is the arc BC.

The problem, to repeat, was to find the lawful relationship between these two angles, or arcs. One caution: in the convention of optics, the angles of incidence and refraction are measured not from the surface, but from the normal to it, here the vertical diameter through O; these angles are the complements of angles AOD and BOC. From A, drop a perpendicular to that vertical diameter. Do the same from B. The lengths of these perpendiculars, are the sines of the angles which the incident and refracted rays make with the normal. Snell discovered, that whatever the incident angle, the refracted angle will adjust itself, such that the ratio of their sines, will remain constant. How does the ray of light, know how to do that?
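Snell’s constant ratio can be checked numerically. The sketch below is only an illustration under assumed values: angles are measured from the normal, and the air-to-water refractive index is taken as the standard figure of about 1.33, which is not given in the text:

```python
import math

# Snell's Law: sin(incidence) / sin(refraction) stays constant, with both
# angles measured from the normal. The air-to-water index 1.33 is a
# standard illustrative value, not taken from the text.
N_WATER = 1.33

def refraction_angle(incidence_deg, n=N_WATER):
    """Refraction angle (degrees from the normal) entering the denser medium."""
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))

for inc in (10.0, 30.0, 50.0, 70.0):
    ref = refraction_angle(inc)
    ratio = math.sin(math.radians(inc)) / math.sin(math.radians(ref))
    print(f"incidence {inc:4.1f}  refraction {ref:5.2f}  sine ratio {ratio:.4f}")
```

However the incident angle varies, the printed sine ratio stays fixed at the index of refraction.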

An even greater “willfulness” on the part of the insensible light ray, was discovered during the remainder of the Seventeenth Century. First, Pierre de Fermat showed that the path which the light ray “chooses” from A to B, is the shortest possible in time — that is, takes the least time. This is true, anywhere along the extended line OB, not just where it intersects the circumference of the circle. Next, Jean Bernouilli investigated the refraction of a light ray, in a medium of continuously varying density, such as the atmospheric air, as it rises and thins above the earth’s surface. Being continuously refracted at each interface of the denser, with the less dense, air, the path of the light ray is a curve. Bernouilli discovered, with great excitement and delight, that the curve which the refracted light follows under such conditions, is the cycloid — the same curve which, he had just discovered, was the path of least time, for a falling body under the influence of gravitation.
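Fermat’s least-time principle can be tested by brute force: scan the point where the ray crosses the surface, and check that the time-minimizing crossing reproduces Snell’s sine ratio. All coordinates, speeds, and the index 1.33 below are illustrative assumptions, not from the text:

```python
import math

# Fermat's least-time principle, checked numerically: scan the point
# where the ray crosses the surface (y = 0) and verify that the fastest
# path from A (in air) to B (in water) reproduces Snell's sine ratio.
V_AIR = 1.0
V_WATER = 1.0 / 1.33           # light travels more slowly in water
A = (0.0, 1.0)                 # source, one unit above the surface
B = (2.0, -1.0)                # target, one unit below the surface

def travel_time(x):
    """Time along the broken path A -> (x, 0) -> B."""
    t_air = math.hypot(x - A[0], A[1]) / V_AIR
    t_water = math.hypot(B[0] - x, -B[1]) / V_WATER
    return t_air + t_water

# Crude minimization: try 20,001 candidate crossing points between A and B.
best_x = min((k / 10000.0 for k in range(20001)), key=travel_time)

# Sines of the angles from the normal at the least-time crossing point:
sin_i = (best_x - A[0]) / math.hypot(best_x - A[0], A[1])
sin_r = (B[0] - best_x) / math.hypot(B[0] - best_x, -B[1])
print(round(sin_i / sin_r, 2))  # 1.33: Snell's Law emerges from least time
```

The “choice” the ray makes, in other words, is exactly the one that satisfies the Law of Sines.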

We leave to future discussion, the investigation of this “higher willfulness” of inanimate objects.

How Archimedes Screwed the Oligarchy, Part 1

by Ted Andromidas

I began my investigation of the implications of the use of a minimal surface by Brunelleschi, not merely as a theoretical or experimental investigation of physical principle, but as a “machine tool” breakthrough in constructing the cupola of Santa Maria del Fiore, by investigating the historic scientific foundations upon which this breakthrough depended. I began, therefore, looking at the Classical Hellenic scientific tradition.

First let us re-acquaint ourselves with the physical principle used by Brunelleschi in the Dome’s construction. Why do we call a soap film bound by one or more wire hoops or boundaries a minimal surface? With amazing elegance and simplicity, the soap film solves an historic mathematical problem: namely, the soap film finds the least surface area amongst all imaginable surfaces spanned by the wire. For example, a “trivial” minimal surface which spans the interior of a circular hoop is a flat circular plane.

In a minimal surface, the surface tension stabilizes the whole surface, because the tension is in equilibrium at each point on the soap film. In other words, the tension at each point on the surface is equal to the tension at any other point on the surface. Just as the hanging chain or cable equally distributes the weight across its entire length, so the minimal surface also distributes the tension equally across its entire surface.

To see this for yourself, take a simple wide rubber band and begin stretching it. As you apply greater tension across the rubber band, you will notice that the middle of the rubber band is narrower, thinner, and almost translucent. The tension across the surface, at that point, is greatest. In fact, you know that the band will snap at that point if you continue to pull it apart. The stretched rubber band is not a minimal surface!

What a wonderful paradox! The surface which creates the minimum of all possible areas within any given set of boundaries also creates equal and minimal tension across the surface.
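This equilibrium of tension can be sketched numerically. For a nearly flat film, minimizing area amounts to requiring each interior point to sit at the average of its neighbors (the discrete Laplace equation), and repeated averaging relaxes any starting surface toward the soap film’s shape. The grid size and the “bent wire” boundary heights below are illustrative assumptions, not from the text:

```python
# A numerical sketch of soap-film equilibrium: for a nearly flat film, the
# least-area condition makes each interior point the average of its four
# neighbors (the discrete Laplace equation). Grid size and the "bent wire"
# boundary heights are illustrative assumptions.
N = 21
z = [[0.0] * N for _ in range(N)]   # height of the film over a square frame
for j in range(N):
    z[0][j] = 1.0                   # one edge of the wire frame is raised

# Relax: repeatedly replace each interior height by its neighbor average,
# the discrete analogue of equal tension at every point of the film.
for _ in range(2000):
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            z[i][j] = 0.25 * (z[i - 1][j] + z[i + 1][j] + z[i][j - 1] + z[i][j + 1])

print(round(z[N // 2][N // 2], 3))  # 0.25: the center settles by symmetry
```

Once the averaging has converged, moving any interior point would only increase the area: the discrete analogue of the soap film’s equal tension everywhere.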

As we discovered last time, the current history of science indicates that the first non-trivial examples of minimal surfaces were the catenoid and helicoid found by J.B. Meusnier in 1776. Yet, as LaRouche discovered, Brunelleschi uses a minimal surface as a principle of physics in the construction of the Dome.

At this point I thought: Does this principle of least action, though not “proven” mathematically, go back to the classical Hellenic period? If the Archimedean screw has been described as a kind of helicoid, was, perhaps, the common bolt thread the first minimal surface studied? In reading a small article on the history of the bolt, I learned that the first comprehensive studies and development of the screw or bolt thread are attributed to Archytas of Tarentum, the last and greatest of the Pythagoreans.

I went looking for Archytas.

A close friend and collaborator of Plato, it is as if Plato had Archytas in mind when he says that “…those cities rejoice, whose kings philosophize and whose philosophers reign.” Archytas himself was so loved and respected in his native city that, though there was a one-year “term limit” for anyone to act as chief executive of the city of Tarentum, the citizens suspended these rules and elected him to hold that position for seven consecutive years. We get a sense of his collaboration with Plato in the “Seventh Letter”.

Here, Plato discusses his various attempts, at the behest of his student and friend Dion, to teach the just-anointed ruler of Syracuse, Dionysios the Second, how to become a “philosopher king”. Plato says: “Dion persuaded Dionysios to send for me; he [Dion, ed.] also wrote himself entreating me to come by all manner of means and with the utmost possible speed, before certain other persons coming in contact with Dionysios should turn him aside into some way of life other than the best. What he said…was as follows: ‘What opportunities,’ he said, ‘shall we wait for, greater than those now offered to us by Providence?’” Archytas certainly helped Plato in this endeavor: “…it seems, Archytas came to the court of Dionysios. Before my departure I had brought him [Archytas, ed.] and his Tarentine circle into friendly relations with Dionysios.”

Plato makes clear his regard for Archytas when he says again in the “Seventh Letter”, that when Dionysios invited Plato to Syracuse a second time, he sent the invitation with one of the students of “…Archytas, and of whom he supposed that I had a higher opinion than of any of the Sicilian Greeks, and, with him, other men of repute in Sicily.”

Finally, when it becomes clear to all that not only is Dionysios deaf to Plato’s teaching, but, in fact, the tyrant is determined to kill him, Plato turns to Archytas for help: “I sent to Archytas and my other friends in Taras, telling them the plight I was in. Finding some excuse for an embassy from their city, they sent a thirty-oared galley with Lamiscos, one of themselves, who came and entreated Dionysios about me, saying that I wanted to go, and that he should on no account stand in my way.”

Most of what we know about Archytas and his thoughts comes either from references in the writings of Plato, Eudoxos, Plotinus, Eratosthenes, and others, or from a handful of fragments of his own writings. Nonetheless, Archytas’ contributions seem to have been substantial and essential to classical Hellenic science. In the following fragment Archytas writes of the science of mathematics: “Mathematicians seem to me to have excellent discernment…for inasmuch as they can discern excellently about the physics of the universe, they are also likely to have excellent perspective on the particulars that are. Indeed, they have transmitted to us a keen discernment about the velocities of the stars and their risings and settings, and about geometry, arithmetic, astronomy, and, not least of all, music. These seem to be sister sciences, for they concern themselves with the first two related forms of being [number and magnitude].”

Besides tutoring Eudoxos, some historians contend that Archytas also tutored Plato in mathematics at some point during the ten years that Plato spent in Sicily and Southern Italy.

Besides saving Plato’s life, itself no mean contribution to the future of humanity, Archytas is also known as the founder of scientific mechanics. His numerous other contributions were in the fields of music, astronomy, mathematics, and aerodynamics. He also provided the first solution to the age-old problem of “doubling the cube”, i.e. constructing the side of a cube that is double the volume of a given cube.

As I said, Archytas speaks to us only through fragments, yet his thoughts on human creativity resonate with our own when he says in one fragment: “To become knowledgeable about things one does not know, one must either learn from others or find out for oneself. Now learning derives from someone else and is foreign, whereas finding out is of and by oneself. Finding out without seeking is difficult and rare, but with seeking it is manageable and easy, though someone who does not know how to seek cannot find….”

In astronomy Archytas first put forward the notion of an infinite and boundless universe when in another fragment he says: “…since space is that in which body is or can be, and in the case of eternal things we must treat that which potentially is as being, it follows equally that there must be body and space extending without limit.” [This is not to be confused with the idea of simple extension of three linear extensions in space. Ed.]

As with all leading Pythagoreans, Archytas studied music. From these studies comes his discovery and development of the so-called “harmonic mean”.

Archytas is also credited with having developed a geometrical method for the famous “doubling of the cube” using a cylinder, cone, and torus. Though not attributed to him there, some historians insist that Archytas’ approach to this problem can be found in Book VIII of Euclid’s “Elements”.

Since Archytas avowed that geometry came from the study of physics, this particular solution to the “cube” problem could well have developed out of his work as an inventor and machine-tool designer. As I said, Archytas is sometimes called the founder of mechanics.

As reported last week, General of the Revolution and student of Monge, Jean Baptiste Meusnier not only “discovered” the minimal surfaces of the helicoid and catenoid, but also designed and flew the prototype of the first dirigible.

In an historical parallel which is certainly not accidental, Archytas is credited with designing and flying the prototype model of the first heavier-than-air aircraft.

According to Hero of Alexandria, Archytas designed and built an apparatus wherein a wooden bird was apparently suspended from the end of a pivoted bar, and the whole apparatus revolved by means of a jet of steam or compressed air.

Which takes us to the bolt or screw thread: in principle, the first use of a minimal surface, which Archytas created and Archimedes then developed even further. Over the next week, why don’t you investigate this problem for yourself?

Construct a cylinder and a helix on that cylinder. You can do this either by constructing a paper or cardboard rectangle with a diagonal, and bending the rectangle into a cylinder; or by getting an empty paper towel roll, which has the helical structure built in. Using the helix as a guide and the cylinder as your unthreaded “bolt”, with paper or any other “bendable” material, try to construct the “threads” of the “bolt” around your cylinder.
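The rectangle-and-diagonal construction can also be expressed in coordinates: rolling the rectangle into a cylinder wraps the horizontal run of the diagonal around the circumference, while its vertical rise becomes height, so the diagonal becomes a helix. A sketch, with illustrative dimensions of my own choosing:

```python
import math

# Rolling the rectangle into a cylinder: the horizontal side becomes the
# circumference, the vertical side becomes the height, and the diagonal
# becomes a helix. Dimensions are illustrative assumptions.
RADIUS = 1.0
HEIGHT = 3.0                           # vertical side: rise of one full turn
CIRCUMFERENCE = 2 * math.pi * RADIUS   # horizontal side of the rectangle

def helix_point(t):
    """Point on the helix, t in [0, 1] measured along the diagonal."""
    angle = t * CIRCUMFERENCE / RADIUS  # wrapped arc length -> angle of turn
    return (RADIUS * math.cos(angle), RADIUS * math.sin(angle), t * HEIGHT)

points = [helix_point(k / 100.0) for k in range(101)]
x_end, y_end, z_end = points[-1]
# After one full turn the helix sits directly above its starting point.
print(round(x_end, 6), round(z_end, 6))  # 1.0 3.0
```

The paper model and the coordinates say the same thing: the helix is simply a straight line on the unrolled cylinder.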

I urge you to take some time and try various ways of creating the appropriate shape of the surface that you will “bend” around the cylinder. I actually spent several hours drawing and cutting various shapes out of paper and then trying to fit them around a cylinder. So give it a try. See what you get. Find out for yourself.

Next installment we will look at exactly what kind of surface we need to construct.

How Archimedes Screwed the Oligarchy, Part 2

Once I determined to investigate the implications of LaRouche’s 1987 discovery of the use of “minimal surface” or “least action” physical principles in the design and construction of Filippo Brunelleschi’s Dome of the Cathedral of Florence, I began to look at some of the history of classical Hellenic and Hellenistic science.

Among the first connections to minimal surfaces in Classical Hellenic and Hellenistic science was that between the Archimedean Screw, a water-pumping device developed sometime in the 3rd century B.C. and still widely used today, and the helicoid surface as discovered by French Revolutionary General J.B. Meusnier.

Initial investigations of Archimedes’ invention led to several references comparing the minimal-surface helicoid to his invention. Yet none of these references noted the obvious paradox: the formal discovery of the helicoid is attributed to Meusnier, the student of Monge, 2,000 years later.

This in turn led me back to the 4th century B.C. founder of mechanics and rescuer of Plato, Archytas of Tarentum, as a way of coming back to the Archimedean principle two centuries later. It is important to note that, in principle, the “machine tool physics” as developed by Archimedes rested upon an historical foundation of at least two centuries or more. This in turn, and in steps, I’m convinced, will lead back to the implications of Brunelleschi’s Dome of the Cathedral.

The problem of design faced by Archimedes would have been:

What kind of surface is the thread* of a bolt or screw?

How would I investigate and map such a surface? Put in another way: How would I “blueprint” the necessary specifics of a new machine tool product like the bolt thread?

Let me be clear. Despite what many historians assert, the engineering methods used by these early “machine tool” designers were not based on trial and error.

Let’s look at the “physics” we began investigating last week: the physics out of which the Archimedean Screw must have developed. As I indicated earlier, this device, invented sometime in the 3rd century B.C., is still in use today. It is an ideal, relatively inexpensive means for pumping large volumes of water or other fluid-like material, i.e. sand, fine gravel, ore, etc. Therefore improvements in design and development have continued to the present day.

In the latest study, “Optimal Design Parameters for the Archimedean Screw,” printed in the Journal of Hydraulic Engineering, March 2000 edition, it was determined that, given various critical parameters, the Archimedean Screw as designed by Archimedes and described by the Roman architect in Book VIII of the Architecture is, in fact, if not the optimal design…the best design! Given design parameters like the angle of pitch of the thread surface to the amount of thread rotation, or the width of the thread surface compared to the diameter of the overall structure, the pumping screw as designed by Archimedes is within 7% of the optimal as determined by today’s engineering capabilities.

In other words, the Journal of Hydraulic Engineering concluded, there is no cost-effective way to improve upon the original 2,000-year-old design. Yet that same Journal’s authors assert that the incredible success of this design is a result of mere experience with the technology over centuries. This is quite an arrogant assertion on the part of the Journal, as none of Archimedes’ thoughts on the invention of the screw are extant, owing in part to the Romans’ burning of the library of Alexandria. The only course left to the modern investigator, therefore, is to replicate Archimedes’ thinking, which, in no way, can be considered trial and error.

Two centuries earlier, Archytas was inventing the bolt and screw, whose function can be studied as the intersection of several different, interacting surfaces. Archytas is also credited with providing a solution to the age-old problem of doubling the cube, using the intersection of those surfaces, i.e. the cone, cylinder, and torus.

Archimedes developed a machine tool of such efficient design that, to date, it is the best design for doing its job: moving large volumes of fluids. This design also requires the intersection of several different surfaces. Archimedes was the first to scientifically investigate the volumes of spheres, cylinders, and cones, and their inter-relationships. He studied the relationship of weight to volume, using water, to develop the idea of specific gravity. He was not only a mathematician, he was a master inventor and hydraulic engineer.

With all this said: what is the relationship between the “thread” of the bolt and the “cylinder” of the bolt? What kind of surface is that thread?

We can “develop” a cylinder by “bending” a rectangular plane such that two parallel sides are joined to form the side of the cylinder, while the other two parallel sides form the base and top circles. A cone can be “developed” from a circular plane. Simply cut an arc out of the circle in a “slice of pie” shape. Bend that slice of pie such that the two radii of the circle meet, forming the side of the cone; the point where the two sides of the pie meet, the center of the complete circle, is the apex of the cone; the circular arc forms the circular base of the cone.
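The “slice of pie” development can be computed: a cone of base radius r and slant height s unrolls into a sector of radius s whose arc length equals the base circumference 2πr, so the sector’s central angle is 2πr/s. A sketch, with illustrative values of my own:

```python
import math

# Developing a cone from a flat "slice of pie": a cone with base radius r
# and slant height s unrolls into a circular sector of radius s whose arc
# length equals the base circumference 2*pi*r. The numbers are illustrative.
def sector_angle_deg(base_radius, slant_height):
    """Central angle (degrees) of the sector that bends into the cone."""
    return math.degrees(2 * math.pi * base_radius / slant_height)

# A cone whose slant height is twice its base radius develops from a
# semicircle (a 180-degree sector):
print(sector_angle_deg(1.0, 2.0))  # 180.0
```

This is what “developable” means in practice: the whole surface is recovered from a flat piece by bending alone.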

In both cases there is no “ripping” of the surface to make it fit. You just bend it. As you know, we cannot “develop” the sphere from a plane; it is not a developable surface. If you did some experimentation last week, you might have discovered that the surface of the thread is also not developable.

It is the case, though, that the circular plane surface and the helicoid share common features: 1) They are both minimal surfaces. They define the least area connecting a set of boundaries. The circle, for example, is the maximum area for the minimal circumference. The helicoid is the surface which, in connecting the boundary defined by a helix, also describes the minimal area.

As we pointed out last week, the easiest way to construct a minimal surface is to dip a wire in the shape of the boundary with which you wish to construct the surface, i.e., a circle, two circles, a cube, a pyramid, etc., into a soap solution. The soap film will quite beautifully “describe” the shape of the minimal surface connecting those boundaries. A helical wire with a central axis will “describe” the surface called a “helicoid.” 2) Both the circular plane and the helicoid are “ruled” surfaces. If you rotate a straight line such that one end is fixed at a point and the other end rotates around that point, the straight line becomes the radius of the circle, and it sweeps out a circular plane surface.

Now look at the helix on your cylinder. The cylinder is bound by two circles whose radii are the radii of the helix as well. Now begin to wind one of those radii along the helix, keeping it perpendicular to the side of the cylinder. Think of a winding staircase inside a lighthouse or turret. Think of the edge of each step as the radius of the helix.

This process will describe the helicoid as “discovered” by Meusnier. Now this is fascinating. We’ve discovered the minimal least-action surface of the helicoid as developed in the Archimedean screw in the 3rd century B.C. Now, while trying to convey the idea of constructing a helicoid, we discover that the spiral staircase, an ancient architectural and engineering feature, also describes the helicoid minimal surface. One of the best examples of this is Tycho Brahe’s observatory in Copenhagen, Denmark.

It must be the case that for centuries, if not millennia, architects have been incorporating least action principles of minimal surfaces into their engineering techniques.

More next time.

The Spiral Of The Primes

by Ted Andromidas

When I presented the draft of our last discussion on prime numbers and the notion of indicative quota to one of my closer collaborators, I was filled with a sense of satisfaction, and wonder, at having gotten a glimpse at what I thought was the idea of number, the generation and distribution of the prime numbers, and their connection to the notion of indicative quota. She read it, looked up, and said, not the “Ah-ha” for which I had so patiently waited, but “Yeah, so what?”

I was stunned! I sputtered: “What do you mean ‘SO WHAT?’? Are you confused?”

“No, not really.” she said. “I was confused for a moment when you tried to convince me that Eratosthenes and Earthshines was a clever play on words, until I realized that you screwed up the spell checker; AND, despite the fact that you then tried to convince me that there was some pedagogical significance to putting the footnotes out of order, rather than just sloppy editing, I do see how Eratosthenes’ method works; it’s just: What’s the significance of this to indicative quota? And, as a matter of fact, what’s the significance of this problem of prime numbers at all?”

I walked away baffled and disoriented. I thought it was all so clear.

If someone were to say that an “indicative” quota, i.e., a systematic approach to raising money, is just like any other idea of quota, that it’s just a number reached by adding the money raised in any given week, and that any change in that quota is merely a process of adding or subtracting from that number, could we not characterize that as an “axiom of the system of quota”? Then couldn’t…

“You say to somebody, ‘Here is the axiomatic problem.’ Everybody in mathematics who has a terminal degree–which is what happens to you before they put you in a body-bag–knows the hereditary principle. Even Bertrand Russell knows the hereditary principle–or knew it, wherever he is today. Everybody knows that if you construct a logical system– and mathematics as usually defined is nothing but a logical latticework– everyone knows that if you start out with a system based only on axioms and postulates, and you develop only deductive theorems based on these axioms and postulates, that the entire latticework, which can never be closed, consists of nothing but echoes of the axiomatic assumptions with which you started. Therefore, if one of the axioms is false, the entirety of that field of knowledge collapses.

“An example: If you say that the only thing that exists in arithmetic is the integers, as counting numbers–that everything else is synthetic–therefore, so the argument goes, all mathematics must be derived from the counting numbers as the axiomatic foundation. So you start with an axiomatic counting system, 1 + 1, you construct that, and from that elaborated basis you must develop all mathematics. This is essentially what Russell and Whitehead demanded: radical nominalism. Therefore, as the case of prime numbers implicitly proves–the Euler-Riemann theorem, the work of Gauss on prime number sequences, the ingenious foresight of Fermat on this question, the work of Pascal on the question of differential number series–the entire history of mathematics, centering around this fantastic little problem of prime numbers…”  —Lyndon LaRouche, Schiller Institute Conference, 1984

If we look at the process of counting as iteration, as a function of a one-dimensional manifold, we are confronted with “…this fantastic little problem of prime numbers…”: we cannot determine the distribution or “density” of prime numbers between 1 and any given number N by any means other than that of Eratosthenes. We cannot determine what the next prime number, or, for that matter, any future prime number in a counting series, will be before it is actually generated by counting.

Begin counting: 1, (2), (3), 4, (5), 6, (7)… (all prime numbers are bracketed in parentheses). At first it seems that the prime numbers are simply all the odd numbers; that is, the function f(p) = 1 + 2x seems to generate the primes. But as we continue to count, (2), (3), 4, (5), 6, (7), 8, 9, 10, (11), 12, (13), 14, 15, 16, (17), 18, (19), 20, 21, 22, (23), 24…, the “pattern” or function seems to change. For a while it seems to be f(p) = 6x ± 1, till we reach (23); then it changes again. We seem unable to discover a successor function for, not just all the prime numbers, but any particular continuous series of prime numbers.
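The two trial functions just mentioned can be checked directly. Here is a short Python sketch (an illustration added to this discussion, with helper names of my own): every prime above 3 does have the form 6x ± 1, since any other residue mod 6 is divisible by 2 or 3, but the converse first fails at 25, just past (23).

```python
# Sketch (illustrative): primes above 3 all have the form 6x - 1 or 6x + 1,
# but not every number of that form is prime.

def is_prime(n):
    """Trial division; adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [n for n in range(5, 100) if is_prime(n)]
print(all(p % 6 in (1, 5) for p in primes))   # True: every prime here is 6x +/- 1

# The first numbers of the form 6x +/- 1 that are NOT prime:
exceptions = [n for n in range(5, 50) if n % 6 in (1, 5) and not is_prime(n)]
print(exceptions)                              # [25, 35, 49]
```

So the 6x ± 1 “pattern” is a necessary condition, never a sufficient one, which is exactly why it breaks down as a generator of primes.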

There have been innumerable functions, theorems, hypotheses, corollaries, and conjectures written on this problem: the Prime Number Theorem, the Riemann Hypothesis, the Twin Prime Conjecture, the Goldbach Conjecture, and the Opperman Conjecture, just to name a few.

Let us look at a conjecture referenced several times by LaRouche, that of Pierre de Fermat. Fermat conjectured that every number of the form 2^(2^n) + 1 is prime. We call these the Fermat numbers, and when a number of this form is prime, we call it a “Fermat prime”; the only known Fermat primes are the first five Fermat numbers: F0=3, F1=5, F2=17, F3=257, and F4=65537.

In 1732, Euler discovered that 641 divides F5. The search has since been extended through n = 31 (i.e., 2^(2^31) + 1), and no further primes have been generated by this function. It is, therefore, likely, yet not proven, that there are only a finite number of Fermat primes.
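To make the definition concrete, here is a brief Python sketch (added for illustration; the helper names are my own) computing the first Fermat numbers and verifying Euler’s factor of F5:

```python
# Sketch: Fermat numbers F_n = 2^(2^n) + 1, and Euler's 1732 discovery
# that 641 divides F_5.

def fermat(n):
    """The n-th Fermat number."""
    return 2 ** (2 ** n) + 1

def is_prime(n):
    """Trial division; fine for the small Fermat numbers checked here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print([fermat(n) for n in range(5)])               # [3, 5, 17, 257, 65537]
print(all(is_prime(fermat(n)) for n in range(5)))  # True: the five known Fermat primes
print(fermat(5) % 641)                             # 0 -- 641 divides F_5
```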

You might remember that the Fermat primes were the subject of a recent “Riemann for Anti-Dummies”. Gauss proved that a regular polygon of n sides can be inscribed in a circle with Euclidean methods (e.g., by compass and straightedge) if and only if n is a power of two times a product of distinct Fermat primes. (Hopefully we will look at this problem from the vantage point of Riemann’s and Dirichlet’s correction of Euler, if I can figure it out by then.)

Anyway, for now, let’s just continue to investigate the phenomenon of counting and the primes.

The Spiral of Prime Numbers

We saw last week that, using the method first developed by Eratosthenes in the 3rd Century B.C., with the circle as a two-dimensional manifold, we could construct a cyclical, or modulo, approach to determining the distribution of the prime numbers in the one-dimensional manifold of the number line. The limitations of that approach are obvious. Moreover, it actually tells us more about the process of generating the non-prime, composite numbers of the number field as, implicitly, an ongoing succession of prime number cycles, than it reveals about the generation of the prime numbers themselves.

As a prime number is generated it is implicitly the modulus of an ongoing cycle, which intersects all past and future cycles of previous prime numbers, transforming the entire number field past and future. Yet, we seem unable to account for the generation of that singular event in the number field, the generation of a prime number, till, in fact, it occurs.

Perhaps it is the way we count; let us “count” differently. Rather than imagining the number line as a straight, one-dimensional manifold, i.e., 1, 2, 3, 4, 5, 6, 7…, etc., is it possible to count in two dimensions? As we’ve seen from the dialogues of Philosph and Cando, numbers in a two-dimensional manifold are not necessarily what they are in one; but rather than looking at the characteristic differences between two-dimensional and one-dimensional measure, let us take a simpler construction, and see what happens. Let us generate the number field in two dimensions by using a simple kind of “Archimedean spiral”.

In the center of a piece of note paper, write the number 0. To the right of that, write the number 1; above 1, write 2. Now count to the left: 3, 4. Below 4, write 5 (at the same level as 0); below 5, write 6. To the right of 6, write 7, 8, 9; above 9, at the same level as 1, write 10, and go up: 11, 12, 13; now count left of 13: 14, 15, etc. As you count this way, you will generate a spiral of numbers.

Now, beginning with zero, start counting the numbers spiralling out from there (see Figure 1). Try it; it is really not difficult.

(figure 1)

 4-(3)-2
 |     |
(5)  0-1
 |
 6-(7)-8-9
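For readers who prefer to let a machine do the counting, here is a Python sketch (my own construction, following the description above) that lays out the spiral, run lengths 1, 1, 2, 2, 3, 3, …, and brackets the primes when printing:

```python
# Sketch of the counting spiral: 0 at the center, then right, up, left,
# down, with each run length used twice. Primes are printed in parentheses.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def spiral(limit):
    """Map each integer 0..limit to its (x, y) position on the spiral."""
    pos, x, y, n, step, d = {}, 0, 0, 0, 1, 0
    moves = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # right, up, left, down
    while n <= limit:
        for _ in range(2):                        # each run length is used twice
            dx, dy = moves[d % 4]
            for _ in range(step):
                if n > limit:
                    break
                pos[n] = (x, y)
                n, x, y = n + 1, x + dx, y + dy
            d += 1
        step += 1
    return pos

pos = spiral(24)
cell = {v: k for k, v in pos.items()}
xs = [x for x, _ in pos.values()]
ys = [y for _, y in pos.values()]
for row in range(max(ys), min(ys) - 1, -1):       # top row first
    line = ""
    for col in range(min(xs), max(xs) + 1):
        n = cell.get((col, row))
        line += "     " if n is None else (f"({n})" if is_prime(n) else f" {n} ").rjust(5)
    print(line)
```

Running this reproduces Figure 1 in its upper-left corner, with 5 at the level of 0 and 10 at the level of 1, as in the paper-and-pencil construction.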


What do you notice almost immediately? A certain number of prime numbers are generated along various diagonals of the number field. (See Footnote 1.)

Now, if we begin counting with 5 as our first number in the center of our spiral, we notice that between 5 and its square, 25, all the numbers that lie on a diagonal connecting 5 and 25 are prime numbers. They are not all the prime numbers between 5 and 25, but they do define a successor function of prime numbers between 5 and 25.

Begin counting at 11, and we generate a diagonal of prime numbers along the axis from the prime number 101, through 11, to 121, the square of 11. If we start counting with our Archimedean spiral at 41, we discover the same generating characteristic: a line of prime numbers stretching along the diagonal from 41 to 1681. In fact, if we count in this manner from 41 to 10,000,000, half of the numbers on that diagonal will be prime.
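The diagonal through 41 carries the values of Euler’s famous prime-generating polynomial x² + x + 41, prime for every x from 0 through 39 and failing first at x = 40, where it gives 41² = 1681, the diagonal’s far end. A sketch (added for illustration) verifies this:

```python
# Sketch: the 41-to-1681 diagonal carries the values of Euler's polynomial
# x^2 + x + 41, prime for x = 0..39; at x = 40 it gives 1681 = 41 * 41.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

values = [x * x + x + 41 for x in range(41)]
print(values[:5])                                # [41, 43, 47, 53, 61]
print(all(is_prime(v) for v in values[:40]))     # True: forty primes in a row
print(values[40], is_prime(values[40]))          # 1681 False
```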

When we count in the one-dimensional manifold, we can, through the sieve of Eratosthenes, determine that the cyclical, modulo characteristic of the counting numbers, the integers, is ordered through the two dimensions of circular action. Yet there seems to be no “connectedness” at all to the ordering characteristics of the prime numbers; no “pattern” seems to emerge.

When we actually begin to count in a two-dimensional manifold, a “connectedness” emerges almost immediately: numerical shadows on the wall of Plato’s cave. Why? Is there some ordering principle from a higher, perhaps three-dimensional manifold, ordering the two dimensions of our spiral counting? It doesn’t provide us with an actual function for determining the distribution of the prime numbers, nor does it help us develop a successor function, but it does impel us on to a notion of a succession of ordering principles, as we will begin to see in our next discussion.

And finally, to my collaborator’s insistent “So what? What’s the significance of the prime numbers, anyway?”, Karl Friedrich Gauss would reply:

“The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length. Nevertheless we must confess that all methods that have been proposed thus far are either restricted to very special cases or are so laborious and prolix that even for numbers that do not exceed the limits of tables constructed by estimable men, i.e. for numbers that do not yield to artificial methods, they try the patience of even the practiced calculator… The dignity of the science itself seems to require that every possible means be explored for the solution of a problem so elegant and so celebrated.”

— Karl Friedrich Gauss, Disquisitiones Arithmeticae (translation: A. A. Clarke)

Footnote 1.

Here is a list of the first prime numbers to aid you in your investigations.   2 3 5 7 11 13 17 19  23 29 31 37 41 43 47 53 59  61 67 71 73 79 83 89 97 101  103 107 109 113 127 131 137 139 149  151 157 163 167 173 179 181 191 193  197 199 211 223 227 229 233 239 241  251 257 263 269 271 277 281 283 293  307 311 313 317 331 337 347 349 353  359 367 373 379 383 389 397 401 409  419 421 431 433 439 443 449 457 461  463 467 479 487 491 499 503 509 521  523 541 547 557 563 569 571 577 587  593 599 601 607 613 617 619 631 641  643 647 653 659 661 673 677 683 691  701 709 719 727 733 739 743 751 757  761 769 773 787 797 809 811 821 823  827 829 839 853 857 859 863 877 881  883 887 907 911 919 929 937 941 947  953 967 971 977 983 991 997 1009 1013 1019 1021 1031 1033 1039 1049 1051 1061 1063 1069 1087 1091 1093 1097 1103 1109 1117 1123 1129 1151 1153 1163 1171 1181 1187 1193 1201 1213 1217 1223 1229 1231 1237 1249 1259 1277 1279 1283 1289 1291 1297 1301 1303 1307 1319 1321 1327 1361 1367 1373 1381 1399 1409 1423 1427 1429 1433 1439 1447 1451 1453 1459 1471 1481 1483 1487 1489 1493 1499 1511 1523 1531 1543 1549 1553 1559 1567 1571 1579 1583 1597 1601 1607 1609 1613 1619 1621 1627 1637 1657 1663 1667 1669 1693

The Well-Tempered System: Kepler vs Ptolemy

by Fred Haight

Some of this material was presented in a recent cadre school, and some in a previous pedagogical: at this time I wish to emphasize a particular point. There are still many gaps to be filled in, and questions to be asked, about the history identified here, but I am convinced that I am on the right track, and that Kepler is identifying the right problem.

Lyndon LaRouche has always stressed the importance of Kepler for our music work, but in the past, two problems arose:

1. Professional musicians resented such “outside intrusions” into “their turf”.

2. For a while, only Book Five was translated. You cannot “look at the back of the book” and expect to find the answer. You have to read the entire work, and pay special attention to the relation between Book Three, where Kepler lays out his own revolutionary musical ideas (fn. 1), and Book Five; a relation which Kepler himself cites in the Introduction to Book Five:

“I found it truer than I had even hoped, and I discovered among the celestial movements the full nature of harmony, in due measure, together with all its parts unfolded in Book Three – not in that mode wherein I had conceived it in my mind (this is not last in my joy) but in a very different mode which is also very excellent and very perfect.”

I am putting forth the contention that Kepler, without having composed a single measure of music, may be the greatest musical revolutionary, and that Bach’s breakthroughs would not have been politically possible without Kepler.

No great discovery has ever been made without attacking lies, falsehood, and stupidity. Kepler’s discoveries, from the Mysterium Cosmographicum on, were all inseparable from his attacks on the method of Ptolemy, Aristotle, Tycho Brahe, Copernicus (fn. 2), et al. A recent 21st Century article reprints Kepler’s argument that Aristotle lied, and reinstated the idea of an earth-centered solar system, when he knew that the Pythagoreans had known its true heliocentric nature much earlier. This, and the revival of the “flat earth” theory after Eratosthenes’ discoveries, set science back for centuries.

Humanity lost 17 CENTURIES between the Rata-Maui expeditions and Columbus, Magellan, et al. SEVENTEEN CENTURIES, because of politically imposed, in fact, REINSTATED, false axiomatic assumptions! As late as 1616, the Counter-Reformation once again condemned the heliocentric system. Even today, fundamentalist “Christians” will sometimes use the Bible to “prove” that the sun rotates around the earth. (fn. 3) Mankind must be freed of such arbitrary, but popular, opinions before progress may take place.

Think about the following quote from “Economics: At The End of a Delusion”:

“Kepler was the founder of the first successful effort to establish a comprehensive form of mathematical physics, the first to establish a method which freed science from the ivory tower mathematician’s blackboard, and to civilize mathematics by bringing it into the real world, the world of universal physical principles, rather than the purely imaginary world of abstract ivory-tower mathematical speculations.”

Kepler did the same for music, which had been held back for centuries by a similar Ptolemaic system. Out of the thousands of years of mankind’s existence, the period of great Classical masterworks, from Bach to Brahms, lasted just under two hundred years! In the Twentieth Century, under the evil influence of the Frankfurt School, humanity allowed its greatest gifts to be stolen, again.


In order to examine the problem that held back musical progress, we must examine a few things that Kepler understood, but which are not present in his Harmonice Mundi. We shall do this through the posing of a paradox. What I shall present here are sometimes known as the discoveries of Pythagoras and his school, but I suspect that they may have suffered the same sort of rewriting as Aristotle did to astronomy.

Boethius tells the story that Pythagoras was walking by a blacksmith shop one day, and “noticed” that the different sizes of hammers hitting anvils produced different tones. This sounds too much like Newton “noticing” getting hit on the head to me; and besides, I think it would be the size of the anvils that made the difference.

Anyhow, Pythagoras was said to have investigated this, and, supposedly, moved quickly to investigating string lengths on an instrument called the monochord. This is a box with two strings of the same length, tuned to the same tone. You produce different tones by dividing the second string into different lengths, and comparing the tones produced to the sound of the open first string.

You can approximate the experiment yourself using a Cello, and substituting your finger for the bridge that was used to divide the monochord. We shall use modern terms like “fifth”, rather than “diapente”, etc. Tune the second string of the Cello down to C, so that it is in unison with the first string.

First, divide the second string in half. If you place your finger so as to divide the sounding portion (from the scroll to the bridge) of the second string in half, and compare it to the open first string, the interval should be an octave. You have blocked off the upper half of the second string with your finger, so that only the lower half of the string is sounding. So, the string length is half of the string, but the sound is twice as high (C at 64 becomes C at 128), so the ratio of the interval is 2/1. The string lengths and frequencies are in inverted ratios.

Next, try a string length of 2/3 (blocking off one third with your finger, and letting two thirds sound). This approximates the fifth. The ratio of the interval is 3/2, so multiply 64 times 1.5 to get G at 96. Then, continue by dividing the second string into three parts, four parts, five parts and 6 parts. The sounding portions will be 3/4, 4/5, and 5/6 of the string, and ratios for the intervals will be, 4/3, 5/4 and 6/5. (Don’t skip the experiment – you will undermine the discovery).

These were said to correspond to:

String length:      1/2     2/3    3/4     4/5          5/6
Intervallic ratio:  2/1     3/2    4/3     5/4          6/5
Interval:           octave  fifth  fourth  major third  minor third
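The arithmetic behind the table can be checked in a few lines of Python (a sketch added here for illustration; the base frequency C = 64 is the text’s value, and the frequency ratio is simply the inverse of the string-length ratio):

```python
# Sketch of the monochord arithmetic: a sounding length of p/q of the
# string raises the frequency by the inverse ratio q/p.

from fractions import Fraction

base = Fraction(64)   # C at 64, as in the text
intervals = [
    ("octave",      Fraction(1, 2)),
    ("fifth",       Fraction(2, 3)),
    ("fourth",      Fraction(3, 4)),
    ("major third", Fraction(4, 5)),
    ("minor third", Fraction(5, 6)),
]
for name, length in intervals:
    ratio = 1 / length                      # frequency ratio inverts the length
    print(f"{name:12s} length {length}  ratio {ratio}  frequency {base * ratio}")
```

So the octave (length 1/2) gives C at 128, and the fifth (length 2/3) gives G at 96, just as in the cello experiment.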

Now, isn’t this beautiful? Here you have an interesting ordering of number (as in the sequence of numerators and denominators), a series of arithmetic and harmonic means (the fifth and fourth as those two means of the octave, and the major third and minor third as the same means of the fifth), inversion of a sort, and musical intervals derived from a physical process.

So, what is wrong with this? Think from the standpoint of method, and take a few minutes from your busy schedule before proceeding.

Did you do what I asked, or are you like those clients, who, when challenged to think, say “I’m sure you’re going to tell me, so let’s get to the bottom line”? Go back!

Three things, all interrelated, stand out. Perhaps you will find more:

1. If there are three types of successively higher-order physical processes, non-living, living, and cognitive; then this determination is from the lowest level, non-living, which might reflect higher order processes, projected downwards, but as through a glass, darkly.

2. If you try to determine planetary orbits individually, they won’t fit together as a solar system. Kepler, in the Mysterium Cosmographicum, starts by seeking the highest, top-down, ordering principle for the entire solar system. We shall see how this problem arises in these musical intervals shortly, in the paradox of the comma.

3. This is not as obvious, but the axiomatic PREJUDICE, that intervals could only be represented by rational numbers (fractions), set music back for centuries; much as the prejudice, that planetary orbits could only be perfect circles, did Astronomy. Organizers can do the same thing. “This way is the best, because we have been doing it this way for centuries.”

Kepler recognizes how long this prejudice held court. From the introduction to Book Three of Harmonice Mundi:

“Having discovered definite proportions,” or “the fact that,” it remained to track down the causes as well or “the reason why” some proportions marked out consonant intervals, and others dissonant.

And in the course of two thousand years the opinion had been reached that the causes are to be looked for in the proportions themselves, as they are contained within the bounds of a discrete quantity, that is to say, of Numbers.(fn4)

Question: Is the monochord experiment itself, a tautology?

Boethius, in his fifth century “De institutione musica”, states that intervals can only be represented by rational numbers, as they are the best. How could an irrational number, which is not precise, represent something as specific as an interval, he asks? Boethius was considered THE AUTHORITY for a thousand years.

Even in the debates over tempering, it was often insisted that the “pure fifth” 3/2 was the best, the closest to perfection, and should be used whenever possible, or come as close to as possible.

Kepler, on the other hand, goes for the throat on this point. He acknowledges the use of incommensurables, preferring the Greek term, {alogoi}, which he translates as “inexpressible”, to the Latin term irrational (which CAN mean without reason, as well as without ratio). In Book One he elaborates their “degrees of knowability”. Everything beyond the third degree of knowledge is an “inexpressible”. What a beautiful concept: the incommensurable is ordered, in a knowable way! From Book One:

“People are always molesting inexpressibles, by trying to express them – as numbers!”

Let’s look at the problems that arise from this fixation on rational numbers:

1. The Lydian interval, even on a monochord, is represented by the square root of two, so it would have to be banned (it was banned on its own merits, as the Devil’s interval).

2. Since half-tones and tones cannot be consistently represented as fractions, the system begins to break down at their determination. Different sizes of them were invented: 9/8 was a major whole-tone, 10/9 a minor whole-tone; half-tones were at 16/15 (wide semitone), 18/17, 25/24 (diesis), even 256/243 (narrow semitone)! Since they were trying to add these intervals up to the pure ratios of the above-mentioned consonant intervals, they had to invent certain critters to fill the gaps, such as the same diesis, the limma (135/128), etc. Does not all this remind you of the way quantum mechanics sometimes makes up particles, to force experimental evidence to conform to a faulty theory? Doesn’t it remind you of the “made-up” epicycles in the planetary orbits? Imagine trying to teach singers to sing all these!

3. Supposedly, Pythagoras himself developed the paradox of the “comma”. If he did, one would have to admire a man who challenged his own system, but I’m not sure that was the way it worked.

I will demonstrate the “comma” from our modern terminology of tones, intervals and frequencies.

Take C at 256. Go down three octaves to C at 32. This is the lowest C on the piano. Play, on the piano, a series of 12 fifths (with one “enharmonic”): C G D A E B F# (or Gb) Db Ab Eb Bb F C. This takes you up 7 octaves to C at 4096, the highest C on the piano. However, if you take C at 32, and multiply by 3/2, or 1.5, the ratio of the “pure fifth,” you will get 48 for G. Keep doing this twelve times, and instead of an octave of C, 4096, you will get roughly 4152. The fifths are a little too large! If you have a diagram of the circle of fifths, imagine if it did not meet at the top, at C, but there was a slight gap. This gap was dubbed the comma, and it had a precise measurement.
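The overshoot can be computed exactly (a sketch added here for illustration): twelve pure 3/2 fifths exceed seven 2/1 octaves by the ratio 531441/524288, about 1.0136, the Pythagorean comma.

```python
# Sketch: twelve pure 3/2 fifths from C at 32 overshoot the seven octaves
# to C at 4096.

from fractions import Fraction

c_low = 32
twelve_fifths = c_low * Fraction(3, 2) ** 12
seven_octaves = c_low * 2 ** 7
print(float(twelve_fifths))            # 4151.8828125 -- a little sharp of 4096
print(seven_octaves)                   # 4096
print(twelve_fifths / seven_octaves)   # 531441/524288, the Pythagorean comma
```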

There is an abbreviated version of this. Three major thirds should comprise an octave: C E G# B#(C). An octave of C at 256 should be 512.

Multiply 256 by the ratio of the major third, 5/4, or 1.25, three times, and you will get 256, 320, 400, 500. So the major thirds are too small! (If you wish to argue that the Greeks did not know frequencies, you can multiply the ratios and find the same problem. If you accept the octave, or diapason, as having a ratio of 2/1, then multiply 1 by 1.25 three times, and you will obtain 1, 1.25, 1.5625, 1.953125. Again, it falls short of an octave.)
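The same shortfall, as an exact sketch (added for illustration): three pure 5/4 thirds reach 500, not the octave at 512.

```python
# Sketch: three pure 5/4 major thirds fall short of the 2/1 octave.

from fractions import Fraction

third = Fraction(5, 4)
stack = [256 * third ** k for k in range(4)]
print([float(v) for v in stack])   # [256.0, 320.0, 400.0, 500.0]
print(float(512 - stack[-1]))      # 12.0 -- the shortfall below the octave at 512
```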

So, think back to point number two, the problem that arises when you try to determine the system as a whole, rather than one interval, or orbit, at a time. Kepler has great fun pointing out, that even if one rejects incommensurables for rational numbers, as the only representations of intervals, these rational numbers are themselves incommensurable – with one another!

Kepler, in Book Three, is polemical about the need to destroy this prejudice. Not only are rational numbers not the best; they are not a cause at all.

From the introduction to Book Three:

“… the causes of intervals have remained unknown to men…. I shall be the first, unless I am mistaken, to reveal them with such accuracy.”

Also from Book Three, next to a margin entitled “His error (Ptolemy’s) in treating a non-cause as a cause”:

“… since the terms of the consonant intervals are continuous quantities, the causes which set them apart from the discords must also be sought among the family of continuous quantities, and not abstract numbers, that is in discrete quantity; and since it is the Mind which shaped human intellects in such a way that they would delight in such an interval….the causes of such intervals being harmonious, should also have a mental and intellectual essence….

“if the cause was sought in abstract numbers. Yet it would still not be very clear why the numbers 1,2,3,4,5,6,etc conform with musical intervals but 7,11,13, and the like do not conform.”

In Chapter One of Book Three, Kepler states that he is using a geometrical method (the inscription of plane figures in a circular string), as a “substitute for the Pythagorean abstract numbers, which have been repudiated.”

Kepler was more opposed to the numbers being seen as a cause in themselves, than the division of strings; his division, however, is a very different, geometrical one. He inscribes the plane figures in a circular string, and orders the intervals according to the same degrees of knowability that he laid out in Book One. This is still not his highest determination.

Throughout Harmonice Mundi, he consciously UPLIFTS the cause of intervals to a cognitive one – from Chapter Sixteen of Book Three:

“The theme of that book (Five) is the sole object which I intend in this whole work. For, being an astronomer, just as I argue about the regular figures not so much geometrically….as astronomically and metaphysically, so also I write about the ratios of melodies not so much musically as geometrically, physically, and lastly, as before, astronomically and metaphysically.”


1. In the title page to the entire work, Kepler’s description of Book Three includes:

“…and on the nature and distinguishing features of matters relating to music, contrary to the ancients;”

I.e., he is refuting the ancients. The translators, Duncan and Field, insist that the only “real discovery” in the entire work, is the so-called, Third Law.

But, Kepler challenged future musicians to act on his discoveries. He sought his Bach, as well as his Leibniz. The introduction to Book Five reveals his sense of what a revolution he was unleashing:

“I am free to taunt the mortals with the frank confession that I am stealing the golden vessels of the Egyptians, in order to build of them a temple for my God, far from the territory of Egypt. If you pardon me, I shall rejoice; if you are enraged, I shall bear up. The die is cast, and I am writing this book- whether to be read by my contemporaries or not. Let it await its reader for a hundred years, if God himself has been ready for His contemplator for six thousand years.”

2. Bruce Director, in a conference presentation, quoted Ptolemy on how, of the Theological, Physical, and Mathematical causes of something, mankind could only know the Mathematical “cause”. Years ago, Bruce had a pedagogical, quoting Copernicus on how it didn’t really matter, if your mathematical model corresponded to physical reality, only if it described it. If that quote is reliable, then that, plus his continued insistence on perfect circles, would tend to put Copernicus in the Ptolemaic camp, despite his acknowledgement of the Heliocentric nature of the Solar system.

3. Harmonice Mundi is completed in 1619, at the beginning of the German part of the Thirty Years War, and the same year as Kepler’s works were put on the Index of Prohibited Books.

4. Supposedly there was a difference between Ptolemy and what was represented as the Pythagorean view, on whether it even mattered how the intervals sounded, or whether the ratios alone determined consonance or dissonance. Kepler doesn’t think there is much difference. The two Venetians, Galilei and Zarlino, took sides in this matter.

5. The translator of Boethius into English says that his work is basically just a translation of Ptolemy. I have to check this out. Ask yourself: what is the axiomatic prejudice built into the monochord experiment?


by Fred Haight

Is the question of tempering, then, just a question of finding the right ratios and correcting the errors, or is it something far more important?

Equal tempering arose, supposedly, as early as Aristoxenus, a pupil of Aristotle, as a mechanistic procedure of simply dividing the octave into a “chromatic scale” of twelve equal tones, based only on what sounded good, in disregard of any physical cause (I’m not sure that I am giving him fair due; that’s something I have to look into more closely). A kind of gang-countergang debate sprang up between those who said that this was best, because the ear was the ultimate guide, and those who said that you cannot abandon the physical cause of the intervals for what seems merely sensuously pleasing. This allowed Boethius to make a phony distinction between practicing musicians (whom he considered vulgar), and the superior, theoretical musicians, who only contemplated the beauties of the ratios!

Centuries later, the Venetian Vincenzo Galilei, in a phony debate with his deceased Venetian predecessor, Zarlino, proposed to divide the octave into twelve equal tones by the ratio 18/17. Kepler, who otherwise speaks positively of tempered intervals, rejects Galilei’s determination as “mechanistic” (fn. 6) (he also points out that it doesn’t work; it generates a comma).

The well-tempered system is not a matter of finding the right ratios, but of CHANGING YOUR THINKING, and starting from the TOP down, in terms of the actual processes governing the universe, as do LaRouche and Kepler. Look back at the previous quotes from Kepler, on lifting the investigation of melodies to an astronomical and metaphysical level, and on the cognitive nature of the causes of the intervals, as communicated from the Mind to our minds.

An academic reader would have a hard time identifying Kepler as founder of the well-tempered system; after all, he keeps using these so-called “pure” ratios to represent the intervals, even in the Fifth Book.

But, in the Fifth Book, he does something different. The ratio between the aphelial and perihelial angular velocities within a single orbit, he refers to as being like ancient plainchant, a single, primitive melody.

The ratios between planets, though, he refers to in terms of polyphony, which blossomed in the Renaissance with the development of bel-canto. In Chapter Five, he sets up two scales, one from the set of convergent ratios, i.e., from the aphelion of the lower planet (farthest out from the Sun) to the perihelion of the upper (e.g., Saturn to Jupiter; Jupiter to Mars; etc.). The other is the set of divergent ratios, where he inverts the process, by starting with the perihelion of the lower planet and the aphelion of the upper. These two scales, which he calls hard and soft, are not exactly our major and minor: the hard scale differs from the major by one tone, but that is enough to make the two scales inversions of one another!

In Chapter Seven of Book Five, Kepler examines the possibility of several planets being in tune, at these extreme ratios, at the same time, which he compares to four-part harmony. Here, he says that a “certain latitude of tuning” is not only acceptable, but necessary. In his charts in this chapter, he identifies the highest and lowest possible tunings for each of these measurements. (This latitude of tuning is not an arbitrary variance, as in equal tempering, but comes from different means of measuring these physical ratios of perihelial and aphelial angular velocities.)

After the chart on the possibility of five planets being in tune, he states: “Here at the lowest tuning, Saturn and the Earth coincide at their aphelia; at the mean tuning, Saturn joins in at its perihelion, Jupiter at its aphelion; at the highest tuning, Jupiter joins in at its perihelion.”

Even a single pair of planets being located strictly at the “pure ratio” can exclude other planets from being “in tune” at all.

The same problem arises in polyphony. So-called “just intonation” was an attempt to construct a scale with as many “pure” fifths and thirds as possible. This doesn’t even work within a single scale (how many fifths are there in any “diatonic” scale? How many major and minor thirds? Can they all correspond to the “pure” ratios?). In both cases, bel-canto polyphony and the solar system, tempering arises not from some pragmatic evening-out of the scale, but from new discoveries of physical principle in the universe; and, in both cases, tuning is determined, not “at the blackboard,” but by the composition itself, whether it be by a human artist, or the Divine Architect himself! (See footnote 10.)
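The parenthetical questions above can be answered with a few lines of arithmetic. The particular just-intonation ratios below are the standard textbook choice, my illustrative assumption rather than anything given in the text; even with them, the fifth D–A inside a single C-major scale is already impure:

```python
from fractions import Fraction

# A "just intonation" C-major scale built from the textbook "pure"
# ratios (my illustrative assumption, not from the text):
just = {'C': Fraction(1), 'D': Fraction(9, 8), 'E': Fraction(5, 4),
        'F': Fraction(4, 3), 'G': Fraction(3, 2), 'A': Fraction(5, 3),
        'B': Fraction(15, 8)}

# A "pure" fifth is 3/2, yet the fifth D-A inside this single scale:
d_to_a = just['A'] / just['D']
print(d_to_a)                   # 40/27, not 3/2
print(Fraction(3, 2) / d_to_a)  # 81/80, the syntonic comma
```

No reassignment of ratios removes the problem; it only moves the impure fifth to a different pair of tones.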

Lyndon LaRouche has always insisted that music originates in the polyphonic, bel-canto vocalization of sung poetry (which itself “contains a score” in its prosodic elements, such as the ordering of vowels, meter, etc.). Is bel-canto vocal registration living, or cognitive; or perhaps living, participating in the higher level? The discovery of solutions to paradoxical problems (to ironic, polyphonic “dissonances”) through inversion, etc., is cognitive, and parallels the discovery of new physical principles as solutions to paradoxes in physical science; but such cognitive ironies are usually expressed as ironies in the living harmonics of VOCAL REGISTRATION, and can only exist in the “physics” of actual polyphonic musical composition, not the “classroom mathematics” of formal systems of scales, keys, etc. In other words, cognitive musical ideas do not exist as disembodied notes, as Heinrich Schenker, or a counterpoint text, would imagine; but perhaps we could follow Vernadsky and LaRouche, in saying that cognitive discoveries in music create ironies in voice registration, as “natural products,” in the “living processes” of bel-canto, much as the biosphere creates “natural products” such as soil, water cycles, etc.; and as cognitive processes generate increased relative potential population density, as a natural product, in the biosphere.

Wait a minute! Does not all this sound like what has been presented in Volume One of the Music Manual (or projected for Volume Two)? But step back a bit. Did not Lyn, in the Music Manual, revolutionize musical theory by finding the origins of music in the highest cognitive, and living, levels of physical processes? Compare this to oligarchical theories of music originating in “bird songs,” “the dance,” “hammers hitting anvils,” etc. They all wish to eliminate the idea that human cognitive activity originates in anything human! Now, you can begin to appreciate what Kepler actually did. (7)


As discussed in a previous pedagogical, but worth repeating: human voice registration produces an entirely different set of harmonics than a mere vibrating string, characterized by a series of Lydian intervals, and a chromatic scale of register shifts (when the down-shifts are considered as well as the up-shifts). (8)

Now look at the Lydian intervals in the six voice species:


Soprano-Tenor C F# B F

Mezzo-Baritone Bb E A Eb

Bass Ab D G C#

Here we have all six Lydian intervals organized in a series of descending half-steps (the next one would be F#-C), in a form where each of the two tones comprises the main register shifts for a specific voice type: C F#, B F, Bb E, A Eb, Ab D, G C#.
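That these six register-shift pairs exhaust all twelve tones can be checked by simple counting. The numbering of pitch classes (C = 0, ascending by half-steps) is my own convention, not the author’s:

```python
# Pitch classes as numbers 0-11 (C = 0, C# = 1, ..., B = 11); a Lydian
# interval (tritone) spans six half-steps.
NAMES = ['C', 'C#', 'D', 'Eb', 'E', 'F', 'F#', 'G', 'Ab', 'A', 'Bb', 'B']

# Start from C-F# and lower both tones by a half-step, six times:
pairs = [((0 - k) % 12, (6 - k) % 12) for k in range(6)]
for lo, hi in pairs:
    print(NAMES[lo], NAMES[hi])   # C F#, B F, Bb E, A Eb, Ab D, G C#

# The twelve endpoints are all distinct: six tritones yield the full
# chromatic set of twelve tones.
covered = {tone for pair in pairs for tone in pair}
print(sorted(covered))   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

A seventh step would reproduce F#-C, the inversion of the first pair, which is why the series closes after six.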

This is, of course, not all that could be said about the harmonic ordering of the human voice; but here, in this series determined by voice register shifts, we have a “chromatic scale” of twelve half-tones, derived from the true physical cause. Not only is every tone a register shift (as Eliane Magnan used to demand); they are not all equal. As Lyndon LaRouche first pointed out in “Beethoven as a Physical Scientist,” tones are not “point frequencies,” but more like regions of negative curvature. They can occupy an area, and move (as can, and must, Brunelleschi’s dome), according to the Analysis situs of actual composition (for this reason, well-tempering could not be derived from the keyboard, which is a FIXED tuning). (9) So, Gb can differ from F#, as Pablo Casals clearly understood, but it will differ more or less, as the composition itself requires. (10)

Ironically, though, there are still only twelve tones; all attempts to create quarter-tones, etc., fail.

One of the most exciting ideas ever presented by Lyndon LaRouche on music was in the famous footnote 65 to “The Becoming Death of Systems Analysis,” which applies perfectly to Mozart’s K.475: “The pivot of the entire composition so unfolding, is a conflict in tonality, derived lawfully from those simpler ironies of well-tempered counterpoint, but expressing a clash of ironies equivalent to an ontological paradox in physical science. Thus, it is a physical reality, as represented by the natural (i.e., bel-canto) composition of the natural-determined division of the human singing-voice … which imposes naturally generated ironies and paradoxes upon the formalist’s musical scale.”

Think of that idea: of the cognitive activity of playfully imposing the Lydian-centered harmonic series of the human voice on a lower-order, non-living, more formal harmonic species, and generating ironies and paradoxes throughout; much as the collapse of the real, physical economy, such as U.S. steel production, is posing such paradoxes for the formal, utopian schemes of the Globalists now. K.475, by including the F# in the opening Bach statement, and in the pedal-point series, generates new modalities, unthinkable in, say, a pre-1782 Mozart sonata.

So, the well-tempered system is inseparable from human bel-canto voice registration, from the Classical principle in art, and from the moral intent of actual Classical musical composition, whereas equal tempering implies nothing for composition. Not surprisingly, Schoenberg seized on it as the basis for his so-called twelve-tone system, which throws out the voice, the mind, etc.

Could the Greeks have known this? Could equal tempering have occurred on its own, or only as a Delphic operation against a real discovery? Polyphony is certainly natural (despite textbooks that say it was not thought of until the Eleventh Century A.D., and then as an annoying drone)! But, I’ll bet that human beings were born with natural bel-canto voices then, even as they are now. It’s true it has to be developed; but in the last century, some great singers were “naturals” for whom the “voice” was there. The nature of the best of Greek culture and science would suggest that they would investigate the right areas; and if you’re looking in the right place, it should not be that hard to find.

There is a lot more work to be done here, but let me ask another question: How many people, through a revolution in method, have brought about such a fundamental change in science and art? How many Leonardos, Keplers, and LaRouches are there in history? In our new century, great works of art shall be made by artists who absorb LaRouche’s ideas as a whole.

Lastly, let me leave you with a beautiful thought by Kepler on tempering:

“There is an absurd arithmetic equality at banquets if everybody is seated indiscriminately, with no account taken of sex, condition, or age. On the other hand mere geometric similarity is insipid. For if the learned are put only next to the learned, what good will they do to the unenlightened? If women only next to women, what pleasure will there be? If the rowdy next to the rowdy, who will instill good behavior into them? But if you admit neither blind equality, nor peevish similarity, the proportion will be harmonic. For you will bring it out that the old rejoice to see the young, the men to see the women, the young are ruled by the wisdom of the old, the women by the authority of the men (sic), the sociable stimulate the unsociable … this is not a combination of intact kinds, but to a certain extent an infringement of them, to set up a harmonic proportion. Friendships are given life by harmonic tempering. For what concord is to proportion, that love, which is the foundation of friendship, is to the whole compass of human life.”

Footnotes:

6. In the Nineteenth Century, Helmholtz divided the octave into twelve equal parts by the twelfth root of two. He follows Mersenne and Rameau in basing musical theory on the overtone series of a vibrating string, which is somewhat worse than a monochord. (Mersenne and Descartes also had a phony debate over tempering going on, with Descartes taking the side against temperament.)

In the late 1600s, Werckmeister wrote that he had created one diatonic-chromatic-enharmonic scale. Diatonic vs. chromatic music was another phony debate. (Read “The Case Against Rock,” and other writings from that time, and you will see how we fell into that trap.) These three “genera,” chromatic, diatonic, and enharmonic, were considered incommensurable; the well-tempered system created a Gauss-like congruence, and thus integrated them.

7. Thought processes themselves are highly musical. This is an area that requires a lot of investigation; but, rather than the so-called Mozart effect, I find it very interesting that professionals who work with Alzheimer’s patients and stroke victims find that the musical memory persists, even when memory is otherwise impaired, or gone.

8. The F#-centered voice register series is what determines C at 256. Without that, you have no defense against the arguments of “relative pitch,” which does exist. It is the intersection of the fixed voice register values with the “transposable” keys, which gives each key its unique “color,” and protects us against random transposition.

9. For this reason, Werckmeister had at least three tunings for his well-tempered clavier. Though Kepler identified the difference between living and non-living processes in his “Snowflake” paper, it remained for Bach to discover all these questions of the living bel-canto voice. His son, Emmanuel, makes it clear that bel-canto was the basis even of Bach’s keyboard technique.

10. Plato’s, and Kepler’s, Composer of the Universe requires the same quality of change. In the Seventh Chapter of Book Five of Harmonice Mundi, Kepler must temper the intervals differently for the hard and soft scales. Thus, the well-tempered system is neither a fixed tuning, nor a series of fixed tunings, but requires constant change, as generated by the composition itself!

On The Circles Of Apollonius

By Bob Robinson

Apollonius of Perga (260-170 B.C.), called by the ancient Greeks “the great geometer” for his discovery of the concept of “conic sections,” is a much neglected giant in the history of science. According to Pappus (300 A.D.), he traveled to Alexandria from his birthplace in Asia Minor as a young man, attracted by the ideas of Aristarchus of Samos (310-230 B.C.), who discovered the heliocentric principle in astronomy. Apollonius undoubtedly collaborated with Eratosthenes (284-210 B.C.), who was the librarian in Alexandria at the time Apollonius was there. Indeed, it would be fair to call him the immediate successor of Archimedes and Eratosthenes in geometry and astronomy. His written works, which, except for part of On Conic Sections, have been “lost,” included (according to Pappus) the titles Cutting of an Area, Determinate Section, Tangencies, Inclinations, Plane Loci, and On the Burning Glass. In the latter work, Apollonius demonstrated why only a parabolic, not a spherical, reflector would focus light on a point. He is also known to have developed a sundial with a curved surface, to more accurately determine time. How much of his work was destroyed when Julius Caesar burned down the Alexandria library in 48 B.C., we do not know.

Nevertheless, let us attempt to put a “parabolic focus” on the elementary breakthrough contained in Apollonius’ concept of “conic section.” It is not just the fact that circles, ellipses, parabolas, and hyperbolas are all formed by cutting a cone with a plane. Though Apollonius coined the terms ellipse, parabola, and hyperbola, others before him, including Archimedes, knew these figures were conic sections. Rather, it is the discovery of the significance of the cone, or conical action, itself!

What is the characteristic of the whole cone, as opposed to the characteristics of conic sections? It is the equi-angular, or logarithmic, spiral winding around the entire cone. As far as we know, Apollonius never directly identified the logarithmic spiral as such. But, the intuition Apollonius must have had about the cone was that it gives geometric form, an image, to what we would call the exponential function (raising to higher powers) as the envelope that includes the conic sections. His work is therefore a direct hereditary descendent of the discoveries of Archytas, Menaechmus, and Eratosthenes on the doubling of the cube, and the equally direct hereditary ancestor of Leibniz’ work on the catenary, Gauss’ work on complex numbers and residues, and Riemann’s work on surface functions of a complex variable.

How can we know this? We have to go outside Apollonius’ work On Conic Sections, and situate that work in the broader corpus of his other titles. Consider, for example, a famous construction, probably derived from Apollonius’ work under the title On Plane Loci, called the Circle of Apollonius. The construction is in two dimensions, and runs as follows.(See for diagrams.)

Construct a triangle with vertices A, B, and C. Bisect the angle at C, and find the point C’ where CC’ intersects AB. Unless the triangle is isosceles, with sides AC and BC equal, CC’ will not bisect AB, but AC/BC = AC’/BC’. Now, extend AC past C to some point E, and bisect angle ECB to intersect the extension of AC’B at some point D. Then, AC/BC = AD/BD. Also, since angle ACE is a straight line (contains 180 degrees), and is equal to angle ACB plus angle ECB, angle C’CD (being 1/2 of angle ACB plus 1/2 of angle ECB) is a right angle (contains 90 degrees).

Next, construct a circle with C’D as diameter. Because C’CD is a right angle, triangle CC’D will be inscribed in the semicircle with diameter C’D, and C will be somewhere on the circumference of that circle. In short, the locus of all the points whose distances from points A and B have the same proportion as AC to BC, will be a circle, which has come to be known as the Circle of Apollonius.
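The proportions in the construction can be verified numerically. The coordinates below are an arbitrary illustrative choice of mine (any non-isosceles triangle works), not part of Apollonius’ text:

```python
from math import dist, isclose

# Illustrative coordinates (my choice): A and B on a line, C above it.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 2.0)
ac, bc = dist(A, C), dist(B, C)

# Foot of the internal bisector of angle C divides AB so AC'/BC' = AC/BC:
Cp = (ac / (ac + bc) * B[0], 0.0)
# The external bisector meets line AB at D, dividing it externally in
# the same ratio, so AD/BD = AC/BC as well:
D = (ac / (ac - bc) * B[0], 0.0)

assert isclose(dist(A, Cp) / dist(B, Cp), ac / bc)
assert isclose(dist(A, D) / dist(B, D), ac / bc)

# C lies on the circle with diameter C'D (angle C'CD being right):
M = ((Cp[0] + D[0]) / 2, 0.0)   # center of the Circle of Apollonius
r = dist(Cp, D) / 2             # its radius
assert isclose(dist(M, C), r)
print("distance from center to C:", dist(M, C), "radius:", r)
```

Varying C while holding the ratio AC/BC fixed traces out the same circle, which is the locus property stated above.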

Furthermore, there are three distinct such circles associated with any triangle, depending on whether we bisect the angle at A, B, or C to begin the construction. Those three circles pass through two common points, and all three will be orthogonal to (intersect at right angles) the circle circumscribed around triangle ABC!

Thus, Apollonius constructed a planar system of multiply connected circular action, which creates an orthogonal relationship (right angles) at the intersection of those circles. If one were located in the small area around those intersections, one would seem to be surrounded by the square grid of Euclidean space! (This is similar to Riemann’s orthogonal intersection of parabolic surfaces.)

Ironically, Apollonius may have been inspired to understand the significance of the three dimensional cone as the envelope for two dimensional conic sections by geometrical constructions in two dimensions which pointed intuitively toward a domain of multiply connected circular action more powerful than three dimensional space!

Archytas and Apollonius Compared

Take, as a related example, the little model of Archytas’ “doubling of the cube” that I recently constructed, and wrote a pedagogical exercise about, in the second week of September. That model, which I know delighted everyone who took the time to look at it, was based on my realization that most demonstrations of Archytas are flawed by failing to rigorously distinguish between two dimensional cross sections of the model, which are constantly changing, and the model as a three dimensional object, which stays the same. The torus, the cylinder, the circle at the base of the cylinder, and the line of intersection of the torus and the cylinder do not change; but the cone, which is typically lumped together with the torus and cylinder, is not a single cone at all, but is constantly changing! That is, the circle, cylinder, torus, and the curve of their intersection do not change, no matter what chord and diameter we are trying to find two mean proportionals between. They are “integral” features of Archytas’ model. But the cones do change; they are the “differential” feature of the model!

The only way to truthfully represent the integral of all the possible cones for every possible cross section of the model was with a sphere, having the circle as its equator. That done, any planar cross section passing through the center of the torus of Archytas’ construction, and formed at a vertical right angle to the original horizontal circle of Archytas’ construction, would, when displayed on a suitable planar surface, show the whole construction “clear as day” to any willing student. In the cross section, two circles would appear, the smaller one a cross section of the sphere, and the larger one a cross section of the torus. In cross section, the upright cylinder appears as a vertical straight line. The laser beam I employ forms the (verbal) action in the model by piercing both the torus and the sphere in a locus forming a cone. The laser appears in the planar cross section as a diagonal line cutting both the smaller and the larger circle, as well as the straight line of the cylinder in cross section, in such a way that both extremes, as well as both mean proportionals, “leap out at you”. The student will look at the cross section, then the model, then the cross section, and so back and forth, until the conception is clear.
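What the construction delivers, numerically, are the two mean proportionals between the given extremes. A quick check with illustrative values (the closed forms below follow from the proportion itself, not from the model):

```python
# Archytas' construction finds two mean proportionals x, y between a
# and b, i.e. a/x = x/y = y/b. For a = 1, b = 2 this doubles the cube.
a, b = 1.0, 2.0
x = (a * a * b) ** (1 / 3)   # x = cube root of a^2 * b
y = (a * b * b) ** (1 / 3)   # y = cube root of a * b^2
print(a / x, x / y, y / b)   # all three ratios agree
print(x ** 3)                # 2.0 to within floating point: the doubled cube
```

The cube root, which no ruler-and-compass procedure in the plane can reach, is exactly what forces the multiply-connected three dimensional construction.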

I am sure Apollonius had a similar experience when, for the first time, he discovered, and then showed students, how cross sections of a cone produce conic sections. Take the ellipse, for example. Who would think that a flat plane cutting a cone diagonally, from a point quite close to the vertex of the cone down to a point where the cone’s diameter has become quite wide, would form a perfectly symmetrical ellipse? Or, who would guess that, while hyperbolas and ellipses may have various shapes, not only the circle, but also the parabola (like a catenary), has only one? These things become clear in cross section.

Apart from this, there is a construction of the conic sections in two dimensions based on the locus of points that maintain a constant proportionality of distance to a fixed point and a fixed line, called the “directrix”. Supposedly, Apollonius never investigated this property of conic sections, despite the glaring analogy with the Circle of Apollonius.

So, what is the “spring” in the discovery that Apollonius made concerning conic sections, or that Archytas made with his “conic sections” of the sphere, torus, and cylinder? It is the paradox, that a true understanding of the three dimensional object only occurs when the differential feature of the object, as represented by multiply connected circular action in even “merely two dimensions”, is shown to be the “force” or “vis viva” behind the resolution of visual anomalies associated with the three dimensional realm.

“How can two dimensions be of higher power than three dimensions?”, you ask. That is the wrong question. Ask, “What in the physical universe is of a higher power than the three dimensions of space or the four dimensions of space time, so that it is possible sometimes for a two dimensional picture to be of a higher power than three dimensional space?”

It occurred to Apollonius that multiply connected circular action, portrayed even on a two dimensional planar surface, could represent more degrees of freedom than the three dimensions of visual space, just as conical action is of a higher power than conic sections!

Greece: Child Of Egypt, Pt. I

Lyndon LaRouche recently described classical Greece as the “child of Egypt.” The great figures of the sixth century B.C., Solon, Thales and Pythagoras, were, in fact, the children of Egypt, each having travelled to Egypt and studied under the Egyptian astronomer- and geometer-priests. Through them, and others, Egypt transmitted a science — a method of knowing the universe which has reached its current height in the works of Gauss, Riemann and LaRouche. Yet, the role of Egypt in relation to science, astronomy and mathematics has been almost universally rejected by modern historians of science, as the following samples show:

” … looking at Egyptian mathematics as a whole, one cannot escape the feeling of disappointment at the general mathematical level. … Babylonian mathematics … did supply a basis for Greek mathematics. … We do not need to set up a hypothesis concerning a lost Egyptian higher mathematics.” from Science Awakening, van der Waerden

” … mathematics and astronomy played a uniformly insignificant role in all periods of Egyptian history … mathematics and astronomy had practically no effect on the realities of life in ancient civilizations.” from The Exact Sciences in Antiquity, Neugebauer

” … The Greeks owed much more to the Babylonians than to the Egyptians.” from Greek Astronomy, Heath

Nor will one find much literal evidence of Egypt’s role in these fields in available, ancient writings. There are only a few written mathematical-scientific papyri that have been discovered, most dating from Egypt’s Middle Kingdom (2000-1800 B.C.), and none from the great Pyramid Age of the Old Kingdom. Of Pythagoras, the central figure in this transmission, there are no extant writings. Nor are there any from other Pythagoreans of his generation.

But, if you look with your mind, instead of with your senses, the evidence is abundant.

A comparison of a passage from Kepler to one from Plato begins the journey. Kepler, in the introduction to Book 5 of the “Harmonice Mundi,” pays homage to the importance of Egypt: “I am free to taunt the mortals with the frank confession that I am stealing the golden vessels of the Egyptians, in order to build of them a temple for my God, far from the territory of Egypt. If you pardon me, I shall rejoice; if you are enraged, I shall bear up. The die is cast and I am writing this book — whether to be read by my contemporaries or not. Let it await its reader for a hundred years, if God himself has been ready for his contemplator for six thousand years.”

Kepler is echoing a passage from Plato’s “Laws,” during which Plato, in the person of the Athenian Stranger, cites the same Egyptian golden vessels: “Then there are, of course, still three subjects for the freeborn to study. Calculations and the theory of numbers form one subject: the measurement of length and surface and depth make a second; and the third is the true relation of the movement of the stars to one another … Well then, the freeborn ought to learn as much of these things as a vast multitude of boys in Egypt learn along with their letters… The boys should play with bowls containing gold, bronze, silver and the like mixed together, or the bowls may be distributed as wholes.”

What is the subject of this boys’ play?: the incommensurable, as the Stranger elaborates next. In questioning Cleinias, he establishes that Cleinias believes he knows what is meant by “line,” “surface,” and “volume.” Then:

“Ath: Now does not it appear to you that they are all commensurable (measurable) one with another?

Clein: Most assuredly.

Ath: But suppose this cannot be said of some of them, neither with more assurance nor with less, but is in some cases true, in others not, and suppose you think it is true in all cases: what do you think of your state of mind in this matter?

Clein: Clearly, that it is unsatisfactory.

Ath: Again, what of the relations of line and surface to volume, or of surface and line one to another; do not all we Greeks imagine that they are commensurable in some way or other?

Clein: We do indeed.

Ath: Then if this is absolutely impossible, though all we Greeks, imagine it possible, are we not bound to blush for them all as we say to them: Worthy Greeks, this is one of the things of which we said that ignorance is a disgrace?”

With this brief section of the “Laws,” Plato has given us the essence of the “who” and the “what” behind the development of classical Greece: the “who” is Egypt, the “what” is a geometrically-grounded mathematics, for which the questions involving the incommensurable were primary. Plato unpacks the various paradoxes which deal with the incommensurable in the Meno, the Theaetetus, and the Timaeus.

Most readers will be familiar with Plato’s “introduction” of the problem in the Meno, that the diagonal of the square is incommensurable with its side. That Socrates is threatened for his method, in the course of the dialogue, by Anytus (who later helps precipitate his trial and execution), perhaps foreshadows Kepler’s recognition that some will be “enraged” by such ideas.

But it is in the Theaetetus and the Timaeus that Plato establishes, directly, the debt to Egypt. The Theaetetus begins to introduce the necessary concept of “power” or dunamis. The power which creates a square or a cube is an action in the universe, an action knowable to the mind, but not reducible to the sense-certainty numbers of the visible domain. The two characters in this dialogue, besides Plato, are two real geometers who made fundamental breakthroughs. The older of the two, Theodorus, comes from the Greek-Egyptian city of Cyrene, a city on the western edge of Egypt, and dominated by the Temple of the Egyptian god, Zeus Ammon. Theodorus is the teacher of the young Theaetetus who goes on to discover the uniqueness of the five Platonic solids.

In his masterwork, the Timaeus, Plato is even more direct in identifying Greece’s debt to Egypt. Plato opens the dialogue by having Critias tell of Solon’s trip to Egypt and his instruction by the priests of Heliopolis. When the priests chide Solon that the Greeks are children, and have no knowledge of ancient things, they tell Solon that Egyptian knowledge and civilization extend back 9000 years (hence, to about 9600 B.C.). With that introduction, Plato unfolds his composition on the universe, in a very Pythagorean discussion of astronomy, harmony and geometry.

Indeed, Pythagoras was the key figure in the transmission of Egyptian knowledge to Greece. The sixth century B.C. was the century of Solon, Thales and Pythagoras, and was the century in which the leadership in this method of thinking passed from Egypt to Greece. Iamblichus, a third century A.D. biographer of Pythagoras, wrote that it was Thales, the Ionian scientist, who deployed Pythagoras to Egypt:

“When he had attained his eighteenth year, there arose the tyranny of Polycrates; and Pythagoras foresaw that under such a government, his studies might be impeded…. So by night he privately departed (from the island of Samos) … going to Pherecydes, to Anaximander the natural philosopher, and to Thales at Miletus…. After increasing the reputation Pythagoras had already acquired, by communicating to him the utmost he was able to impart to him, Thales, laying stress on his advanced age, advised him to go to Egypt, to get in touch with the priests of Memphis and Zeus (priests of Ammon, ed.). Thales confessed that the instruction of these priests was the source of his own reputation for wisdom, while neither his own endowments nor achievements equalled those which were so evident in Pythagoras. Thales insisted that, in view of all this, if Pythagoras should study with those priests, he was certain of becoming the wisest and most divine of men…. He (Pythagoras) visited all of the Egyptian priests, acquiring all the wisdom each possessed. He thus passed twenty-two years in the sanctuaries of the temples, studying astronomy and geometry, and being initiated in no casual or superficial manner in all the mysteries of the Gods.”

Working back from Plato’s various identifications of Egypt as the wellspring of a geometrical, astronomical and harmonic tradition which is embedded in the study of incommensurables, to the history of the sixth-century B.C. travels and studies of Solon, Thales and Pythagoras, one might ask van der Waerden and his cothinkers why they think that Egyptian higher mathematics is either “lost” or non-existent. Perhaps, as Kepler suggests, it is the rage induced by living inside a reductionist’s mind, that can only see the shadows cast on the cave wall.

A future pedagogical will “let the stones speak” of Egyptian astronomy.

On Polygonal Numbers [; And So On]

Larry Hecht

Diophantus, who lived probably around 250 A.D., wrote a book called {On Polygonal Numbers,} of which only fragments remain. One of the famous fragments refers to his work on a definition by Hypsicles, an earlier Greek mathematician, concerning polygonal numbers. Working out what Diophantus means in this short fragment proves quite interesting, and relevant to the topics we have been discussing in this series. I will first give you a translation of the fragment from Diophantus. Don’t worry if it seems incomprehensible at first. We will construct it, and then it will all be quite clear.

Diophantus writes:

“There has also been proved what was stated by Hypsicles in a definition, namely, that `if there be as many numbers as we please, beginning from 1 and increasing by the same common difference, then, when the common difference is 1, the sum of all the numbers is a triangular number; when 2, a square number; when 3, a pentagonal number. The number of angles is called after the number which exceeds the common difference by 2, and the sides after the number of terms including 1.'”

To understand what he means, let’s take the most familiar case, that of the square numbers. Most books discussing this subject (sometimes referred to as the “figurate numbers”) draw dots as illustration; but there is a flaw in this, which you will understand after we have done the complete construction. It is far better to find some square objects, or cut them out of paper. Using these square tiles as the units, you will discover that only certain numbers of tiles go together into squares. The first grouping is 1, the second 4, and the third 9. But you should construct this for yourself, for it is already telling you something important about a certain kind of bounding condition, which interested Kepler very much.

Now, cut out some equilateral triangles and do the same thing–that is, make triangular numbers. This is a little less familiar, so I will illustrate how to count in triangles for you:

    /\      1                  /\/\/\/\    4

    /\/\    2                  /\/\/\/\/\  5

     /\                          /\
    /\/\    3 (2-triangled)     /\/\
                               /\/\/\     6 (3-triangled)

You see that the first three triangular numbers are 1, 3, and 6. These have sides of lengths 1, 2, and 3, just as the first three square numbers (1,4,9) do. You might notice that there are also holes in these numbers, which the squares did not have. There is no need to worry about them. You will see by the end, why they must be there.

Finally, we come to the pentagonal numbers. Now, you must cut out at least 5 equal pentagons, although 12 would be better. Here the fun began for me: to figure out what 2-pentagoned would look like. As I don’t want to spoil it for you, I will not say right here, but let you pause and puzzle over the construction a bit. For now, I will give you the numerical values: the first three pentagonal numbers are 1, 5, and 12.

Now, it is easy to see from these constructions what Hypsicles had discovered, and described in words. We can illustrate it in the following series:

Triangular numbers (common difference = 1)
	Series: 1  2  3  4   5   ...
	Sums:      3  6  10  15  ...

Square numbers     (common difference = 2)
	Series: 1  3  5  7   9
	Sums:      4  9  16  25

Pentagonal numbers (common difference = 3)
	Series: 1  4  7  10  13
	Sums:      5  12 22  35

In each case, we start with one, and increase by the common difference, characteristic for the series. The sum of the numbers in the series is the number of tiles we had to employ to make the triangular, square, or pentagonal numbers.
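Hypsicles’ verbal rule lends itself to a quick numerical check. A minimal sketch in Python (the function name `polygonal` is my own, not from the text), summing the arithmetic series just as the tile construction does:

```python
def polygonal(sides, n):
    """n-th polygonal number, per Hypsicles: the sum of n terms
    beginning from 1 and increasing by a common difference which
    is 2 less than the number of angles."""
    d = sides - 2
    return sum(1 + k * d for k in range(n))

print([polygonal(3, n) for n in range(1, 6)])  # triangular: [1, 3, 6, 10, 15]
print([polygonal(4, n) for n in range(1, 6)])  # square:     [1, 4, 9, 16, 25]
print([polygonal(5, n) for n in range(1, 6)])  # pentagonal: [1, 5, 12, 22, 35]
```

The printed series reproduce the sums in the table above; the point of the construction, of course, is precisely what such a calculation leaves out.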

This may all seem innocent enough, but there is a “fighting” matter of epistemology buried within. It is the same point which Gauss addresses from a more advanced standpoint, in his refutation of Euler, Lagrange and d’Alembert’s attempts to prove the Fundamental Theorem. Namely, do we accept any notion of number, or operations on number, that is not constructible, or subject to “constructible representation” (as Gauss once described the same issue respecting a matter in physics)? It is not only a fighting matter for us. Our enemies also get very upset over the issue. I recognized how much so, after I contemplated why the translator of the Loeb Classical Library Edition {Greek Mathematical Works, II} felt it necessary to add the bracketed phrase “[; and so on]” following the words “when 3, a pentagonal number” in the citation from Diophantus that I gave above. If Hypsicles or Diophantus had wished to say “and so on,” why would they not have done so? Sir Thomas Heath, the leading British commentator on these matters, finds it a shortcoming that Hypsicles had not gone further than the pentagonal number, and claims that what Hypsicles was really showing was how the n-th term of a series, with any common difference, could be determined.

Yet, anyone who has properly considered the significance of the Platonic solids, and stuck to the principle of mathematical rigor employed by both Gauss and his Greek predecessors, would immediately recognize why Hypsicles stopped at the pentagon. What is being considered is not a math-class game of number series, which seem to go on forever to a bad infinity, but a process of examining the lawful constructibility of number. There is a clue to this also in the {Theaetetus} dialogue of Plato, which had been in the back of my mind, as I wondered what was getting Heath and company so worked up. Consider how Theaetetus describes there, in his examination of the problem of incommensurable numbers, the generation of the numbers 1, 2, 3 as the sides of the square numbers 1, 4, 9. He calls the numbers 1, 2, 3 “powers” (where we were taught to call them “roots”), because they have the “power” to generate squares, the singularity under examination in this case. The point in both cases, is that number must be lawfully constructed, and it is obvious that Hypsicles was doing so by examining the paradoxes generated by the Platonic solids.

So, let us now see what happens, if we take these polygonal numbers into the next dimension. The case most familiar to us is that of the square turning into a cube. Thus 1-cubed is 1, 2-cubed is 8, and 3-cubed is 27. (Remember, we are not doing a multiplication table operation, but a construction.) What, then, is the equivalent construction for the other polygons? We can see the case for the triangle most easily, if we now build ourselves four tetrahedra (that is, the Platonic solid made of four equilateral triangles), using triangles of the same size as those we cut out for the construction of the triangular numbers. Construct again the triangular number three, and place a tetrahedron atop each of those triangles. Then, place one more tetrahedron at the summit. Examining the solid so constructed, you will see that it has sides of length 2 in every direction–hence we have constructed 2-tetrahedroned. You can figure out for yourself, what 3-tetrahedroned would be [; and so on].
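The stacking construction can be mirrored in a few lines of arithmetic: each solid triangular number is a running sum of the plane triangular numbers, just as the tetrahedra are stacked layer by layer. A sketch (the names are my own, hypothetical choices):

```python
def triangular(n):
    # n-th triangular number: rows of 1, 2, ..., n tiles
    return n * (n + 1) // 2

def tetrahedral(n):
    # stack triangular layers of sides 1, 2, ..., n
    return sum(triangular(k) for k in range(1, n + 1))

print([tetrahedral(n) for n in range(1, 5)])  # [1, 4, 10, 20]
print([n**3 for n in range(1, 5)])            # cubes: [1, 8, 27, 64]
```

Note that tetrahedral(2) = 4, agreeing with the count in the construction: three tetrahedra on the base and one at the summit.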

You might have noticed that there was a hole on the inside of the figure 2-tetrahedroned. That space in there turns out to be an octahedron, by the way. We also had those holes in the plane when we built the triangular numbers. This is telling us something interesting about the tiling of the plane, and the filling of space. Only squares and hexagons, among the regular polygons, can tile the plane, and only cubes and rhombic dodecahedra, among the regular (or quasi-regular) solids can fill space without gaps, which you can investigate for yourself, as Kepler did to his great delight. If you try to tile the plane with pentagons, you notice that when three come together at a point, there is an overlap. That is the key to constructing the figure 2-pentagoned, which I left for you to figure out earlier. You must break the unwritten rule in your mind, and allow yourself to overlap the sides.

Now, if tetrahedra do not quite fill space, but leave gaps, and cubes just manage to fill it up, you might expect that dodecahedra would go too far, and overfill it, just as the pentagons overtiled the plane. If you have now constructed your 2-pentagoned figure, with the overlapped sides, you can try your luck at placing a dodecahedron atop each of the five overlapped pentagons, and another dodecahedron atop each of these, to produce the number 2-dodecahedroned. You will see that, just as the pentagons had to overlap, so the dodecahedra must overlap, or interpenetrate, and so the figure 2-dodecahedroned will be of a different type than the cubic or tetrahedral numbers.

Those of you who know why there cannot be more than five regular solids, will now see why Hypsicles stopped at the pentagon. For, while the series with increasing common differences can be extended out to a bad and boring infinity, the interesting paradoxes are not going to arise, unless we have a concept of a constructive process for these numbers.*

Before closing, and since you have all the materials at hand, let us review why there can be only five regular solids. It is a famous proof, given by Kepler. The regular solids are, in fact, the plane projections of the figures produced by tiling the sphere, and there are five ways to do it. As plane-faced solids, they must have regular polygons for their faces, so the problem is reduced to great simplicity by considering only how many of these figures may come together at a vertex. Start with the equilateral triangle. Three of these may be joined at a point, and brought together into a sort of cup, so that they could hold water. This will become the vertex of the tetrahedron. Four triangles may also be brought together, and cupped; they will form the vertex of the octahedron, which looks like two Egyptian pyramids brought base to base. Five triangles may also be brought together and cupped; they form the vertex of the 20-sided icosahedron. However, when six triangles are brought together, it is seen that they just lie flat, and cannot be made into a vertex of anything solid. Next, we try the square, and find that three can be brought together and cupped into a vertex of what becomes the cube. But four are too many; they lie flat. Three pentagons lying in the plane, and joined at a vertex, leave just enough space to be cupped into a vertex of what becomes the dodecahedron. But that is the end of the possibilities, for if we next take a regular hexagon, we find that when three are brought together at a point, they simply lie flat and cannot become the vertex of any solid.
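Kepler’s counting argument reduces to a single inequality: q regular p-gons meeting at a vertex can be “cupped” only while their angles sum to less than 360 degrees. A sketch of that enumeration (assuming nothing beyond the argument just given):

```python
solids = []
for p in range(3, 7):                 # faces: triangle through hexagon
    interior = 180.0 * (p - 2) / p    # interior angle of a regular p-gon
    q = 3                             # at least three faces meet at a vertex
    while q * interior < 360.0:       # flat or overlapping means no solid
        solids.append((p, q))
        q += 1
print(solids)   # [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)] -- five solids
```

The five pairs are the tetrahedron, octahedron, icosahedron, cube, and dodecahedron, in that order; the hexagon contributes nothing, just as in the proof.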

So, you see, there is no “[and so on],” as Hypsicles and Diophantus appear to have understood better than their modern commentators of Oxford erudition. Avoiding “[and so on]” is also good advice for your speaking practice–that you not recite a series of things in sing-song fashion, as so many people do these days, as if there were no lawful cause for their being there. This is nominalism in language, as the idea of number without constructibility is nominalism in mathematics. It is part of the same disease, which we are trying to cure.


* Of such interesting paradoxes, you might consider, as a topic for more advanced consideration, that a prime number is a constructible species in the series of numbers constructed using squares as the tiles. Following Theaetetus’s specification that we allow only square or oblong (rectangular) numbers, a prime is a number which can only be represented as a rectangle of width 1. What, then, is a prime number in the triangular or pentagonal series? What else is peculiar about the square and rectangular numbers?
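Before constructing, one can also experiment numerically with the question just posed. A brief sketch checking which triangular numbers happen to be prime (the helper names are mine):

```python
def is_prime(n):
    # trial division: sufficient for this small experiment
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

triangulars = [n * (n + 1) // 2 for n in range(1, 1000)]
print([t for t in triangulars if is_prime(t)])  # [3]
```

That only 3 appears suggests how differently primality behaves in the triangular species than in the square-tiled one.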

Pierre de Fermat became quite fascinated with the polygonal numbers, and discovered many things about their properties of combination. His famous Last Theorem might be seen as an investigation of the constructible properties of solid numbers of the square, cubic, and higher variety.

Construct a Solar Astronomical Calendar

by Larry Hecht

The evident success of the ongoing project to measure the retrograde motion of Mars, suggested to me that we are ready to take up another challenge in observational astronomy–the construction of a solar astronomical calendar.

This is a challenge that Lyn posed to us nearly 25 years ago, in part influenced by a trip to India, where he came into contact with the work of turn-of-the-century Indian independence leader Bal Gangadhar Tilak. I first began to seriously take up Lyn’s challenge in connection with my own efforts to understand Tilak’s work some time in the mid-1980s. To be honest, I could not understand at first why Lyn kept talking about “constructing a calendar,” which I thought was something easily obtainable at any stationery store. Once I began to understand what was involved, however, I found that this project led in a number of very interesting directions.

Tilak’s work involves the hypothesis that verses in the sacred Vedic hymns refer to astronomical phenomena, which could only be known by a people living at a point at or above the Arctic Circle. His hypothesis immediately brings into play at least three important and interlocking branches of science: astronomy, Indo-European philology, and climatology, all necessarily subsumed under the topics physical economy and universal history. One of the most provocative aspects of Lyn’s discussion on the subject was the hypothesis that a highly-developed, poetical-musical language (such as was indicated by Sanskrit, for example) would be required for the task of recording and preserving astronomical observations over long periods of human history. Rather than the object-fixated grunts of some doomed society of primitive Rave-dancers, forms of verbal action capable of expressing the transformative nature of natural law would be required, including verbal forms capable of expressing the subjunctive mood necessary for any hypothesis, varying degrees of completion of action, and many other subtleties.

Attempting to read Tilak’s {Arctic Home in the Vedas,} however, produced some immediate problems. Early in the book, the author began talking about astronomical phenomena, such as the precession of the equinox, the seasonal motions of the Sun, and the relationship of the Sun to the zodiacal constellations, which, I soon realized, I had no real understanding of. To make any sense out of his thesis, it was clearly necessary to have some grasp of these things, so I decided at some point to dig in and make the effort. As I had been reading books about astronomy since childhood, it was something of an embarrassment to have to admit to myself that I could not even explain the meaning of the seasons in any cogent way. A joke which Jonathan T. had been making at that time had stuck in my mind, and was helpful in overcoming the embarrassment. The joke, which I think he may have included in the title of a Fusion magazine article he wrote at the time, was the phrase “Astronomy without a Telescope.”

A Simple Calendar Observatory

The method I suggest here for constructing a solar astronomical calendar, is not an exact replica of the steps I took. However, I think it will work, and, under the present circumstances, where collective and enthusiastic pedagogical activity is taking place all around, it should allow us to proceed quickly and happily.

I suggest we begin by constructing something which will resemble, in principle, the famous Stonehenge, an historical artifact which has unfortunately taken on all sorts of cult-like significance, but which is actually just one of many still-standing astronomical observatories from the Megalithic period. Our observatory will be much simpler. Probably the most difficult part of this project will be to find a level site with a good view of the horizon, especially to the east and west, to which we can return regularly. The calendar observatory need consist of no more than some stakes in the ground, arranged around part of the circumference of a circle, and one stake at the center.

Now, here is what I suggest we do. On the first day, we make two observations, one at sunrise and one at sunset. We begin by locating a center for our circle, and driving a stake in the ground at that point. This will be the siting post for all the observations. Now, choosing an appropriate circumference for our circle, and using a rope or chain to keep a constant distance, we plant a stake in a line from the siting post to the point where the Sun rises over the horizon. We return before sunset, and similarly drive a stake in the ground on the other side of the circle where we see the sun set.

That simple observation, repeated over the course of a year, will provide us with an experimental understanding of many important concepts in astronomy, including the summer and winter solstice, the vernal and autumnal equinox, and the equation of time (which, by the way, bears a certain relationship to the lemniscate). But this is only a beginning. For, using no more than our simple observatory, we may next begin to observe the motion of the Sun, not only with respect to fixed positions on the ground, but also with respect to the stars. From this we may develop many new concepts, including that of the precession of the equinox, which plays an interesting part in the history of science, which, we shall also come to see, is the history of language.

But we will also have an advantage over our predecessors, who were carrying out such observations probably tens of thousands of years ago, several cycles of glaciation back into the pre-historic past. By use of modern means of communication, we will be able to rapidly compare observations made at widely divergent positions on the Earth. We shall have the great advantage of having access to observations at the high northern latitudes of Stockholm and Copenhagen, the near-equatorial latitudes of Bogota and Lima, and many middle latitude sites in both the Northern and Southern Hemispheres. This will really make for some fun, some paradoxes, and definitely ensure that there is no “right answer” to be looked up in the back of the book.

To start out, I suggest we take the time to explore and secure a good site for our calendar observatory, and begin with the first very simple observation of marking the rising and setting points of the Sun. Between these points, we will have a circular arc on the ground, whose angle can be measured and recorded. It would also be useful to make some observations of the path of the Sun in the sky over the course of a day. From this observation and the position of our two stakes in the ground, we should also be able to come to a clear understanding of the meaning of North, South, East and West, and also of the word Noon. For some added fun, we might try to measure the greatest altitude of the Sun, and observe what time it occurs on our watches.

With the measure of the circular arc between the two stakes in the ground recorded, it will be most interesting to immediately compare the results with those found on approximately the same day at other calendar observatories around the globe, as one could do, for example, on an international youth call. If it should happen that some of the observations should take place around the 22nd of September, a very interesting paradox will arise when the observations from different latitudes are compared. (The path of the Sun and its position at Noon ought also to be observed on that day.) But it will only get more interesting, as the subsequent observations are taken, and compared for the different latitudes.
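The September paradox can be anticipated with one relation of spherical trigonometry: neglecting refraction and the Sun’s finite disk, the azimuth A of sunrise, measured from due north, satisfies cos A = sin(declination)/cos(latitude). A hedged sketch (the function name and the approximate latitudes are my own choices, not from the text):

```python
import math

def sunrise_azimuth(latitude_deg, declination_deg):
    # cos(azimuth from north) = sin(declination) / cos(latitude),
    # ignoring refraction and the finite solar disk
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

# Near the September equinox (declination ~ 0), every site from
# Stockholm to Lima sees the Sun rise due east:
for lat in (59.3, 4.7, -12.0):        # approx. Stockholm, Bogota, Lima
    print(round(sunrise_azimuth(lat, 0.0), 1))   # 90.0 each time

# Near the June solstice (declination ~ +23.4 deg), the rising point
# swings north of east, and much more so at high latitude:
print(round(sunrise_azimuth(59.3, 23.4), 1))
print(round(sunrise_azimuth(4.7, 23.4), 1))
```

The observations with the stakes come first, of course; the formula is only a way of seeing why the comparison across latitudes must produce a paradox everywhere but at the equinox.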

So, let the fun begin.

The Case Of Max Planck

By Philip Rubenstein

If the history of humanity and human knowledge proceeds by crises–by solving the problems and anomalies that arise as we reach the boundaries of our development or knowledge–then might it not be the case that if the path we have chosen at some past point is wrong or in error, that we must, of necessity, go back to that fork in the road and correct that choice? Or, if it is said that we cannot really go back in time, still we must go back and change our choices, our axiomatic orientation, thus to allow an actual change in path from here on, and to see what else was misled as a consequence of that event. It is often just such a rigorous journey that is rejected, sometimes merely out of horror at the labors involved, but also out of fear of what wreckage we may find on the way.

When we look at 20th century science, for all of its accomplishments, it rests, in the main, on achievements derived from the 18th and 19th century continuation of Leibniz’s tradition. In fact, much of the fundamental science of the present obscures that reality, and little of a fundamental nature, but confusion, has been added in this past century, except as derived from that obscured heritage.

If we look to the case of Planck and the attack on him, and his defense by Einstein and by Planck himself (as referred to recently by Lyn, and in Caroline Hartmann’s work), we find a very significant such point, much obscured. And seeing what was obscured is of great importance.

Others know this story far better, and would wish to point out critical areas to pursue fruitfully; but in brief, Planck’s quanta simultaneously upset two groups. The notion put forward by Boltzmann and those who viewed the universe as a simple continuum, was that the absorption and emission of radiant energy would occur in a way conforming to that uniformity. Thus, as an absorbing body was heated, it would emit through all frequencies. Since, however, the upward direction of increasing frequencies was infinitely larger, we would be led to a “violet or ultra-violet catastrophe”: the predominant range, and infinitely so, would be in the upper frequencies. One might note how OFTEN these views lead to catastrophes–Olbers’ paradox, entropy, etc.

In reality, of course, this does not happen. In fact, the emissions peak, and fall off. Like other such cases, a real event is paradoxical from a given set of assumptions.

What solution is available? Planck ultimately, and with great thought, proposed that radiant energy is, in fact, emitted in quanta, such that a constant proportion exists between the frequency of radiation and the quantum in which it is released. That ratio is Planck’s constant, {h}. Thus, as the frequency increases, the energy of each “packet” or quantum likewise increases; thus the work required to emit at that frequency increases, and the condition is bounded such that the “catastrophe” fails to occur.
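The bounding can be seen numerically: Planck’s radiation law agrees with the classical (Rayleigh-Jeans) formula at low frequency, but is exponentially suppressed exactly where the classical curve diverges. A sketch using standard values of the constants (nothing in the code is taken from the text itself):

```python
import math

h = 6.626e-34    # Planck's constant, J*s
k = 1.381e-23    # Boltzmann's constant, J/K
c = 2.998e8      # speed of light, m/s

def planck(nu, T):
    # spectral radiance: the quantum h*nu makes high frequencies costly
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def rayleigh_jeans(nu, T):
    # classical continuum prediction: grows without bound with frequency
    return 2 * nu**2 * k * T / c**2

T = 5000.0   # kelvin
for nu in (1e13, 1e14, 1e15):
    print(f"{nu:.0e} Hz: Planck {planck(nu, T):.3e}, classical {rayleigh_jeans(nu, T):.3e}")
```

At 10^13 Hz the two nearly agree; by 10^15 Hz the Planck value has collapsed while the classical one keeps climbing–the emission peaks and falls off, and the “catastrophe” fails to occur.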

But this is a radical idea! The simple infinite continuum of spreading electromagnetic radiation is now transformed. In some way, individuals–singularities–are formed, and at increasing densities. This concept was anathema.

Further, as this applies to atoms and the like, an apparently predetermined ordering seemed to be imposed–for example, in the placing of electrons in atoms.

This prompted Rutherford to write to Bohr, who made use of this part of Planck’s work: “it seems to me that you would have to assume that the electron knows beforehand where it is going to stop”–a response that resonated with the usual offense that philosophical empiricists feel at the nature of science.

Bohr himself was one of those who nonetheless attempted to contain Planck’s idea within the “either, or” scheme of “wave and particle” of the mechanistic outlook, by saying it is both–embracing contradictions, so to speak.

This path was not unlike the Machians or positivists for whom science is not the search for truth, but merely for logically consistent systems, with which different formulations may be equally acceptable as long as the appearances are saved.

In fact, both the entropists and the positivists recoiled at precisely the concept that inorganic nature exhibited a density and ordering, creating individuals that mediate higher-ordered processes. Thus, the challenge of the turn of the 19th century was placed before us with Planck’s concept, along with other similar ideas in physics, biology, etc. The 20th century chose to REJECT the continuation of the Leibniz tradition.

Planck, who was a conscious Leibnizian, said in his autobiography:

“While the significance of the quantum of action for the interrelation between entropy and probability was thus conclusively established, the great part played by this new constant in the uniform regular occurrence of physical processes still remained an open question. I therefore tried immediately to weld the elementary action, {h}, somehow into the framework of classical theory. But in the face of all such attempts the constant showed itself to be obdurate....

“My futile attempts to fit the elementary quantum of action somehow into the classical theory continued for a number of years and they cost me a great deal of effort. Many of my colleagues saw in this something bordering on a tragedy. But I feel differently about it, for the thorough enlightenment I thus received was all the more valuable. I now knew for a fact that the elementary quantum of action played a more significant part in physics than I had originally been inclined to suspect, and this recognition made me see clearly the need for the introduction of totally new methods of analysis and reasoning in the treatment of atomic problems.”

Einstein, brought to Berlin by Planck, contributed to developing Planck’s idea (of which relationship much of significance could be said for the political history of the 20th century). In a speech on Planck’s 60th birthday in 1918, he said:

“The supreme task of the physicist is to arrive at those universal laws from which the cosmos can be built up by pure deduction. There is no logical path to these laws; only intuition, resting on sympathetic understanding of experience, can reach them. In this methodological uncertainty, one might suppose that there were any number of possible systems of theoretical physics all equally well justified; and this opinion is no doubt correct, theoretically. But the development of physics has shown that at any given moment, out of all conceivable constructions, a single one has always proved itself decidedly superior to all the rest. Nobody who has really gone deeply into the matter will deny that in practice the world of phenomena uniquely determines the theoretical system, in spite of the fact that there is no logical bridge between phenomena and their theoretical principles; this is what Leibniz described so happily as a `pre-established harmony.’ Physicists often accuse epistemologists of not paying sufficient attention to this fact. Here, it seems to me, lie the roots of the controversy carried on some years ago between Mach and Planck.

“The longing to behold this pre-established harmony is the source of the inexhaustible patience and perseverance with which Planck has devoted himself, as we see, to the most general problems of our science, refusing to let himself be diverted to more grateful and more easily attained ends. I have often heard colleagues try to attribute this attitude of his to extraordinary will-power and discipline–wrongly, in my opinion. The state of mind which enables a man to do work of this kind is akin to that of the religious worshipper or the lover; the daily effort comes from no deliberate intention, or program, but straight from the heart.”

Much of the obfuscation of the 20th century could be corrected by going back to the point where the mistaken path was foisted upon us by the likes of Bohr and formalism, and discovering the true nature of Planck’s contribution.

Living Chemistry

by Brian Lantz

Recall that yellowing periodic table, hanging on a wall in your science classroom, or perhaps the color-coded version that appeared at the back of your chemistry textbook. You read it in that textbook: modern science bows in the direction of Dmitri Ivanovich Mendeleyev, and gives him credit for the discovery of the periodic table of elements. Ask yourself whether “textbook” science understands Mendeleyev at all. The answer may not be known to you, and that, perhaps, will pique your curiosity. What is taught today, of the actual methods of Lavoisier, Pasteur, Mendeleyev? Do we know anything of those methods of Mendeleyev, which led him to his famous discovery? He knew nothing about electron shells, which explained the periodic table in your textbook. Consider that his writings are now virtually nonexistent in English, and only scantily available, or studied, anywhere in our noosphere. Perhaps, a benefit derived from this pedagogical series will be an appreciation of the methodological “roots” of that enormous chemical knowledge bequeathed to modern society, and further recognition that only if noetic methods are applied, might we reverse the very definite, measurable entropic effects of ignorance!

Consider the following comment of Dmitri Ivanovich Mendeleyev (1834-1907) – a correspondent of Pierre and Marie Curie, and intellectual predecessor of Vernadsky – taken from a lecture before “The Royal Institution of Great Britain,” May 31, 1889. He is speaking of his periodic table, whose “groups,” “families,” and “periods” reveal the periodic ordering of the elements.

“The tendency to repetition – these periods – may be likened to those annual and diurnal periods with which we are so familiar on the earth. Days and years follow each other, but, as they do so, many things change; and in like manner chemical evolutions, changes in the masses of the elements, permit of much remaining undisturbed, though many properties undergo alteration. The system is maintained according to the laws of conservation in nature, but the motions are altered in consequence of the change of parts.”

Can we not surmise that, like Kepler, Mendeleyev appears to have plumbed the universe, and found it alive with {intention}? Dmitri’s lecture was entitled, “An Attempt To Apply to Chemistry One of the Principles of Newton’s Natural Philosophy.” In that lecture, he stated that {only one} of Newton’s three laws of motion could be applied to chemical molecules, and he thanked Lavoisier (and also Dalton) for recognizing, in “the unseen world of chemical combinations,” {the same orderings} which, he pointed out, Kepler – and, he said, Copernicus – discovered in the planetary universe.

We will return to dialogue with our new-found friend, Dmitri Mendeleyev, soon. In this and following pedagogicals, we prepare the way by considering some of the chemistry of the seventeenth and eighteenth centuries, and particularly the revolution worked by Mendeleyev’s ‘friend’, Antoine Laurent Lavoisier (1743-1794).

What Is Elementary?

Today, the typical chemistry textbook begins from discrete “building blocks.” These discrete parts, presented as self-evident in-and-of-themselves, are ripped from the larger cycles, ‘periods,’ and evolutions of which Mendeleyev spoke. They can only appear as if dead: Elements are compiled from atoms, which in turn are differentiated by their atomic number, etc. Molecules are then built up out of combinations of these discrete elements which, we have just been told, are not really so elementary. Then only, interactions of molecules – “inorganic” by their nature – are built up. Today, the colors and techniques of computerized graphics present this all vividly to the eye, but no more alive. The principle of life is really nowhere to be found. Lyn has pointed us to Erwin Schrodinger’s influential little paper, {What is Life?}, and there you may find a banal, lifeless rationalization: life comes down to chromosome fibres, which are “aperiodic crystals,” albeit “novel and unprecedented.”

How refreshing then, to consider that the scientific revolution associated with Lavoisier and his circles, which in turn was also the acknowledged foundation for Mendeleyev’s work, began with the study of {respiration}, and what Lavoisier (borrowing from Stephen Hales) termed “plant and animal economy” – life! Lavoisier’s conscious jumping-off point in 1773, as a guide to his future work, was the study of fermentation, vegetation, respiration, and the composition of bodies formed by plants and animals. The development of the scientific field of chemistry proceeded from the study of {life} as certainly as the physical sciences, taken as a whole, began with the study of the heavens (astrophysics).


Of course mankind had long had a practical understanding of many natural, chemical processes. Man has been making wine and beer for thousands of years, to generally good effect, but that is not science. Today, many of our post-modern denizens might find the idea of the discovery of oxygen an ‘intuitive’ no-brainer: “Hey, it’s what we breathe, and somebody named the stuff oxygen.” Thank God that Leibniz, Franklin, Priestley, Lavoisier, among others, understood that the development of physical economy required that man discover new physical principles, not name them! Let us lay a foundation. Consider now, albeit briefly, a few provocative examples of early work, prior to Lavoisier, into the whys of chemical and physical processes.

It had long been known that an animal could only live for a certain length of time in a given quantity of common air. But why? In 1660 Robert Boyle demonstrated that a flame is extinguished and an animal dies in an evacuated chamber of an air pump. Is there a connection between these two empirical facts, and what is it? In the 17th century it was also shown that venous blood becomes arterial in passing through the lungs, and that the color change takes place only so long as the lungs are supplied with fresh air. However, it was also known that air {in} the blood could be fatal – certainly a paradox. It was also thought, based on no small amount of empirical evidence, that air – one of the four physical elements along with fire, water and earth – did not enter into chemical combinations. To the practitioners of the principle of sufficient reason, the contradictions and paradoxes were everywhere!

The answers, as we will discover, were not “right in front of their noses.” The efforts to carefully isolate the essential paradoxes required painstaking work; the proofs were indirect, the actual experiments tedious, and the means cognitive. Facts did not, and do not, “add up.”

By the middle of the 18th century, work on the chemistry of “airs” prompted the consideration of new postulates, if not revolutionary new axioms. Joseph Black isolated what he coined “fixed air” – a distinct “aeriform” substance which, unlike ordinary air, could combine (“fix”) with lime and with alkalis. This fixed air was deadly; observation found that animals placed in it died in a matter of seconds. Joseph Black then convinced himself that the exhaled air of respiration was the same as his fixed air, “that the change produced on wholesome air by breathing it, consisted chiefly, if not solely, in the conversion of part of it into fixed air. For I found, that by blowing through a pipe into lime-water, or a solution of caustic alkali, the lime was precipitated, and the alkali was rendered mild.” Black also found that fermentation and burning charcoal produced his “fixed air.” Air obviously entered into chemical combinations.

We leave it to the reader to investigate what modern chemistry would say about the process(es) involved here. (“Lime-water” is made up from the mineral, not the fruit.) We do see, even without satisfying the itch to look into a chemistry textbook, that Joseph Black, among others, was onto something. Respiration produced a kind of gas, which combined (“fixed”) to lime, but why?

To ordinary air and “fixed air” were soon added others. “Inflammable air” was produced by certain metals in dilute acids, and rigorously determined to be distinct from both common and “fixed” air – including by observing its effects on animals. Even though it was not known what animals (and humans) inhale or exhale, or the actual role of respiration in physiology, the effects of “airs” on respiration were an obvious reference point!


Enter Benjamin Franklin’s student and collaborator, Joseph Priestley, who became, by the early 1770s, the most determined investigator of new “species” of airs. Joseph Priestley was among those who became intrigued by an experiment first done decades earlier: Placing a small animal under a glass inverted over water, he observed that its breathing caused the water level to rise in the glass, up to 1/27 (or thereabouts) of the total volume of the common air originally enclosed. The air diminished in volume! The “common air” we breathe, Priestley hypothesized, drawing upon his wide-ranging work with various airs, was “disposed to deposit one of the parts which compose it.”

That air might be a composite was, in itself, a potentially axiom-busting notion. Priestley, who studied putrefaction and compared it, through experiments, with respiration, also did not believe that Joseph Black, et al., had proven that “fixed air” alone was created by respiration. “Animal and plant substances which are corrupted furnish putrid emanations, and fixed air or inflammable air, according to the time and circumstances,” reported Priestley, a not unimportant observation, as we will see.

Further, in studying the effect of respiration on air, which he originally understood to be a “corruption or infection” of the air, he rigorously reported, “There is no one who does not know that a candle can burn only a certain time, and that animals can only live for a limited time, in a given quantity of air; {one is no more familiar with the cause of the death of the latter than with that of the extinction of the flame under the same circumstances, when a quantity of air has been corrupted by the respiration of animals placed within it}.” [Emphasis added -bl]

Let us pause, along our trail, leading up to Lavoisier and his work. We have seen that types of airs – almost entirely ‘invisible,’ directly, to the senses – were now being differentiated, and compared. The ability of some airs to “fix” to certain known substances had also been recognized – indirectly. We have seen that the exhaled air of respiration had features comparable to that produced by fermentation and by the burning of charcoal. Also, whatever air, or change in air, caused a candle to go out, in almost every case also killed a mouse or bird! (They also found that the animal, if removed from the bell jar, could often recover.) All of this is indirect – non-empirical – as we have seen with the lime-water experiments, but perhaps most transparently with the rise in the water level, in the bell jar, with the respiring mouse – a kind of barometer.

Consider now a stunning contribution, ‘holistic’ in nature, from Joseph Priestley: Priestley tenaciously believed that “nature must have a means” of reversing the process of respiration which “corrupted” ordinary air! Why? As animals died if exposed only to the corrupted air (or ‘fixed air’) expelled in respiration, Priestley argued, the mass of the atmosphere would have long ago become inhospitable for the sustenance of animal life! Basing himself on this certainty – that, in effect, the universe was not entropic, but rather ‘the best of all possible worlds’ – and testing the effects that plants might have on the “corrupted air” of man and animal, Priestley discovered that green plants restored this corrupt air to respirable common air! Here was a cycle, discovered among “airs,” as certain as those to be found in the orbits of the planets.


Lavoisier, who warmly admired and carefully studied Joseph Priestley’s ongoing work, and was himself a part of Franklin’s extended network, shared Franklin’s and Priestley’s underlying, if unstated, {Leibnizian} outlook.

In his early review of Priestley’s work, and undertaking his own experiments to confirm Priestley’s, Lavoisier recognized apparent, crucial anomalies in Priestley’s results, as based on Priestley’s own thorough, well-circulated reports. Using “baths” of mercury (first employed by Priestley), rather than water, in which a glass bell of “airs” could be contained and changes in their volume measured, Lavoisier drew certain distinctions. Lavoisier noted, in particular, the difference between airs from putrefying animal matters (which, in what follows, Lavoisier designates as the “fixed air”) and that of respiration, and the difference of both from common air:

“Air which has thus served for the respiration of animals is no longer ordinary air: it approaches the state of fixed air, in that it can combine with lime and precipitate it in the form of calcareous earth; but it differs from fixed air (1) in that when mixed with common air it diminishes the volume, whereas fixed air increases it; (2) in that it can come into contact with water without being absorbed; (3) in that insects and plants can live in it, whereas they perish in fixed air.”

In short, Lavoisier noted that exhaled air, and what he here distinguishes as fixed air, may also be distinct “airs.” You may have already leaped to conclusions, or tried to, calling up terms like “CO2,” “nitrogen,” etc. Stop yourself and consider what you actually know – have discovered – about the phenomena in question. Relax, and place yourself in the shoes of Joseph Priestley and Antoine Lavoisier. After all, how could {we} prove, for example, something which we probably all assume: that these different “airs” are actually, elementarily, different airs, as opposed to being different “fluxes” or “variations” of a single air, under varying conditions of moisture, light, pressure, etc.? That was still something that Priestley and Lavoisier had not yet answered for themselves.

Let us jump ahead, to Chapter II of Lavoisier’s {Traité élémentaire de chimie}, published in Paris in 1789, to also appreciate Lavoisier’s universalizing, non-empirical standpoint, alongside that of Joseph Priestley. Lavoisier was to coin the term ‘gases’ to replace the more confusing term ‘airs,’ as we will see. In Chapter I, Lavoisier outlined his working premise of an underlying process in nature by which there is “separation of particles of bodies, occasioned by caloric.” (Caloric (heat) was understood by Lavoisier to be a substance, itself a gaseous state of matter.) Here then, just from the second chapter, is what he writes:

“These views which I have taken of the formation of elastic aeriform fluids or gasses, {throw great light upon the original formation of the atmospheres of the planets, and particularly that of our earth}. We readily conceive, that it must necessarily consist of a mixture of the following substances: First, of all bodies that are susceptible of evaporation, or, more strictly speaking, which are capable of retaining the state of aeriform elasticity in the temperature of our atmosphere, and under a pressure equal to that of a column of twenty-eight inches of quicksilver in the barometer; and secondly, of all substances, whether liquid or solid, which are capable of being dissolved by this mixture of different gasses.”

[Emphasis added-bl]

Lavoisier then writes that, {to better consider the issues involved}, one might consider, “If, for instance, we were suddenly transported into the region of the planet Mercury, where probably the common temperature is much superior to that of boiling water”, and pressures would also be transformed. For Lavoisier, no Aristotelian or neo-Aristotelian division exists, between heaven and earth, or between macrocosm and microcosm! Lavoisier concludes Chapter II with an hypothesis regarding the possible “inflammable fluids” that might exist in the lighter upper strata of air (atmosphere), and their relationship to “the phenomena of the aurora borealis and other fiery meteors.” [Emphasis added – bl]

To be continued…

On the political economy of the Leibniz-Franklin-Priestley tradition, the interested reader is referred to the February 9, 1996 EIR feature, “Leibniz, Gauss shaped U.S. science successes”.

Living Chemistry, Part II


In his private memorandum of February 1773, Antoine Lavoisier stated that it was “the operations of the plant and animal economy,” together with “the operations of art,” which absorb and disengage air. Lavoisier continued, “one of the principal operations of the animal and plant economy consists in fixing the air, in combining it with water, fire, and earth in order to form all of the compounds with which we are acquainted.” Can we consider this vantage point a foreshadowing of Vernadsky’s much later discovery of the ordered phase-space relationship of the noetic, to the biotic and abiotic domains? Place Lavoisier’s 1773 statement in context, here simply considering Joseph Priestley’s discovery, acknowledged by Lavoisier, that the functioning of the atmosphere necessarily includes the respiration of plants, as the complement to the respiration of animals and man. Here, the atmosphere itself is a creation of living processes, taken as a totality, and those living processes act on the rest of nature, “in order to form all of the compounds…” Benjamin Franklin’s own work with lightning also comes to mind.

That Priestley and Lavoisier be understood as forerunners of Vernadsky, as figures united in the simultaneity of eternity, is now of special significance. While anyone familiar with Lavoisier’s work and notebooks would realize that the principle of life is central, today his best known idea is used to promote the opposite. An “axiom” of Lavoisier’s is given the modern, imputed content of systems analysis, a principle of no-change, ruling out the efficient existence of life.

Lavoisier’s Hypothesis

I think that it is very important to quote Lyn, from his latest paper, “A new Guide For The Perplexed – How The Clone Prince Went Mad!” to help us consider Lavoisier’s axiom. This is taken from the section of his paper titled, “The Definition Of Knowledge,” wherein he referred to Kepler’s discovery of universal gravitation and Fermat’s preliminary, experimental definition of the isochronic principle. He writes,

“The solution for such an ontological paradox, is the discovery of a verified hypothesis. By hypothesis, we signify an idea which has the quality, in form, of a universal physical principle. To qualify for the title of hypothesis, that idea must show either that some relevant axiomatic assumption of the believer was false, or that some additional axiomatic assumption, that of the hypothesis, would produce a new system of thought consistent with all of the relevant evidence. If a certain uniquely appropriate quality of design of experiment, shows that that hypothesis is universally correct, we adopt that hypothesis as a universal physical principle. The result of incorporating such an hypothesis as a universally efficient principle, in that way, is not merely the addition of a new universal principle to the system, but also a revolutionary transformation of the system itself.

“Universal physical principles, and non-deductive transformations of systems, effected in that way, qualify as scientific knowledge, as distinct from, and opposed to, sense-impressions. No knowledge was ever acquired, except by means of hypotheses defined as I have just summarized the functional meaning of the term hypothesis, contrary to the famous, silly aphorism of Isaac Newton…”

Lavoisier’s first, explicitly stated “axiom” appears as early as 1775, in the midst of intensive work on the conundrum of “airs.” In a manuscript titled, “Of elasticity and the formation of elastic fluids,” Lavoisier states that it is to be “an axiom” of his method that all substances can exist as solids, fluids, and in “the state of vaporization,” and that “a vaporous fluid is the result of the combination of the molecules of any fluid whatever and in general of all bodies,” with the matter of fire.

This may seem like another “no-brainer” to you, but someone had to actually discover, as a necessary hypothesis, that gases were another form of what we see as liquids and solids! Without recognition of gases as a {state} of matter, to which quantified measurements could be extended, one could no more account for complete chemical processes (reactions) than account for the terror attack on the World Trade Center and the Pentagon by the doings of Osama bin Laden. His hypothesis would show that the process of chemical change was knowable, subject to man’s reason and utilizable for economic development, as opposed to no-change.

Let us now consider Lavoisier’s second axiom, here presented in the context of wine’s chemistry, fermentation.

“This operation is one of the most extraordinary in chemistry: We must examine whence proceed the disengaged carbonic acid and the inflammable liquor produced, and in what manner a sweet vegetable oxyd becomes thus converted into two such opposite substances, whereof one is combustible, and the other eminently the contrary. To solve these two questions, it is necessary to be previously acquainted with the analysis of the fermentable substance, and of the products of the fermentation. We may lay it down as an axiom, that, in all the operations of art and nature, nothing is created; an equal quantity of matter exists both before and after the experiment; the quality and quantity of the elements remain precisely the same; and nothing takes place beyond changes and modifications in the combination of these elements. Upon this principle the whole art of performing chemical experiments depends: We must always suppose an exact equality between the elements of the body examined and those of the products of its analysis.”

From Chapter XIII, “Of the Decomposition of Vegetable Oxyds by the Vinous Fermentation”

Taken from {Elements of Chemistry}, 1789

How many times have we heard, “Matter can neither be created nor destroyed”? Isaac Asimov and many others have popularized Lavoisier’s ‘conservation of matter’ principle – or ‘conservation of total mass’ – as “a closed system” model, in effect a predecessor to the rantings of radical positivist John Von [sic] Neumann.

Taking historical specificity into account however, Lavoisier’s ‘conservation of matter’ “axiom” was a revolutionary supposition, adduced from a newly identified “type” of physical action occurring in the atmosphere, one closely identified with living processes. This new type of action, involving {empirically invisible} cycles and periodicities, is made comprehensible (measurable), as Lavoisier states above, by an experimental decomposition and re-composition of substances, which carefully includes the measurement of the airs that are “fixed” or disengaged in the process. The universe, for man, was increasingly one of multiply-connected action.

Recall that it had still been commonly believed in the 18th century that air was one of four physical elements, along with fire, water, and earth. Air was held to be distinct in part because, it was thought, it did not enter into chemical combination. Certainly there was little visible evidence to suggest that air did.

With gases axiomatically understood as a third state of matter, Lavoisier was able to zero in on a necessary “sufficient cause” of various hitherto mysterious, or misunderstood, phenomena. Lavoisier, famously “with the aid of the balance,” proceeded with the systematic weighing (indirectly) of what he could not see (gases), weighing elements and compounds in their solid or liquid states, and then measuring their reduction (or increase) in weight as they were combined, and/or converted into “airs” filling measurable volumes. Lavoisier developed and utilized most of the instruments and techniques that we think of today when we think of a chemistry laboratory – flasks, retorts, distillation techniques, etc. – and systematically revamped the nomenclature of chemistry to “name” the newly unlocked discoveries of nature’s processes. Has not Lyn been engaged in this same kind of process?
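The bookkeeping “with the aid of the balance” amounts, in modern terms, to simple mass-balance arithmetic: whatever weight a solid gains or loses must be accounted for by an equal weight of invisible “air” fixed or disengaged. The sketch below illustrates only that arithmetic; the weights and the helper function are invented for illustration, and are not Lavoisier’s actual data:

```python
# Illustrative mass-balance bookkeeping in the spirit of Lavoisier's method.
# All figures are hypothetical, chosen only to show the arithmetic.

def mass_of_air_exchanged(mass_before_g, mass_after_g):
    """Positive result: air was 'fixed' into the solid (weight gain).
    Negative result: air was 'disengaged' from the solid (weight loss)."""
    return mass_after_g - mass_before_g

# Calcination: a metal burned in air yields a calx heavier than the metal.
metal = 100.0   # grams of metal before calcination (hypothetical)
calx = 112.0    # grams of calx after calcination (hypothetical)
fixed = mass_of_air_exchanged(metal, calx)
print(f"air fixed during calcination: {fixed:+.1f} g")      # +12.0 g

# Reduction: the calx, reduced back to metal, sheds that weight as air.
reduced_metal = 100.0
disengaged = mass_of_air_exchanged(calx, reduced_metal)
print(f"air disengaged during reduction: {disengaged:+.1f} g")  # -12.0 g

# The axiom: total matter is unchanged on both sides of each operation.
assert abs((metal + fixed) - calx) < 1e-9
assert abs((calx + disengaged) - reduced_metal) < 1e-9
```

The point of the sketch is only that the invisible gas becomes measurable through the weight difference of the visible solids, before and after.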

Let us briefly examine how Lavoisier, working in dialogue with Joseph Priestley, went about laying the basis for this revolution in science and technology.

Weighty Airs

In the early 1770s, Lavoisier prepared systematic reviews of Joseph Priestley’s published reports, a continuing source of new experimental techniques, paradoxes, and results for the world. Lavoisier wrote, “The works of the different authors I have cited, considered from this point of view, have presented me with separate portions of a great chain; they have joined together several links. But there remains an immense series of experiments to carry out in order to forge a continuity…” Lavoisier turned first to {fermentation}.

Priestley, examining the processes of a nearby brewery, became fascinated with the “air” that lay over the liquids in the fermentation vats. He soon announced findings that fermentation produced prodigious amounts of fixed air, “of almost perfect purity.” Lavoisier, reporting on Priestley’s findings, as well as his replication of Priestley’s experiments, told a meeting of the Academy, “…one observes that as soon as the spirituous fermentation takes place there is a release of air in great abundance, but when through the course of the fermentation the liquor begins to turn acidic [vinegar-bl], all of the released air is soon absorbed again to enter into the composition of the acid.”

Here might be another “cycle” of airs, like that established between plants and animals by Joseph Priestley. But were the airs the same – the air released, and the air re-absorbed? Do acids – defined as such by their bitter taste and other observable qualities – all contain air? “Acid fermentation” – the name given to the latter phase, when wine turns into vinegar, and beer goes bad – was not yet comprehensible to Lavoisier. He carried out an experiment, mixing equal amounts of flour and water in two flasks, one exposed to air, and the other placed under a bell jar in a pneumatic trough, to measure the changes in the volume of air and acidity. The results – after a month and a half – were discouraging; there was no identifiable sign of acidity in the mixture exposed to “common air.” Lavoisier noted that he did not understand the processes of “acids” sufficiently, and therefore was not yet prepared to provide “a complete theory” of fermentation.

Lavoisier {then} turned his attention to studying and experimenting with calcination and reduction, as well as combustion and the properties of fixed air – to “flank” the difficulties confronted in his initial skirmish with fermentation.

Lavoisier had been well trained in chemistry and botany by leading French scientists of the old school. The processes of “reduction” and “calcination” were well known from metallurgy. Now, scientists were intrigued because these processes were found to involve the “disengagement” and “fixing” of “airs,” respectively. This aspect of Lavoisier’s work is better known today, textbook-wise, but ripped out of the context of his unfolding conceptual understanding of living processes.

Reduction and Calcination

Experiments utilizing the water or mercury troughs, bell jar, and pneumatic pump had allowed scientists to identify that the following processes all disengaged Black’s “fixed air”: fermentation (up until the wine or beer began going bad); the exhalation of air in respiration; metallic “reductions”; and the solution of mild alkalis or earths in acids. (Recall, from last week’s pedagogical, that Black’s “fixed air” was able to somehow “fix” in lime-water, with lime being precipitated out, and the resultant air rendered “mild.”)

Iron was, and still is, usually extracted from iron ore by burning it (at 900-plus degrees) with charcoal or “charbon,” in common air. (Coke is now used.) The phlogiston theory, which you may have heard of, explained this by stating that phlogiston, or the “principle of inflammability,” had been absorbed from the charbon – charbon being the source of this phlogiston, hence inflammability, from which even the word “carbon” is derived. The burning of iron ore, and other metallic ores, with carbon was called “calcination.” What was left after this burning was termed the calx. What was now determined, by enclosing the burning of metal with carbon under a bell jar suspended over a trough of water or mercury, was that calcination involved the “fixing” of airs in the metals – the volume of air in the bell jar was reduced, and the calx weighed more than the original metal! (Hold that thought.)

Other metals, such as copper and nickel, were extracted by a different, but related, process. First the ore containing copper, for example, was “roasted” in common air. This was termed reduction, as the weight of the ore was reduced, while the surrounding air increased in volume. It was now determined, utilizing the apparatus already discussed, and the tests on the “airs” already described – lime-water, candle, and bird or mouse – that metallic reduction specifically involved the disengagement of Black’s “fixed air.” The phlogiston theory stated that it was phlogiston that had been released, with some kind of effect on the air.

Acids are also used in the extraction of metals such as copper [hydrometallurgy-bl], with an increase in the volume of air. It was also known that when acids were applied to metal calces (plural of calx) at room temperature, a type of air was measurably “disengaged.” Therefore, Lavoisier thought that calcination and reduction, combined, were “a complete system” – what Gregory Bateson, von Neumann, etc., would call a closed system. Reduce a metal with acid and disengage Black’s fixed air; burn a metal with carbon and absorb Black’s “fixed air.”

There was also a very significant wrinkle: with increasing expertise in manipulating the new experimental apparatus, and increasing knowledge of airs common, fixed, and inflammable, the results of further experiments with calcination and reduction were paradoxical!

Getting the Lead Out

Experiments with lead undid the attempt at a simple solution – and opened another door. Lavoisier had observed, “with surprise,” that lead in a closed chamber could be calcined only to a limited degree. “I began at that time,” he put down in his notebook, “to suspect…that the totality of the air which we respire does not enter into the metals which one calcines, but only a portion, which is not abundant in a given quantity of air.” He found that the calcination of lead could not consume more than one-sixth to one-fifth of the total volume of the air enclosed. The {combustion} of phosphorus also yielded similar results. As regards the reduction of the lead calx, known as minium, a sparrow, a mouse, and a rat introduced into the “air” released by that reduction were “dead on the spot.” Reduction of lead calx produced “fixed air”; but if calcination of lead did in turn “fix” this same air, why did it absorb only part of the common air, and stop? Priestley argued that the air was saturated with phlogiston; Lavoisier was attempting to understand how a part of the air was converted into fixed air. Adding fuel to the fire, an early experiment with minium, when combined with a volatile alkali, also produced an anomalous result. Unlike “fixed air,” which had a “prodigious affinity” for volatile alkali and would have combined, the air released from minium simply dissipated. The air combined in minium must therefore, noted Lavoisier, have been “the air of the atmosphere.”

It was Joseph Priestley who would provide the means to sort out these paradoxes, breaking out of the closed system, and setting Lavoisier on his merry way.

To be continued…

Living Chemistry, Part III


——————————

Letter from Antoine Lavoisier to Benjamin Franklin


We have set aside next Thursday, the 12th of the month, to repeat a few of the principal experiments of M. Priestley on different kinds of air. If you are interested in these experiments, we would think ourselves very honored to do them in your presence. We propose to begin at about one o’clock and take them up again immediately after dinner. I sincerely hope that you can accept this invitation; we will have only M. Le Veillard, M. Brisson, and M. Beront – too large a number of people not being, in general, favorable to the success of experiments. I hope that you will be so good as to bring your grandson…

At the Arsenal 8 June 1777


Call freshly to mind Joseph Priestley’s discovery of the vital inter-relationship of animal and plant respiration. Consider the atmosphere itself as, in turn, a coupling of these living processes with non-living processes, and, with Lavoisier, reserve an important role for light and heat. Living processes, a relatively “weak force” in the empirical terms of mass, volume, etc., incorporate the apparently “strong” forces of the abiotic manifold, with its elements, compounds, and energetic processes, for the development of the biosphere. Likewise with the noosphere’s relationship to both the biotic and abiotic manifolds, which we are here investigating.

As regards the state of knowledge of the biotic manifold in 1774, Lavoisier noted, “…[P]lant analysis is much less advanced than one believes. Ordinarily we completely destroy the composition of the plants…” Unfortunately, this sounds very modern!

By contrast, in this third part of this “Living Chemistry” pedagogical series,(1) we will unfold Lavoisier’s discovery and exploration of the actual ‘well-tempered,’ harmonic domain of chemistry. We will follow Antoine Lavoisier, in dialogue with Joseph Priestley, as he utilizes a methodology of ‘inversion’ and ‘counterpoint,’ discovering thereby a rich treasure trove of anomalous singularities, and unfolding revolutionary new orderings and periodicities for mankind in the development of the noosphere.

Respiration and Combustion

Let us now pick up an important thread in our story of chemical discovery. We had earlier noted that Joseph Priestley foreshadowed Vernadsky, in the way in which living processes, on a universal scale, engage the non-living. What about at the ‘micro’ level? Joseph Priestley ‘coupled’ the biotic and abiotic processes of respiration and combustion, while recognizing certain real differences in the behavior of respiration and, say, burning candles. Both actions produced ‘fixed air,’ and so, Priestley insisted, the two processes must therefore both entail combustion.

In 1775, Antoine Lavoisier further noted,

“The respiration of animals is likewise only a removal of the matter of fire from common air [phlogiston – indicated combustion], and thus the air which leaves the lungs is in part in the state of fixed air…

“This way of viewing the air in respiration explains why only the animals which respire are warm, why the heat of the blood is always increased in proportion as the respiration is more rapid. Finally, perhaps, it would be able to lead us to glimpse the cause of the movement of animals.”

It was in the context of the simultaneous study of respiration, and half-formed hypotheses regarding the unseen relationship of heat to the “movement of animals,” that new discoveries, regarding the equally invisible, ‘inorganic’ processes of calcination and reduction, were proceeding.

Airs, Again

Recall that Joseph Black, in 1756, had determined that in respiration we exhale a specific type of air, which became known thereafter as “Black’s fixed air.” You can do a simple chemistry experiment, with a shallow bowl, a short candlestick in the center of a flat piece of cork, and a tall water glass. Fill the bowl with a quarter inch of water, float the lit candle, and carefully place the glass over the lit candle and cork. What happens, over time, to the water level in the glass? What happens to the candle? What happens if you then lift up the glass, without tipping it, and insert a new lit candle up under the glass? This is a simple example of the tests for the disengagement of Black’s “fixed air.” (It is also a simplified model of the pneumatic trough, and of the principle of the barometer.)

You will recall that, in the last pedagogical of this series, the careful measurement of these airs, initiated by Lavoisier, in the processes of (non-living) reduction and calcination, produced a wealth of (contradictory) new evidence. Unseen but indirectly measurable air “fixed” and “disengaged,” in still little-understood chemical processes. Respiration was being investigated as a crucial example of the processes at work. Now, a paradox had arisen: other “airs” were being “fixed,” as we saw with the preliminary investigations of the air fixed in the calcination of lead. All of these airs had different properties, and were compared to the standard of the “common,” breathable air of our atmosphere.

In 1774, Joseph Priestley’s {Experiments and Observations on Different Kinds of Air} had been published in England. Soon, Lavoisier was studying this report with keen interest, in France. A feature of Priestley’s report was the development of a new measure of “the goodness of air.”

Following up some intriguing findings made by Stephen Hales, Priestley found that combining various metals with spirit of nitre [an acid; nitre as in saltpeter, an organically produced compound used in making gunpowder and in meat preservation] generated a “red fume” of a gas, which Priestley named “nitrous air.” “Nitrous air,” when introduced into a glass bell suspended over, and slightly into, a trough of water (i.e., a pneumatic trough), caused the volume of air inside the bell glass to actually {shrink}, as measured by a rising water level inside the bell glass! That is, the water level inside the glass bell was higher than the water level outside. Priestley was amazed that “a quantity of air…devours another kind of air…yet is so far from gaining any addition to its bulk, that it is considerably diminished by it.”

Mixing his nitrous air in various combinations with common air, Priestley found that the volume diminished by one-fifth of the original quantity of common air. Further, he found that this diminution only occurred with common air – that is, air known to be fit for respiration – and therefore was a rigorous means of testing the “goodness of air,” scaled according to the reduction in volume. He wrote in 1774,

“[T]hat on whatever account air is unfit for respiration, this same test is equally applicable. Thus there is not the least effervescence between nitrous and fixed air, or inflammable air, or any species of diminished air. Also the degree of diminution being from nothing at all to more than one third of the whole of any quantity of air, we are, by this means, in a possession of a prodigiously large scale, by which we may distinguish very small degrees of difference in the goodness of air.”
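Priestley’s scale can be restated as simple arithmetic: the “goodness” of an air is the fractional shrinkage of its volume when a measured dose of nitrous air is admitted over water. Here is a minimal sketch; the function name and sample volumes are illustrative, not Priestley’s:

```python
def goodness_of_air(volume_before, volume_after):
    """Priestley's 'goodness' score: the fractional diminution in volume
    after a measured dose of nitrous air is admitted over water."""
    return (volume_before - volume_after) / volume_before

# Common, respirable air: the mixture shrinks by one-fifth.
common = goodness_of_air(100.0, 80.0)
# Fixed or inflammable air: no effervescence, no diminution at all.
noxious = goodness_of_air(100.0, 100.0)
print(common, noxious)  # 0.2 0.0
```

The “prodigiously large scale” Priestley describes is then the continuum of values this fraction can take, from nothing at all to more than one-third.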

A place has now been reached where we might remind ourselves of Lavoisier’s famous “axioms,” as discussed in the last pedagogical in this series – especially that which is known today as the “law” of “the conservation of matter.”

Lavoisier’s ‘first’ “axiom” had been drafted out, in detail, in February, 1775. That axiom was that all matter can exist in a solid, liquid, or gaseous state, depending on temperature and pressure. (This axiom Lavoisier would later reduce to a “corollary” of his “caloric” hypothesis.) Let us focus on Lavoisier’s ‘second’ axiom, which emerges into view in his notebooks in 1775-1776. As first published in his {Elements of Chemistry}:

“We may lay it down as an incontestable axiom, that, in all the operations of art and nature, nothing is created; an equal quantity of matter exists both before and after the experiment; the quality and quantity of the elements remain precisely the same; and nothing takes place beyond changes and modifications in the combination of these elements.”

By 1775, Lavoisier and friends already possessed a virtual encyclopedia of various invisible “airs,” and compounds, calcinations, reductions, etc. However, such an ‘encyclopedia’ did not provide conceptual closure. Lavoisier’s notebook of this period shows that he was continually working to conceptualize a thoroughly consistent {lattice work}, starting from hypothesized first principles, attempting to order a growing body of closely observed phenomena and conceptual fragments. Our difficulties, dear reader, in following this story of scientific discovery, pale by comparison!

Fleet-Footed Mercury

The closer study of a liquid metal would turn out to be a key. It had been known since alchemical times that by heating liquid mercury one could convert it into a red powder, and that by further heating one could convert the powder back into liquid mercury. A number of “physicians” – as scientists were called – were studying this anomalous substance, and the nature of the unseen processes involved. Was the red powder, so produced in the intermediary step, merely a new form of mercury, or was it a true calx, which was then “reduced” back to liquid mercury? (It is worth bearing in mind that the steps involved in these mercury experiments took a week or more of continuous heating, maintained around the clock at stable, sustained temperatures!)

Joseph Priestley took up the anomalous behavior of mercury, from the vantage point of his mastery of techniques which isolated the invisible airs. What airs, Priestley asked, might be involved in the anomalous transformations of mercury? In October, 1774, Joseph Priestley revealed that he had recovered a “new air,” as he heated the red powder and transformed it back into liquid mercury. Lavoisier, intrigued, repeated Priestley’s experiment with the red powder mercury precipitate.

Priestley pushed ahead. Early in 1775, Priestley determined that he had actually produced a new “species of air” from {mercurius calcinatus per se}, under controlled conditions. Repeating again his earlier experiment, utilizing his pneumatic trough to capture the air recovered from heating the mercury calx, he applied his nitrous air test. Once again, he found that the air derived from the heating of the red precipitate was diminished by one-fifth of its original volume when a measured amount of nitrous air was added, just as common air was. On a whim, however, Priestley reports that he decided to add a second measure of nitrous air. To his surprise, the volume of air decreased further! More nitrous air was added. Applying other tests, such as the lit candle, Priestley discovered that his new species of air was “five or six times better than common air, for the purpose of respiration, inflammation, and I believe, every other use of common atmospheric air.” He termed this new air “dephlogisticated air.”

Learning of Priestley’s new findings in December, through an advance copy of portions of the second volume of Priestley’s {Experiments and Observations on Different Kinds of Air}, Lavoisier proceeded once again to replicate Priestley’s experiment. To test the air, Lavoisier needed nitrous air.

For Priestley’s grand scale of the “goodness of air,” nitrous air could be “easily” produced by dissolving mercury in nitrous acid. Lavoisier went right to work, heating the combination and deciding to collect the air given off, over time, as separate portions. At a certain point, the vapor began to turn reddish, and he could see that some of the air was being absorbed even as it was produced. Lavoisier realized that “common air or dephlogisticated air” – one or the other – was being given off, and he captured these, again over time, in separate glass cylinders.

Lavoisier was surprised. When he tested fractions six through nine, inserting the “nitrous air” which he had just produced, he noted that “This air was much better than that of the atmosphere…”, finding that prodigious amounts of nitrous air could be added. With the ninth fraction, he started with “four parts” of each air, and ended up adding a total of seven parts of nitrous air, while reducing the volume by 7/8. He thus confirmed that this ‘secondary air,’ produced while making nitrous air, was itself the “dephlogisticated air” of Priestley!

Now, it was Lavoisier who leaped {conceptually} ahead. It would be natural to infer that the air which the liquid mercury had originally absorbed in being heated and transformed into a calx (calcination) was identical to the “dephlogisticated air” which Priestley had found was produced when the red powder (calx) was converted (reduced) to liquid mercury. However, that simple explanation had proven wrong before, in earlier calcinations and reductions, especially involving charcoal. Conceptually, though, from the standpoint of his ‘conservation of matter’ hypothesis, Lavoisier should be able to ‘invert’ the process: If one assumes that “dephlogisticated air” was absorbed out of the common air, in the heating process which produced the red powder (calx of mercury) in the first place, then adding dephlogisticated air to the portion of air remaining after calcination of liquid mercury should recompose the original common air. To five parts of the air remaining after the calcined mercury had absorbed one-sixth of the air, Lavoisier now added one part of the dephlogisticated air. The air then behaved exactly as ordinary air!
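Lavoisier’s ‘inversion’ can be restated as simple arithmetic on the parts reported in the text. A sketch (the variable names are mine; the proportions – one-sixth absorbed, one part restored – come from the account above):

```python
# Decomposition: in calcination, the mercury absorbs one-sixth of the
# common air, leaving an 'unbreathable' residue.
common_air = 6.0                 # parts of common air, before calcination
absorbed = common_air / 6        # the one-sixth taken up by the mercury
residue = common_air - absorbed  # five parts remain

# Inversion (recomposition): restore one part of dephlogisticated air,
# recovered by reducing the calx, to the residue.
recomposed = residue + absorbed
print(recomposed == common_air)  # True: the atmosphere is recomposed
```

The ‘conservation of matter’ hypothesis is what licenses treating the two steps as exact inverses: nothing is created or destroyed, only redistributed between the metal and the air.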

Consider: Lavoisier had carried out the decomposition and re-composition of the atmosphere.

Almost simultaneously, Lavoisier proceeded to prove that the process of creating nitrous air from nitrous acid could also be ‘inverted,’ this in a demonstration before the Academy. Nitrous air (to test the properties of airs) was produced by combining nitrous acid and mercury (a calcination). Lavoisier now combined measured amounts of nitrous air and “dephlogisticated air,” disengaged while heating the calx of mercury (a reduction), and re-composed predicted amounts of nitrous acid.

The very next page of Lavoisier’s notebook shows that he rushed next to decompose the atmosphere experimentally by respiration, and to recompose it with the “dephlogisticated air” derived from the reduction of the mercury calx, proving to himself “that respiration in absorbing air, renders a portion vitiated,” and that it can then be restored. Lavoisier, it might be said, had discovered, and now was exploring, the ‘well-tempered’ nature of God’s chemical domain!

Following this trail of discoveries and experiments, you might realize that Lavoisier (and Priestley) had identified that which Lavoisier would name oxygen, after its acidifying quality. More precisely, Lavoisier termed it, in 1780, “the oxygen principle,” first wishing to rigorously clarify what an element was – and was not. The reader can surmise that it is this “oxygen principle,” as an air, which is being absorbed in calcination, combustion, and respiration. Like Lavoisier, you are conceptualizing what you cannot see. Some of Lavoisier’s further work resulted in his discovery of azote, now termed nitrogen, which, together with oxygen, predominates in the earth’s atmosphere. The oxygen-carbon dioxide cycle and the nitrogen cycle are both essential to life.

Conceptually exploring chemical processes as occurring within an hypothesized harmonic domain allowed for the emergence of lawfully created dissonances, the basis for new (invisible) discoveries.(2) Apparent “elements,” including water, were discovered to be specific compounds, as measurable amounts of an alleged element disappeared on the ‘other side of the equation.’ Nor were all chemical processes so simply ‘inverted’ – those requiring a catalyst, for example. Lavoisier, as can be seen with the mind’s eye, had to work very hard to be a “systemic thinker”!

To be continued…

(1) Part I of “Living Chemistry” appeared in the Friday, 10/12/01 briefing. Part II appeared in the Saturday, 10/19/01 briefing. They can otherwise be found as a1415BLZ001 and a1426BLZ001.

(2) The reader might be struck by a parallel to Bruce’s recent pedagogicals on Gauss, where what appear, in the form of natural numbers, as an open series, or, in the case of “powers,” as open, growing cycles, turn out to be periodic, closed cycles with respect to a modulus. From where does this periodicity arise?

Living Chemistry, Part IV


“Lavoisier, the putative father of all the discoveries that are talked about; as he has no ideas of his own, he seizes those of others; but scarcely ever knowing how to appreciate them, he abandons them as lightly as he took them up, and changes his views as he changes his shoes…”

– M. Marat, from his pamphlet, {Modern Charlatans, or Letters on Academic Charlatanism, published by M. Marat, the friend of the People}, 1791

Last week we re-discovered the harmonic domain of chemistry, with Antoine Lavoisier. Lavoisier continued his work, despite extraordinary demands.

In 1783, Cavendish reported that the burning of “inflammable air” had produced water. Lavoisier, repeating the experiment and inverting the process, quickly determined that water was composed of “dephlogisticated air” and “inflammable air” – oxygen and hydrogen. “Inflammable air,” which we have mentioned only in passing, had already been isolated; it was the then-current name for hydrogen.

To shake off the cobwebs that so quickly occupy any unused corner of your mind, ask yourself: Did Lavoisier ever see or touch or hear these “airs”? We have to almost shake ourselves, to let go of these airs as “things,” and realize that they are rigorously proven {concepts}, the fruit of discoveries, not Sarpi’s {facts}.

Antoine Lavoisier had never seen any of these “airs,” and he had only determined their existence indirectly.


So, what of these “elements”? A common chemistry textbook will credit Lavoisier with the discovery of nitrogen, and with producing the first table of elements, for his introduction to chemistry, {Elements of Chemistry}.

Here, Edgar Allan Poe’s character, Auguste Dupin, is required. Worthy of note is the easily overlooked fact that the English language title of Lavoisier’s textbook is itself misleading, as it implies to the casual reader that it is a book about {elements}. Compare the title in the original French, {Traité élémentaire de Chimie}, and you grasp the difference. So, how did Lavoisier {think} about what we today classify as elements? You may already have some ideas, from following Lavoisier on his voyage of discovery over the past weeks. Let us hear from Lavoisier himself, and compare our thinking to his. The following is from the preface to his {Traité}, as translated in the 1790 English language edition:

“It will, no doubt, be a matter of surprise, that in a treatise upon the elements of chemistry, there should be no chapter on the constituent and elementary parts of matter; but I shall take occasion, in this place, to remark, that the fondness of reducing all the bodies to three or four elements, proceeds from a prejudice which has descended to us from the Greek Philosophers…

“It is very remarkable, that, notwithstanding of the number of philosophical chemists who have supported the doctrine of the four elements, there is not one who has not been led by the evidence of facts to admit a greater number of elements into their theory…All these chemists were carried along by the influence of the genius of the age in which they lived, which contented itself with assertions without proofs; or, at least, often admitted as proofs the slightest degrees of probability, unsupported by that strictly rigorous analysis required by modern philosophy.

“All that can be said upon the number and nature of elements is, in my opinion, confined to discussions entirely of a metaphysical nature. The subject only furnishes us with indefinite problems, which may be solved in a thousand different ways, not one of which, in all probability, is consistent with nature. I shall therefore only add upon this subject, that if, by the term {elements}, we mean to express those simple and indivisible atoms of which matter is composed, it is extremely probable we know nothing at all about them; but, if we apply the term {elements}, or {principles of bodies}, to express our idea of the last point which analysis is capable of reaching, we must admit, as elements, all the substances into which we are capable, by any means, to reduce bodies by decomposition. Not that we are entitled to affirm, that these substances we consider as simple may not be compounded of two, or even of a greater number of principles; but, since these principles cannot be separated, or rather since we have not hitherto discovered the means of separating them, they act with regard to us as simple substances, and we ought never to suppose them compounded until experiment and observation have proved them to be so.”

Certainly a surprise! Note Lavoisier’s emphasis on an “element” being the “principle of bodies… which analysis is capable of reaching.” (Here we see the caution he had already expressed when he named “the oxygen principle,” as we pointed out in the last pedagogical.) Here, we have a concept of elements drawn methodologically from Leibniz’s “Monadology.” Certainly a healthy dose of “learned ignorance”! It should readily be agreed that Lavoisier’s conception of element is not the reductionist, “atomist” conception of matter, usually presented as a British (i.e., Venetian) bloodline of horses’ asses, running from Boyle, through Galileo, Hobbes, Bacon, Newton, and so forth. Do not read too much into his offhand comment regarding discussions “of a metaphysical nature.” Metaphysical, in the sense of Socratic universal conceptions, is exactly what Lavoisier was all about!

Algebra and Heat

Let us return to the first of Lavoisier’s original axioms. By the time Lavoisier is writing his {Traité élémentaire}, his early axiom – that all matter can, in principle, be converted from one of the three states of matter to the others, by altering the relative heat and pressure – has been reduced to a “corollary” of his “matter of heat,” or “caloric.” He defined this caloric as “…a real and material substance, or very subtile fluid, which, insinuating itself between the particles of bodies, separates them from each other.”

It is often overlooked that Lavoisier’s conception of caloric, a form of the hypothesized “aether” entertained by the likes of Huygens and Mendeleyev, precluded a “blackboard” interpretation of his “law” of the Conservation of Matter. No Venetian double-entry bookkeeping here! Consider: Lavoisier’s “caloric” does not enter into his “equations” of chemical reactions! Indeed, here we see the flexibility of Lavoisier’s own ‘harmonic’ concept, which, among other things, duly noted the limits of his apparatus to measure exactly the phenomena that might be in question. It is often argued, by academics, that Lavoisier reached correct conclusions through erroneous results, as for example in the fermentation tables of his {Traité élémentaire}. Let us not bother with the details of their sniping. Let us rather quote Lavoisier, on his “algebraic” scientific method.

“I can regard the matter submitted to fermentation, and the result obtained after the fermentation, as an algebraic equation; and by considering each one of the elements of this equation successively as the unknown, I can deduce a value, and thereby correct the experiment through the calculation, and the calculation through the experiment. I have often profited by this method in order to correct the first results of my experiments, and to guide me in the precautions to take in order to repeat them.”

No blackboard mathematics here! His equations were not meant as {verification} of the principle, that the material present before the operation is equal to the material afterward. Lavoisier was studying what he could not see, and often was measuring indirectly. Instead, the hypothesis is verified by his effectiveness in producing results.

Now let us explore, in our final pedagogical, Lavoisier’s work on heat.

Respiration and Work

Already in 1776, Priestley had jumped ahead of Lavoisier with new evidence on the nature of the changes in blood. Coagulated sheep blood, he showed, became “black” and “red” as it was transferred back and forth between fixed air and dephlogisticated air. Priestley showed that he got a similar response when the blood was enclosed within a bladder which separated it from the air, demonstrating that the lungs too could communicate the phlogiston to the air through the membranes. Lavoisier, following up on these promising results, suddenly {discovered} that there were “two causes tangled in one” – that together with the absorption of a portion of the air, that air which had already served for respiration “approaches the state of fixed air.”

Let us quote from his memoir, co-credited to Seguin, presented late in 1789:

“Starting from acquired knowledge, and confining ourselves to simple ideas which everyone can readily grasp, we would say to begin with, in general that respiration is only a slow combustion of carbon and hydrogen, which is similar in every way to what takes place in a lamp or illuminated candle; and that from this point of view animals that respire are true combustible bodies which burn and consume themselves.

“In respiration, as in combustion, it is the air of the atmosphere which furnished the oxygen and the caloric; but in respiration, it is the very substance of the animal, it is the blood, which furnishes the combustible; if animals do not regularly replenish through nourishment what they lose by respiration, the lamp will soon lack its oil; and the animal will perish, as a lamp is extinguished when it lacks nourishment.

“The proofs of this identity between the effects of respiration and of combustion can be adduced immediately from experiments. In fact, the air which has served for respiration no longer contains the same quantity of oxygen when it leaves the lungs; it includes not only carbonic acid gas, but, in addition, much more water than it contained before being inspired. Now, since vital air can be converted into carbonic acid gas only by an addition of carbon; and it can be converted into water only by the addition of hydrogen; and this double combination can take place only if the vital air loses a part of its specific caloric; it follows from this that the effect of respiration is to extract from the blood a portion of carbon and of hydrogen, and to deposit in its place a portion of its specific caloric, which, during the circulation, is distributed to all parts of the animal economy, and maintains that nearly constant temperature which one observes in all animals that respire.”

It is impossible to deny the influence of Leibniz on the work of Lavoisier.

Lavoisier extended his research and experimentation on respiration, to develop the outlines of a concept of a work function, related to respiration, and thus the atmosphere. Lavoisier, in 1790, posed two important postulates, based on detailed measurements taken during his collaborator’s physical exertions. (The drawings of the experiments survive and, like those done for the {Traité élémentaire}, were done by Madame Lavoisier.)

Lavoisier derived two important postulates: that the pulse rate increased in direct proportion to the total weight which a person lifted to a given height; and that the vital air consumed was directly proportional to the product of the pulse rate and the frequency of breathing, arguing that one could calculate the “weight lifted to a given height which would be equivalent to the sum of the efforts he has made.” This is so close to Leibniz’s concept of {vis viva} that it must give us pause. Antoine Lavoisier may have known Lazare Carnot, ten years his junior. Carnot’s interest in the subject of heat and its utilization in powering machinery was to last through his entire life. In 1783, Carnot had restated Leibniz’s concept of {vis viva} as “the moment of activity exerted by a force,” or MgH, where M = the total mass of a system, g = the force of gravity, and H = the height of rise or fall. This, it is reported, is the initial seed crystal for Carnot’s concept of “work.” Lavoisier’s language – equating the “weight lifted to a given height” with the “sum of the efforts” – closely resembles Carnot’s.
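Carnot’s “moment of activity” is simply the product MgH. A minimal sketch of the quantity and of the proportionality it expresses; the masses, heights, and the modern value of g are my illustrative assumptions, not figures from the historical record:

```python
G = 9.8  # gravitational acceleration, in modern units (m/s^2)

def moment_of_activity(mass, height, g=G):
    """Carnot's 'moment of activity exerted by a force': M * g * H."""
    return mass * g * height

# Two hypothetical exertions with the same product M*H yield the same
# moment of activity: 50 kg raised through 2 m, or 20 kg through 5 m.
a = moment_of_activity(50, 2)
b = moment_of_activity(20, 5)
print(a, b)  # the same quantity of 'work', about 980 each in modern joules
```

On Lavoisier’s postulates, the vital air consumed (proportional to the product of pulse rate and breathing frequency) would then itself be proportional to this single quantity, whatever combination of weight and height produced it.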

Addendum: The Political Life of Antoine Lavoisier

Marat, clearly on orders of Jeremy Bentham, made Lavoisier one of his first targets. We should not lose sight of Lavoisier’s nation-building efforts, as this is a necessary part of any pedagogical dealing with the driving force behind real discoveries in “the hard sciences.” Let Marat tell us about Lavoisier’s role, from his {Ami du Peuple} of January, 1791: “I denounce to you the Coryphaeus – the leader of the chorus – of charlatans, Sieur Lavoisier, son of a land-grabber, apprentice-chemist, pupil of the Genevan stockjobber [Necker], Farmer-General, Commissioner for Gunpowder and Saltpeter, Governor of the Discount Bank, Secretary to the King, Member of the Academy of Sciences…”

Lavoisier was a friend and collaborator of Bailly, and was a member of the ’89 Club (later supplanted by the Jacobin Club), with Monge, Bailly and others. As we see, he had been appointed to numerous national committees by the King, and continued to serve during Bailly’s period of leadership, including in the Treasury. While Bailly was Mayor of Paris, and the Marquis de Lafayette commanded the National Guard, Lavoisier not only continued to hold his crucial position on the Gunpowder Commission, which had the life-and-death responsibility of producing sufficient supplies of gunpowder for embattled France, but continued as the resident of the Arsenal, where he also continued his scientific research. It is recorded that, on one occasion, Bailly and his wife personally, physically intervened to rescue Lavoisier and his wife from a threatening mob.

Amidst the crisis of these times, Lavoisier presented a reasoned proposal for the reorganization of the national debt, in 1790, and presented to the National Assembly his long-prepared work, {The Territorial Wealth of the Realm of France}, to be the basis of a rational reorganization of the French tax system. In 1793, even after the execution of the King, Lavoisier presented to the National Convention a proposal for national education, with the aim of educating the whole nation, and all mankind. Lavoisier was executed for counter-revolutionary activity, due to his role in the tax farm system, on May 8, 1794, his body thrown into a nameless grave.


Unused notes

Lavoisier now also had the basis for answering one of Priestley’s anomalous findings: Priestley had found that inflammable air could be made respirable by “continued agitation in a trough of water, deprived of its air.” He then ascertained that “this process has never failed to restore any kinds of noxious air on which I have tried it.” Priestley had asked, how could there be such a uniform outcome?

Black’s fixed air, released in most forms of reduction, we know today as carbon dioxide – the product of burning dephlogisticated air (oxygen) with carbon – as Lavoisier quickly determined. The phlogiston theory had explained the former by arguing that phlogiston, the “principle of inflammability,” existed in charcoal, alcohol, etc., and was released as heat and light.

Priestley’s “nitrous air” is nitric oxide. Lavoisier proceeded to isolate “the unbreathable part” of common air – the part which killed animals “on the spot” – terming it azote. Azote was later renamed nitrogen, after its common association with nitre.

We can identify here aspects of a revolution in man’s knowledge – the transformation of the entire lattice work, based on the change of axioms, on which Lavoisier was working.

The Unseen World Behind The Compass Needle

by Judy Hodgkiss

The great scientists of the 19th Century, at the inspiration of Alexander von Humboldt, coalesced around the work of the “Magnetischer Verein,” the Magnetic Union, globally coordinating their studies of the varied effects produced by the earth’s magnetic field. Two American presidents enthusiastically supported the effort. This grand project to comprehend the wondrous phenomena called terrestrial magnetism, or “geomagnetism” as it is known today, proved to be the science driver of that century, as the study of electrical phenomena had been for Ben Franklin’s era. But the Verein project died by the end of the century, and is waiting to be taken up again.

John Quincy Adams, in a debate in the Congress over the establishment of the Smithsonian Institution, argued that the promotion of geomagnetic science should be one of the Smithsonian’s primary goals:

“What an unknown world of mind is yet teeming in the womb of time, to be revealed in tracing the causes of the sympathy between magnet and the pole–that unseen, immaterial spirit, which walks with us through the most entangled forests, over the most interminable wilderness, and across every region of the pathless deep, by day, by night, in the calm serene of a cloudless sky, and in the howling of the hurricane or the typhoon. Who can witness the movements of that tremulous needle, poised upon its center, still tending to the polar star, without feeling a thrill of amazement approaching to superstition?”

Later, President Abraham Lincoln spent many happy hours with America’s foremost scientist of the 19th Century, Joseph Henry, the first Secretary of the Smithsonian, participating in the geomagnetic studies and other experiments carried out by Henry at the Smithsonian, conveniently located near the White House.

Alexander von Humboldt, the world’s foremost naturalist, and a member of Friedrich Schiller’s circles in Germany, wrote, in his best-selling book on his 1804 travels to Spanish America:

“The observations on the variations of terrestrial magnetism which I have carried out during a period of 32 years in America, Europe and Asia and with comparable instruments, cover in both hemispheres…a space of 188 degrees of longitude, from 60 degrees northern latitude to 12 degrees S. I have considered the law of the decrease of the magnetic forces from the pole to the equator as the most important result of my American journey.”

Between 1829 and 1834, a young Joseph Henry completed, with the aid of Prof. James Renwick of Columbia University and the British naval captain Edward Sabine, who had made pioneering magnetic observations in the Arctic, the first comprehensive magnetic survey of an American city, Albany, N.Y.; “comprehensive” meaning: documentation of the variation in declination, inclination, and intensity through the magnetic needle readings in the area, over time.

In 1834, Humboldt and Karl Gauss established the Magnetischer Verein, the Magnetic Union, to coordinate systematic studies of terrestrial magnetism globally. Three years later, the great-grandson of Ben Franklin, Alexander Dallas Bache, met with Gauss in Germany, and returned to the U.S. with the precision instruments Gauss had prepared for his use in the U.S.

So, what was all this hubbub about?

And why did this hubbub die out, as the scientists mentioned above went to their graves, one by one?

The many and wondrous anomalies posed by the geomagnetic phenomena excite the most fundamental questions in the mind of the researcher. Reason enough for the oligarchy to wish to bury the subject.

In 1492, when Christopher Columbus crossed the Atlantic Ocean, he recorded that somewhere near mid-way his ship’s compass passed from slightly to the west of true North (true North is the axis-of-rotation North), over to {east} of true North! Did he expect that to happen? I don’t know. There is a similar spot on the other side of the earth where the opposite occurs. Perhaps mariners in the Pacific knew of such phenomena, since the compass had been in use in China since at least 1000 AD.

This deviation of the compass needle from true North is called the “declination” of the compass; and not only does this declination go from East to West at that point noted by Columbus, but that very line of demarcation he crossed in the Atlantic itself shifts, wiggles, and oscillates, while at the same time, over centuries, it noticeably migrates in a westerly direction, at an average of 0.04 degrees per year. At different longitudes around the globe, lines of equal declination from geomagnetic North to geomagnetic South do not conform with lines of longitude, and are often much further off from true North than just the few degrees experienced by Columbus (from coast to coast for him, something like 10 degrees west to 10 degrees east).
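The stated drift rate invites a quick back-of-the-envelope computation. A sketch: the 0.04 degrees per year figure is the average cited above, and the assumption of a constant rate is mine, not the historical record’s:

```python
drift = 0.04  # average westward migration, degrees per year

years_per_degree = 1 / drift
years_full_circuit = 360 / drift

print(years_per_degree)    # about 25 years for each degree of longitude
print(years_full_circuit)  # about 9000 years for a full westward circuit
```

At that pace, the line Columbus crossed would take roughly nine millennia to wander once around the globe, which is why the migration is only noticeable over centuries of records.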

(See the “movies” of such patterns presented on the U.S. Geological Survey website. The movie for magnetic declination flashes by, decades at a time. You can click either the fully animated or the manually-controlled action movie.)

Also, there is a point on the globe, midway between North and South, where the compass needle will orient to the southern magnetic pole (the poles are not exactly, but only approximately, opposite each other). Humboldt was the first to record this, in northern Peru.

In addition to the above, every million years or so, the north and south poles reverse themselves! Again, it was Humboldt who was the first to discover the “magnet fossils” demonstrating this, i.e., certain magnetized rocks – such as those in the German Fichtel Mountains, and others he later found in the Peruvian Andes – which, when approached with the compass, demonstrate a magnetism in the reverse direction from other magnetized rock formations in the area. Such rocks were originally formed by molten lava, which solidified with an internal magnetism aligned at a time (compared to lava flows above or below it) when the magnetic poles had a reverse polarity.

Then there is the “inclination” of the needle, measured by a “dip” needle, where your compass is designed to move vertically, swinging up and down, instead of left to right. Held over either magnetic pole, the needle would swing to an extreme, vertical position, while at the equator it would rest horizontally. A map of this phenomenon shows the lines of equal inclination oscillating north-south over years, but not wiggling and swirling as much as the lines of declination do.

(Again, see the action movie at the USGS website.)

But the geomagnetic phenomenon which demonstrates the most awesome anomalies is that found in the readings of the “intensity” of the magnetic needle. The needles used by Humboldt for these measurements were large, 1 to 2 feet long, suspended from a torsionless thread, and carried ivory scales on their two end faces. Joseph Henry described such an apparatus in his laboratory papers in 1833:

“Make a needle in the form of a tube, adjust glasses, suspend it by silk, and look up through the glasses as a telescope at a distant board placed at right angles to the magnetic meridian with divisions on it corresponding to the seconds and minutes of a degree. And in this way notice the variations of the needle daily and hourly.”

The number of oscillations of this free-floating needle over a given time then measures an array of irregular and regular variations in the “intensity” of the magnetic force. (Again, see the movie, the one called “total intensity”; there are sequels called “horizontal intensity,” etc., but those are irrelevant for our purposes here. Also, take note that the system used in the movies of delineating lines of equal declination, inclination, and intensity, called “isogonic,” “isoclinic,” and “isodynamic” lines, employs concepts and terms invented by Humboldt.)

Since detailed records have been kept, over the last 150 years there has been a significant decline in the intensity of the dipole magnetic field (besides the polar magnetism, there are other, more complicated field patterns, but those can be sorted out from the main dipole field). In fact, the dipole field intensity is decreasing at a rate of about 8 percent per century! In a few thousand years it could go to zero (as measured in gauss or tesla units); or, it could reverse itself, and start to go back up at any time. The “magnetic fossil” record, which can capture declination, inclination, and intensity evidence, indicates that this kind of fluctuation, even all the way to zero, may be a frequent occurrence (frequent, as measured in thousands of years). In fact, there are indications that somewhere around 4,000-5,000 B.C., the dipole field disappeared, perhaps for 1,000 years. One might then ask: Is it possible that a global maritime culture, at the time dependent on compass navigation for ocean crossings, might have literally lost its moorings, thereby stranding what became the American Indian, etc., in outlying areas?
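The “few thousand years” figure follows directly from the 8-percent-per-century rate. Assuming, purely for illustration, that the decline were to continue steadily (compounding each century), one can compute how long the field would take to fall to a small fraction of its present value:

```python
import math

rate = 0.08  # fractional decline per century, from the observed record

def remaining(t_centuries):
    """Fraction of today's dipole intensity left after t centuries,
    under the (hypothetical) assumption of a steady exponential decline."""
    return (1 - rate) ** t_centuries

# Centuries until the field falls to 10% of its present value:
t = math.log(0.10) / math.log(1 - rate)
print(round(t))  # 28 centuries -- i.e., a few thousand years
```

Of course, as the fossil record shows, the field need not decline steadily at all; it can stall, reverse, or recover at any time.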

These are the kinds of questions into which a new Magnetischer Verein must delve, with all the joy of a Humboldt, Gauss, Bache, or Henry–or a JQA or Abraham Lincoln.

Dirichlet and the Multiply-Connected History of Humans: The Mendelssohn Youth Movement

by David Shavin

When Lejeune Dirichlet, at 23 years of age, worked with Alexander von Humboldt in making microscopic measurements of the motions of a suspended bar-magnet in a specially-built hut in Abraham Mendelssohn’s garden, he could hear, nearby in the garden-house, the Mendelssohn youth movement working through the voicing of J. S. Bach’s “St. Matthew’s Passion.” Felix and Fanny Mendelssohn, 19 and 23, were the leaders of a group of sixteen friends who would meet every Saturday night in 1828 to explore this `dead’ work, unperformed since its debut by Bach a century earlier.[1]

The two simultaneous projects in the Mendelssohn garden at 3 Leipziger Strasse (in Berlin) are a beautiful example of Plato’s classical education necessary for the leaders of a republic: The astronomer’s eyes and the musician’s ears worked in counterpoint for the higher purpose of uniquely posing to the human mind {how the mind itself worked}. As described in the {Republic}, Book 7, the paradoxes of each `field’ – paradoxes (such as the ‘diabolus’) that, considered separately, tied up in knots the ‘professionals’ in each field – taken together would triangulate for the future statesman the type of problems uniquely designed to properly exercise the human mind. After all, such a mind would have to master more than astronomy and music, simply to bring before the mind the series of paradoxes, so as to be made capable of dealing with the much more complicated affairs of a human society. To oversimplify: since the mind does not come equipped with a training manual, the composer of the universe created the harmonies of the heavens and of music as, e.g., a mobile above a baby’s crib.

In that hut, Dirichlet would have been making microscopic measurements as part of making a geo-magnetic map of the earth. The audacity in thinking that these minuscule motions of the suspended bar-magnet could capture such unseen properties posed certain appropriate questions to Dirichlet. (Gauss’ geodetic surveying a decade earlier was paradigmatic of the sort of project that mined such riches out of the ostensibly simple affair, e.g., of determining where one actually was! But this also applies to locating oneself in the process of a proper daily political-intelligence briefing.) Similarly, the sixteen youth working to solve amongst themselves the complicated inter-relationships of Bach’s setting of the “Passion” story, as related by St. Matthew, would have forced their grappling with the scientific problem of ascertaining what our Maker would have in store for us, in their attempt to map their own souls. (Just for starters, regarding their `performance’ questions: How does Jesus intone what he says? How does the chorus/audience respond to Jesus, and sometimes to each other? etc.) The following historical sketch is offered as a few measurements, but instead of using a suspended magnetic bar, we’ll use a few years of Dirichlet’s life, and thereby try to triangulate some of the important characteristics for a map of the culture that created the world that, today, we are challenged to master.

Humboldts and Mendelssohns

Dirichlet’s patron, Alexander von Humboldt, along with his brother Wilhelm, had studied in the 1780’s with a host of pro-American Revolution leaders in Europe, notably including the Mendelssohns’ famous grandfather, Moses. (These particular studies can be investigated by reading Moses Mendelssohn’s Leibnizian work, {Morgenstunden}, or {Morning-Studies}, which describes the lessons that Moses gave to his son Joseph, and to the Humboldt youth.) Later, two of Moses’ sons, Joseph and Abraham, ran the Mendelssohn Bank, which financed many of Alexander von Humboldt’s scientific expeditions and projects. Abraham Mendelssohn, the father of Fanny, Felix, Rebecca and Paul, had set up, in his garden at 3 Leipziger Strasse, a special magnetically-neutral observation hut for Humboldt to measure minute magnetic fluctuations. Humboldt brought Dirichlet to Berlin in 1828, where he was one of the five or six who shared observational duties with Humboldt, in their mapping of the actual geo-magnetic shape and potential of the earth.

In 1827/8, Humboldt gave public lectures at the Singakademie Hall on physical geography – deliberately open to both men and women. Fanny Mendelssohn commented (in a letter to her friend Klingemann): “[T]he course is infinitely interesting. Gentlemen may laugh at us as much as they will; it is wonderful in this day and age for us to have an opportunity to hear something sensible for once. I must further inform you that we are attending a second lecture series, given by a foreigner on experimental physics. This course, too, is being attended mainly by women.” Humboldt’s presentations on his investigations of the earth were special public versions of his lecture-course at Berlin’s famous Friedrich Wilhelm University (established in the previous decade by his brother, Wilhelm von Humboldt).

Felix Mendelssohn attended the University at the same time that a collaborator of Humboldt at the University, the great philologist Philipp August Boeckh, lived as a tenant in the Mendelssohn home. (Years later, Felix would compose music for the staging of Boeckh’s German translation of Sophocles’ play, “Antigone.”)

Humboldt also organized the Berlin scientific congress of August/September, 1828 – a conference that Metternich would find most dangerous. For the several weeks that Gauss stayed at Humboldt’s home, they could discuss the implications of the geodetic and geo-magnetic projects. Finally, the representative from England, Charles Babbage, the noted promoter of Leibniz’s analytic methods, found the conference to be historic, but found the highlight of Berlin to be the culturally-optimistic Mendelssohn household. It was at this time and in such circumstances that Dirichlet entered into the Mendelssohn youth movement.

The Mendelssohn Youth Movement

Fanny reported on the scene (in a 12/27/1828 letter to Klingemann): “Christmas-eve was most animated and pleasant. You know that in our house there must always be a sort of `jeune garde’ (‘young guard’) and the presence of my brothers and the constant flow of young life exercise an ever attractive influence. I must mention Dirichlet, professor of mathematics, a very handsome and amiable man, as full of fun and spirits as a student, and very learned.” Fanny’s sister, and Dirichlet’s future wife, Rebecca, was also at that Christmas party. We may assume that some or all of the sixteen-member `Saturday-night chorus’ were there.

Also in attendance was Fanny’s longtime love, Wilhelm Hensel, back in Berlin for two months now. He had just returned from five years of study of Renaissance art in Italy. Wilhelm, now 33, and a talented artist, had fought as a young man in the German Liberation Wars against Napoleon. Now, he had returned to Berlin to win Fanny as his wife (which somehow involved conquering Fanny’s mother, Lea). A month later, the engagement was announced.

Fanny also mentions three of the suitors of Rebecca (who would all lose out to Dirichlet):

* Professor Eduard Gans – “We see him very often, and he has a great friendship for Rebecca, upon whom he has even forced a Greek lesson, in which these two learned persons read Plato. It stands to reason that gossip will translate this Platonic union into a real one…” (Gans was the Jewish student of Hegel, covered in Steve Meyer’s “Fidelio” article on the Haskalah.) Gans had been active in Jewish causes early on, but he converted in 1825 so that he could become a professor.

* Johann Gustav Droysen, historian and philologist – Though only 19 years old, Fanny recognized in him “a pure, poetic spirit and a healthy amiable mind.” Droysen published a translation of Aeschylus and the famous work on Alexander the Great, both before he was twenty-five.

* Heinrich Heine, poet – “Heine is here… [H]is {Reisebilder} contain[s] delightful things; and though for ten times you may be inclined to despise him, the eleventh time you cannot help confessing that he is a poet, a true poet!” Once, he sent, via his close friend Droysen, his greetings to the 18-year-old Rebecca: “As for chubby Rebecka, yes, please greet her for me too, the dear child she is, so charming and kind, and every pound of her an angel.” It seems that Heinrich Heine’s brand of courtship of Rebecca was little different from his treatment of everything else in life.

“St. Matthew’s Passion”

Now picture Dirichlet in the observation hut in the garden at 3 Leipziger Strasse. Close by is the summer house, where Felix and Fanny worked out, with four hands at the piano, the voicing and composition of Bach’s “St. Matthew’s Passion” – left unperformed since Bach premiered it in 1729. In January, 1829, soon after Dirichlet had arrived on the scene in the Mendelssohn youth movement, Eduard Devrient and Felix Mendelssohn decided upon an historic March public performance, despite the discouragement of the music authorities. They knew that they had to defy the professional advice. As described years later by Fanny’s son, the appropriately-named Sebastian Hensel: “Only just then the most intelligent musical people began to comprehend that something must be done to bring this treasure to daylight, and that this was in a musical point of view the greatest task of the period.”

After hiring a hall, and with a performance six weeks away, the chorus swelled from 16 to 400 members, and the initial group had the ‘Monge brigade’ project of rapidly educating all the new-comers. Fanny described this rare and sublime process: “People were speechless with admiration, and faces grew long with astonishment at the idea that such a work could have existed unbeknownst to them… Once they grasped that fact, they began studying the work with warm and veritable interest. The enthusiasm of the singers, from the first rehearsal on – how they poured their heart and soul into the work; how everyone’s love of this music and pleasure in performing it grew with each rehearsal – kept renewing the general wonder and astonishment.” This process created “so lively and detailed an interest that all the tickets were sold the day after the announcement of the concert, and they had to refuse entrance to more than a thousand people… [At the concert itself,] I was sitting in the corner [of the massive chorus] so as to see Felix well, and I had arranged the strongest alto voices near me. The choruses were impassioned with extraordinary strength tempered with a touching tenderness, as I had never heard them before… [A] peculiar spirit and general higher interest pervaded the concert, that everybody did his duty to the utmost of his powers, and many did more…”

And, after the sublime, the ridiculous! At least one Berliner seemed to remain untouched: After the concert, at a celebratory dinner, Devrient’s wife, Therese, sat between Felix and an obnoxious professor, who kept trying to get her drunk: “He clutched my wide lace sleeve in an unrelenting grip… to protect it, he said! And would every so often turn toward me; in short, he so plagued me with his gallantries that I leaned over to Felix and asked: `Tell me, who is this idiot beside me?’ Felix held his handkerchief over his mouth for a moment – then he whispered: ‘The idiot beside you is the celebrated philosopher Hegel!'”

Such were the circumstances of Dirichlet’s first year in Berlin. By 1831, Dirichlet and Rebecca Mendelssohn were engaged, and by 1832, married. They were considered to be, in the extended Mendelssohn family discussions and debates, the most revolutionary. The couple had four children. Rebecca died late in 1858, age 47 (evidently of a similar type of stroke as had felled her older sister, 43, and brother, 39, a decade earlier). Dirichlet’s compromised health declined further, and he followed her to the grave five months later, May 5, 1859.

Dirichlet’s Republican Background and LaFayette’s July 1830 Revolution

As a youth of 17, Dirichlet was studying Gauss’ {Disquisitiones Arithmeticae}, when he was sent to study in Paris. According to his nephew, Sebastian Hensel, Dirichlet was introduced there to General Foy by a republican associate of Dirichlet’s parents, one Larchet de Charmont.[2] Foy employed Dirichlet as a tutor in his household from the summer of 1823 until Foy’s death in November, 1825. Foy was in France’s chamber of deputies, and was the leader of the opposition to the royalist restoration of the 1815 Congress of Vienna. Dirichlet thrived in this environment: “… [I]t was very important for his whole life that General Foy’s house – frequented by the first notabilities in art and science as well as by the most illustrious members of the chambers – gave him an opportunity of looking on life in a larger field, and of hearing the great political questions discussed that led to the July Revolution of 1830, and created in him such a vivid interest.” (Hensel’s {The Mendelssohn Family}, Vol. I, page 312.)

The July Revolution of 1830 was led by LaFayette, and was at best a mixed affair. It overthrew the reactionary arrangements of the Congress of Vienna, and set up a tenuous arrangement whereby Louis Philippe, the “Citizen King,” would be a constitutional monarch. LaFayette gambled that this might work, as the “Citizen King” had pledged to be subservient to the written constitution. Two items of note reflect Foy’s connections to the 1830 Revolution: In October, 1825, a few weeks before his death, Foy had troubled himself to write to LaFayette; and in 1823, Foy had sent from his care the 21-year-old Alexandre Dumas (three years Dirichlet’s senior) to be Foy’s agent in the household of Louis Philippe. Later, in 1830, Dumas would serve as a captain in LaFayette’s National Guard. Dumas had sought Foy’s guidance, as Foy himself had earlier, in the 1790’s, looked to Dumas’ father, General Alexander Davy Dumas, as his military and political leader. General Dumas was first a hero of the French army, who then became an early opponent of Napoleon’s imperial ambitions. He was part of the 1798 invasion of Egypt, but was imprisoned by Napoleon from 1799 to 1801 for publicly opposing Napoleon’s imperial turn. (Similarly, Beethoven at this time also had hopes for Napoleon that he quickly recognized were greatly mistaken.) After the imprisonment, Napoleon’s harsh treatment of General Dumas led to his early death at age 44 in 1806, leaving behind his four-year-old son.

After Foy died in November, 1825, there was a competition between Alexander Humboldt and Fourier for Dirichlet’s services. Fourier, according to Hensel, “tried to avail himself of Larchet de Charmont’s influence, to induce him [Dirichlet] to return to Paris, where he felt sure it was his vocation to occupy a high position at the Academy.” Humboldt arranged for Dirichlet, then 21, to teach at Breslau, 1826-1828, and then brought him to Berlin in 1828, where he was the professor of Mathematics at the Berlin Military Academy, and where he joined the Mendelssohn youth movement.

LaFayette, Dumas, Galois, Poe and Heine

Alexander von Humboldt returned to Paris in 1830 because of the ripened political situation. Cauchy – the Emperor of mathematics – had to flee Paris in July, 1830, when his King was deposed. For a short period, LaFayette thought that they could control the new “Citizen King.” However, within a few months the financiers moved in and gained the upper hand in running the king, Louis Philippe. In December, 1830, they succeeded in the arrest of the nineteen leaders of LaFayette’s republican National Guard, the key defenders of the constitution. LaFayette testified at the March, 1831 trial; and the jury found them all not guilty.

At the celebratory dinner for the “19” were, among others, LaFayette, Dumas and another brilliant student of Gauss’s work, Evariste Galois. (The latter had been, along with Niels Abel, a victim of Cauchy’s ham-handed skullduggery at the head of the French Academy of Science.) At the dinner, Galois evidently made a notorious toast to Louis Philippe’s health, while putting his other hand on his sword, and adding that the king had better not fail in his duty to the constitution. Dumas reports that at that point, several of the attendees, including himself, jumped out of the windows of the hall, fearing, accurately, that the spies at the event would bring the police.[3] Galois was arrested, tried and released, when the jury refused to convict him.

He was re-arrested that summer, 1831, by the police prefect, Gisquet, for wearing a republican guard uniform in public. Gisquet avoided the path of the previously unsuccessful trials, and instead kept him in jail with no trial until the next spring – when his release, and the setup of his fatal `duel’, fell hard one upon the other. When Galois’ suspicious death roused a crowd to come to his funeral, and a public accounting was threatened, Gisquet carried out, the night before the funeral, pre-emptive arrests of Galois’ friends.

Which of these events in Galois’ last year, 1831/2, were witnessed by Edgar Allan Poe is unclear, but clearly Poe’s “The Purloined Letter” skewers Gisquet (the “prefect G -“), and, by inference, celebrates the “poet-mathematician,” Galois. While Poe does also refer, and explicitly so, to the mathematician, Charles Auguste Dupin (the historical figure that was, literally, a member of the Monge brigade, having been taught by Monge), Poe’s “poet-mathematician” image does not need to be `reduced’ to one individual. However, the politically-sensitive case of Galois at the time of Poe’s visit to Paris, and the reference to the “prefect G-“, makes it clear that the Galois case would have been understood by astute readers of Poe’s time. Regardless, Poe’s “poet-mathematician” image would appropriately apply to any of the leading (1820’s) students of Gauss: Galois, Abel or Dirichlet. Poe’s “poet-mathematician” would have been fully at home in the Mendelssohn garden at 3 Leipziger Strasse.

Finally, Heine, upon the news of the July Revolution, decided to leave Berlin for Paris. He would have been there, with Alexander von Humboldt, during these events. His early work in Paris during this period may be examined in his {The Romantic School}, where he diagnosed for the French and the Germans, the evil medievalism of the cultural string-pullers that had deliberately set out to murder the Germany of Moses Mendelssohn, Lessing and Schiller. No successful European revolution could proceed without dealing with these skeletons. And none did.

The rapid sketch, above, is only a beginning suggestion as to the interplay of: Gauss’s {Disquisitiones Arithmeticae}; the healthy benefits of opposing evil (e.g., the imperial beastman, Napoleon); the children and grandchildren both of Moses Mendelssohn and of the American Revolution in Europe; and the passion of magnetic measurements and the revival of Bach’s “St. Matthew’s Passion.” Much more can, and should be, covered in this specific period, regarding the activities of J. F. Cooper, J. Q. Adams, LaFayette, Friedrich List, Poe, etc. But this abbreviated historic sketch, centered around Dirichlet, should take us back to the Gauss/Dirichlet/Riemann dialogue somewhat refreshed.


[1] J.S. Bach had composed and performed this work in Leipzig, 1729. The manuscript was given to Felix by his aunt Sarah Itzig Levy, a proponent of Bach. (Otherwise, one could say that it was fortunate that Felix Mendelssohn had exactly sixteen friends to cover the four quartets of soprano/alto/tenor/bass – but it were more likely that the orbit defined the planet; that is, the Bach project cemented the potential friendships.)

[2] Larchet is unknown to this author. Since it is thought that Dirichlet’s parents were active republicans who had to leave Napoleonic France years before, and since Larchet de Charmont was a friend both of Foy and of Dirichlet’s parents, it were likely that they shared their anti-Napoleonic republicanism.

[3] Recall that Dumas is also the one who made the knowing allusion, as part of Dumas’ typically `factitious’ fiction, to Poe’s stay in Paris. This is the reference that Allen Salisbury reported on years ago in his “Campaigner” article on Poe.

Understanding Nuclear Power, #3


[Figures available at]

The discovery of radioactivity and its properties in the period from 1896-1903 created a crisis in physical chemistry. The phenomena seemed to challenge several fundamental axioms of science. These were: (1) Carnot’s principle describing the relationship of heat and work, and (2) the principle, which had guided all chemical investigations since Lavoisier, that no new element was created or destroyed in a chemical transformation–a principle sometimes known as the indestructibility of matter. In the usual textbook approach, these paradoxes are passed over quickly, and the problems “solved” by the modern theory of radioactive decay and nuclear transformation. It is much more fun to look at the real papers from the period, to puzzle over the mystery, and work through the process of hypothesis formation and experiment by which the paradoxes were resolved. That is the only way to get any real understanding of what nuclear science is about. Here we will try to summarize some of the basic material which is to be mastered.

In the French scientific journal {Comptes Rendus} of December 1898, a note co-authored by Pierre and Marie Curie, and G. Bemont, describes the properties of a new and strongly radioactive substance extracted from the ore of pitchblende. The new substance possessed many properties analogous to those of barium, and the team had made considerable effort to be sure it was not some unique form of the element barium. They called this new substance {radium.} In an earlier note the same year, this team of collaborators had described another radioactive substance separated from the same ore, this one sharing similar properties with the metal bismuth. They called it {polonium.} [fn 1]

“Radioactivity” had been discovered just two years earlier by Henri Becquerel. The curious emissions from uranium ore which he discovered, while looking for something else, were first called Becquerel rays. Marie Curie first used the term “radioactivity” in 1898, when she discovered that minerals containing the element thorium also showed these properties. Becquerel had been studying phosphorescence, a property of certain materials which glow in the dark after exposure to light. He had been curious whether the phenomenon of phosphorescence might in some way be related to the peculiar x-rays which had just been discovered in 1895. As these curious things come up again in our story, we will pause here to briefly explain them.

X-rays were first discovered in a simple apparatus called a Geissler or cathode ray tube. A tube of glass is formed with a metal electrode inserted into each end. The air in the tube is pumped out by a vacuum pump, until only a small amount remains inside; or, other gases are introduced in very small amounts. When a voltage is applied across the electrodes, the interior of the tube begins to glow, its color dependent on the gas contained. The neon lights in signs are a familiar example of such a device.
The behavior of gases in apparatus such as these had been under study since the 1840s by Auguste de la Rive, a collaborator of Ampere. Studies of the tubes were made in Germany in the 1850s, and they received the name Geissler tubes after the Bonn instrument maker Johann Geissler. The alternate name, cathode ray tube, came about after Eugen Goldstein discovered in 1876 that a faint ray could be seen propagating from the negative electrode (cathode) to the positive anode. With a high voltage, it was noticed that the glass of the tube also develops a glow. Experimenting with such devices in 1895, Wilhelm Conrad Roentgen observed something really unusual. A faint green light which developed at the wall of his tube was passing through nearby materials, including paper, a book, and some wood. As he tried putting other materials in front of the tube, he saw the bones of his hand projected on the wall! He described the phenomenon in a paper in 1896, calling the new rays “radiation X,” or X-rays. They are also known as Roentgen-rays.

– Radioactivity –

Reports of this exciting discovery spread quickly, and Becquerel wondered if the phenomenon of phosphorescence he was investigating might be related to this radiation X. One of his experiments had been to place each of the mineral samples which showed phosphorescence over a photographic plate, wrapped in black paper and left in the dark. All results were negative, until he tried minerals containing the element uranium. (The element uranium had been discovered by Martin Klaproth in 1789 in ores containing the mineral pitchblende; at the time it was primarily used as an additive in the glassmaking process for giving color to glass.) Becquerel’s uranium samples caused the photographic plate to darken. The darkening occurred even if the uranium had not been previously exposed to light, so it was clear the phenomenon was not due to phosphorescence. The radiation was passing through the black paper and exposing the photographic plate. Perhaps it was the radiation X?

Pierre and Marie Curie soon began experiments with samples of uranium ore, most of them obtained from mines in Bohemia, then part of Austria. While still supposing that the effect might be due to the radiation X, their work led to the discovery of a very important anomaly. The work began with the creation of a device for measuring the activity of the sample more accurately than could be done with a photographic plate. It had been found that these substances had the property of making the air around them conductive. To measure how much, the sample was ground into a powder and placed on the lower of two parallel metal plates (B). (See Figure 1.) This plate was attached to a set of batteries producing a potential usually around 50 or 100 volts. The upper plate (A) was attached through a switch to ground. A radioactive substance would cause the air to become conductive, allowing a current to flow through plate A to ground when the switch was closed. When the switch was opened, the upper plate developed a charge whose value could be determined by the electrometer (E in the figure). The quantity of charge produced was considered a measure of the radioactivity of the substance. A device developed by Pierre Curie from his studies of the piezoelectric properties of crystals, the quartz piezoelectric balance, greatly improved the accuracy of the electrometer. (See Denise Ham’s article in {21st Century,} Winter 2002.)
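The figure of merit in this method is simply the rate at which charge accumulates, i.e., the ionization current, and measuring samples for equal times gives a ratio of activities. A toy sketch (all numbers here are assumed for illustration, not historical values from the Curies' notebooks):

```python
# Toy illustration of the Curie measurement principle: radioactivity is
# read as an ionization current, the charge collected per unit time.
def ionization_current(charge_coulombs, seconds):
    return charge_coulombs / seconds

# Comparing two samples measured over the same interval gives a relative
# activity, independent of the interval chosen (hypothetical readings):
ore = ionization_current(3.5e-11, 100.0)      # assumed pitchblende-ore reading
uranium = ionization_current(1.0e-11, 100.0)  # assumed pure-uranium reading
print(ore / uranium)  # the ore reads ~3.5 times the activity of pure uranium
```

It was exactly such a ratio, an ore several times more active than the pure uranium extracted from it, that pointed the Curies toward the new elements, as described below.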

Being accomplished chemists, the Curies tried experiments to remove the uranium from the pitchblende ore. By subjecting samples of the ore to acid, they could cause much of the uranium to precipitate out as a salt. When these samples of ore with much of the uranium removed were placed in the measuring device, a remarkable thing happened: They showed more radioactivity than the ore samples containing uranium. The Curies then isolated pure uranium metal from the ore and compared its activity. The ore samples they had from several Austrian mines showed a radioactivity three to four times greater than the pure uranium. They became convinced that a new element, many times more active than uranium, must be present in the ore. They began a process of chemical separation. Aided by their precision device for measuring radioactivity, they were able to separate out the portions of the ore which showed greater radioactivity. By June 1898, they had separated a substance with 300 times the radioactivity of uranium. They supposed they had found a new element, which they named {polonium,} after Marie Sklodowska Curie’s embattled Poland. There was still some doubt as to whether it was an element. It had not been isolated yet, but always appeared with the already known element bismuth. By December of 1898, the Curies had separated another product from the Bohemian ores which showed strong radioactive properties. This one appeared in combination with the known element barium, and behaved chemically much like barium. Again, it had not been isolated in a pure form, and there was uncertainty as to whether it was a distinct element. Spectral analysis showed mostly the spectral lines characteristic of barium, but their friend, the skilled spectroscopist Demarcay, had detected a very faint indication of another line not seen before. [fn. 2] On the basis of the chemical and spectral evidence and the power of its radioactivity, the Curies supposed it to be a new element, which fit in the empty space in the second column (Group II) of Mendeleyev’s periodic table, below barium. They named it {radium.}

The Curies now dedicated themselves to obtaining pure samples of these new elements. It took four years of dedicated labor, working heroically under extremely difficult conditions, to isolate the first sample of pure radium. Polonium proved more difficult. While they were engaged in this effort, research was under way in other locations, sparked by the earlier papers of Becquerel and the Curies’ announcement of two new radioactive elements.

One of the most important lines of development led to the discovery that there was more than one type of radiation coming from the radioactive substances. Becquerel had already reported from his early experiments with uranium that he suspected this to be the case. In 1898 Ernest Rutherford, a young New Zealander working at the Cavendish Laboratory in England, used an apparatus based on the Curies’ radiation detector to examine the radiation from uranium in a slightly different way. He placed powdered uranium compounds on the lower metallic plate of the Curie apparatus described above, and covered it with layers of aluminum or other metal foils. It was found that most of the radiation, as measured by the charge collected on the upper plate, was stopped by a single thin layer of foil. But some of it got through, and was only stopped after a considerable number of layers had been added. The conclusion, already suggested by earlier work of Becquerel, was that there were at least two different types of radiation, to which Rutherford gave the name {alpha rays} for the less penetrating, and {beta rays} for those which were stopped only by more layers of foil.
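Rutherford's observation, that most of the ionization vanishes after one foil while a remainder persists through many, is just the signature a two-component mixture would produce. A toy model with assumed numbers (the fractions and the attenuation constant below are illustrative, not Rutherford's measurements):

```python
import math

def transmitted(n_foils, alpha_fraction=0.9, beta_mu=0.3):
    """Toy two-component model of the foil experiment.
    The 'alpha' component is stopped entirely by the first foil; the
    'beta' component is attenuated roughly exponentially, by a factor
    exp(-beta_mu) per foil. All parameters are assumed for illustration."""
    alpha = alpha_fraction if n_foils == 0 else 0.0
    beta = (1 - alpha_fraction) * math.exp(-beta_mu * n_foils)
    return alpha + beta

# One foil removes most of the signal; the rest fades only gradually.
for n in range(5):
    print(n, round(transmitted(n), 3))
```

The sharp drop after the first foil, followed by a slow exponential tail, is how a single absorption curve can betray two distinct kinds of rays.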

In 1899, three different groups of experimenters (Becquerel in France; Stefan Meyer and E. von Schweidler in Austria; and Friedrich Giesel in Germany) found that the radioactive radiations could be deflected by a magnetic field. A sample of the substance was placed in a lead container with a narrow mouth, so that radiation could only escape in one direction. The container was placed between the poles of a powerful electromagnet, and it was found that the emerging radiation was curving in the same direction as had been observed with the cathode rays mentioned above (Figure 3). It had recently been demonstrated that these cathode rays were electrical particles of negative charge, to which G. Johnstone Stoney had given the name {electron.} Thus, it was supposed that radioactive substances were probably giving off electrons.

More careful experiments by Pierre and Marie Curie in 1900 showed that only a part of the radiation was deflected by the magnet. Marie Curie then showed that the undeflected part of the radiation had a lesser penetrating power. It was thus likely that the rays which behaved like electrons were what Rutherford had named beta radiation, and the other part the so-called alpha radiation. It was to take a few more years before these were identified. Under a stronger magnetic field, the alpha rays could be deflected by a smaller amount, and in the opposite direction from the beta rays, indicating that they were more massive and positively charged.

A laboratory anecdote recounted by Marie Curie in her doctoral thesis provides a striking illustration of the identity of the radiation from radium with electricity. In preparation for opening a sealed glass vial containing a solution of radium salt, Pierre scored a circle around the vial with a glass cutter. He immediately received a considerable shock. The sharp edge made by the glass cutter had permitted the sudden discharging of the electrical charge accumulated on the container, according to a simple principle which readers of Benjamin Franklin’s writings on the lightning rod will recognize. [fn. 3]

– Induced Radioactivity and Transmutation – One other paradoxical phenomenon first observed by the Curies is important to the next step in the understanding of radioactivity. In their work with radium, the Curies had noted that every substance which remained for a time in the vicinity of a radium salt (usually radium chloride) became radioactive. The radioactivity disappeared some time after the substance was removed from the presence of the radium. They called this new phenomenon {induced radioactivity.} Careful studies of the rate of decay of the radioactivity showed that it declined according to an asymptotic law. The effect was independent of the substance put in the vicinity of the radium; glass, paper, and metals all acquired the same degree of induced radioactivity. The induced radioactivity was greater in closed spaces, and could even be communicated to a substance through narrow capillary tubes. The air or other gas surrounding the radium was found to be radioactive, and if captured and isolated it would remain active for some time.

Many things suggested that the induced radioactivity might be due to a new gaseous element. But the Curies carried out spectral analysis of the gas found around radium, and found no evidence of the presence of a new element. A peculiar experiment carried out in 1900 by a very peculiar English scientist, Sir William Crookes, set the stage for the next big step in the understanding of radioactivity. Crookes added ammonium carbonate to a solution of uranium nitrate in water, causing a precipitate to form and then redissolve, leaving a small quantity of a residue which resembled a tuft of wool. He found the residue to be very radioactive, as determined by its effect on a photographic plate, while the remaining solution was virtually inactive. Crookes concluded that this new substance, to which he gave the name uranium X, was the radioactive component of uranium, and that Becquerel and the Curies were mistaken in supposing that radioactivity was an inherent property of the element uranium.

Becquerel tried a similar experiment, precipitating barium sulfate from a solution of uranium. He found that the barium sulfate precipitate was radioactive, while the solution, which still contained all of the uranium, was not. However, he could not accept Crookes’ conclusion, arguing that “the fact that the radioactivity of a given salt of uranium obtained commercially, is the same, irrespective of the source of the metal, or of the treatment it has previously undergone, makes the hypothesis not very probable. Since the radioactivity can be decreased it must be concluded that in time the salts of uranium recover their activity.” [fn 4]

To prove his supposition that the uranium would recover its activity, Becquerel set aside some of the inactive uranium solution and its radioactive barium sulfate precipitate for a period of 18 months. Late in 1901, he found that the uranium had completely regained its activity, whereas the barium sulfate precipitate had become completely inactive. Becquerel wrote: “The loss of activity … shows that the barium has not removed the essentially active and permanent part of the uranium. This fact constitutes, then, a strong presumption in favor of the existence of an activity peculiar to uranium, although it is not proved that the metal be not intimately united with another very active product.” [fn 5]

Relocated to McGill University in Montreal, Ernest Rutherford, working with the young Oxford chemist Frederick Soddy, took the next crucial step in resolving the paradox. In place of uranium X, they created a radioactive residue from a precipitate of thorium, which they called thorium X. Like Crookes’ uranium X, the residue showed all the radioactivity, whereas the thorium which remained in solution appeared inactive. But the activity of the substances was such that after only a few days they observed what Becquerel had seen after 18 months. The thorium X lost some of its radioactivity, while the thorium from which it had been obtained, which was kept a considerable distance away, regained some of its activity. A quantitative study showed that the rates of decay and recovery of activity in the two substances were the same, about one month. The famous chart depicting their relative activity is pictured in Figure 4. Rutherford and Soddy repeated the observations using uranium X, and found the same effect occurring over a longer time span, about six months. These observations were considered together with the anomalous phenomenon of induced radioactivity discovered by the Curies. Rutherford had carried out his own investigations and concluded in 1900 that the induced radioactivity was due to a radioactive gas, which he called an emanation. The work with thorium X showed evidence of an emanation, which we know today as the radioactive gas radon.
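The mirror-image decay and recovery curves of Figure 4 follow a simple exponential law: the separated thorium X loses activity at the same rate at which the parent thorium regains it. As a minimal illustrative sketch (not the authors' own calculation; the 3.64-day half-life of thorium X, i.e. radium-224, is the value given later in this article):

```python
import math

def decay_activity(a0, half_life, t):
    """Activity of the separated decay product (e.g., thorium X), falling exponentially."""
    lam = math.log(2) / half_life      # decay constant
    return a0 * math.exp(-lam * t)

def recovery_activity(a0, half_life, t):
    """Activity regained by the parent as fresh decay product accumulates."""
    lam = math.log(2) / half_life
    return a0 * (1.0 - math.exp(-lam * t))

HALF_LIFE = 3.64   # days: thorium X (radium-224)
for day in (0, 4, 8, 16, 32):
    print(day,
          round(decay_activity(100.0, HALF_LIFE, day), 1),
          round(recovery_activity(100.0, HALF_LIFE, day), 1))
```

At every moment the two activities sum to the original total, which is why the two curves cross and mirror one another, as in the famous chart.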

Rutherford and Soddy now drew a radical conclusion from these results. They posited that the atoms of the radioactive elements were undergoing a spontaneous disintegration. By the emission of an alpha or beta particle they were changing to form a new element, and they posited that this process continues in a series, at a different rate for each step. They summarized the viewpoint in the introduction to their first paper on the subject, in 1902:

“Radioactivity is shown to be accompanied by chemical changes in which new types of matter are being continuously produced. These reaction products are at first radioactive, the activity diminishing regularly from the moment of formation. Their continuous production maintains the radioactivity of the matter producing them at a definite equilibrium-value. The conclusion is drawn that these chemical changes must be sub-atomic in character.” [fn 6]

As later developments were to show, Rutherford and Soddy were fully correct in their general statements, even if some of the details required further elaboration. It could be argued, as the Curies and Becquerel did, that there was not sufficient evidence to support the hypothesis with certainty when put forward in 1902. I am not sure at what point they became fully convinced. In 1903, when the Curies and Henri Becquerel gave their Nobel prize acceptance speeches, they were still cautious about the Rutherford-Soddy hypothesis. One reason for the caution was that chemistry since the time of Lavoisier had relied on the assumption of the stability of the elements. Transmutation was associated with the unscientific practices of alchemy. An assumption underlying all of Lavoisier’s experiments was that in the course of a chemical reaction, the weight and elemental identity of the products would not change. Mendeleyev underlined this point in the preface to the Seventh Russian edition of his textbook, {Principles of Chemistry,} written in St. Petersburg in November 1902. By the dating, one suspects that Mendeleyev may have been adding his voice to the skepticism concerning the Rutherford-Soddy hypothesis. [fn 7]

Today it is well understood that the radioactive elements uranium, thorium, and plutonium pass through a decay series by which they are transformed successively down the periodic table until arriving at a stable form of lead (atomic number 82). There are four known decay series: those of uranium-238, uranium-235, thorium, and plutonium. Without any interference by man, all of the elements above lead are continuously undergoing such transmutation in the Earth. Elements such as radium, polonium, and radon are steps on this path, appearing temporarily and then decaying to pass over into other elements.

In 1903, Soddy, with William Ramsay, established the identity of the alpha particle with helium. Later the alpha particle was understood to be the ionized (positively charged) nucleus of helium with its two electrons stripped off. As we understand it today, when an element emits an alpha particle it is transformed two steps down the periodic table. But before this could be fully grasped, two important new concepts had to emerge: the notion of atomic number, which describes the number of positive charges or protons in the nucleus, and the existence of isotopes–nuclei of the same charge but different atomic weights. These conceptions, along with the picture of the atom as consisting of a compact, positively charged nucleus surrounded by distant electrons, emerged in the period about 1909-1913. With the addition of one more conception, the neutron, which was first proposed in the early 1920s by Robert J. Moon’s teacher, William Draper Harkins, and experimentally established in 1932 by Chadwick, it became possible to explain the radioactive decay series with precision. So, for example, when the abundant isotope of uranium, U-238, emits an alpha particle, it transmutes two atomic numbers down to become 90-thorium-234. Now, thorium-234 is a beta emitter. We view the beta emission as resulting from the decay of a neutron in the nucleus. Harkins first conceived the neutron as an electron condensed on a proton. (When it was detected experimentally, the neutron was found to be a neutral particle with a mass almost exactly equal to the sum of the masses of the electron and proton.) When it decays, the neutron throws off the very light electron and leaves the more massive proton behind, increasing the charge of the nucleus by one. Thus beta decay causes the atomic number to increase by one, without changing the atomic weight. 90-thorium-234 becomes 91-protactinium-234. This is also a beta emitter, which thus decays to 92-uranium-234.
(Notice that we have gone two steps down and two steps back up, but we are at a much lighter isotope of uranium.) From here the U-234 emits an alpha particle to become 90-thorium-230. This emits an alpha particle to become 88-radium-226, which emits an alpha particle to become 86-radon-222 (see Figures 5a,b). To add to the fun, each of these decay products has its own rate of decay, which is measured as a half-life, the time it takes for one half the mass of the substance to disappear. For some substances in the decay chain, this is quite fast–3.82 days for radon-222, for example, and 0.00016 seconds for polonium-214. Others give off their radiation at a much slower rate–uranium-238, for example, takes 4.5 billion years to lose half its mass. When Becquerel, the Curies, and the other early experimenters were detecting the radioactivity of uranium, most of the emissions they detected were not from the uranium, but rather from the decay products mixed in with it. Crookes’ creation of uranium X was thus actually the chemical separation of the decay product, thorium-234, from the uranium. As the half-life of thorium-234 is just 24.1 days, it was emitting radiation millions of times faster than the uranium. Actually, the uranium itself was a mixture of the slow-decaying U-238 (half-life = 4.5 billion years), U-235 (half-life = 713 million years), and the decay product U-234 (half-life = 248,000 years). This is why the uranium-X sample at first showed such a high activity, while the remaining uranium seemed inactive. Over time, the uranium X lost its activity by decay, while the mixture of uranium isotopes slowly built back up its decay products, thus increasing the measurable activity of that portion. It was not the uranium emission that was increasing, but the emission from its faster-decaying products.
The radon gas which was also a part of the decay chain was what Rutherford had called the {emanation.} Part of the difficulty of detecting it was its short half-life. Rutherford’s thorium X was what is now known as radium-224. It decays, with a half-life of 3.64 days, by alpha particle emission to radon-220, the emanation.
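The alpha and beta rules described above amount to simple bookkeeping: an alpha emission lowers the atomic number by two and the mass number by four, while a beta emission raises the atomic number by one and leaves the mass number unchanged. A short sketch (illustrative only) traces the opening steps of the uranium-238 series as just recounted:

```python
def decay(z, a, mode):
    """Apply one decay step to a nucleus of atomic number z and mass number a."""
    if mode == "alpha":
        return z - 2, a - 4    # alpha: lose 2 protons and 2 neutrons
    if mode == "beta":
        return z + 1, a        # beta: a neutron becomes a proton
    raise ValueError(mode)

# The opening of the uranium-238 series, as traced in the text:
# U-238 -> Th-234 -> Pa-234 -> U-234 -> Th-230 -> Ra-226 -> Rn-222
chain = ["alpha", "beta", "beta", "alpha", "alpha", "alpha"]
z, a = 92, 238                 # start at 92-uranium-238
steps = [(z, a)]
for mode in chain:
    z, a = decay(z, a, mode)
    steps.append((z, a))
print(steps)   # ends at (86, 222), i.e. 86-radon-222
```

Note how the two beta steps bring the chain back up to element 92, but at the lighter mass number 234, exactly the "two steps down and two steps back up" remarked on above.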

By extrapolating the rate of decay of natural uranium, we can determine that about 4.5 billion years ago there was twice the amount of uranium-238 in the Earth as today. Half of it has undergone a transmutation in that time span, which is thought to be about equal to the age of the Earth. Radium, polonium, radon gas, and the other elements above lead on the periodic table are all temporary appearances on their way to becoming something else. It is not out of the question that all the 92 elements are undergoing natural transmutation, and that those we call stable are simply decaying on a time scale longer than we have been able to observe. In any case, by artificial means, such as collision with a charged particle from an accelerator, and with enough expenditure of energy, we can today transmute virtually any element into any other. The alchemists’ dream of transmuting base metals into gold is thus achievable, and has been demonstrated in the laboratory. This, however, can only be accomplished in very small amounts, and at a high cost, so that even with Weimar rates of hyperinflation, laboratory transmutation is not presently a viable means of producing the metals we need.
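The back-extrapolation in the paragraph above is just the half-life law run in reverse: a quantity that halves every period T must have been twice as large one period ago. As a one-function sketch, using the figures from the text:

```python
def amount_in_past(present_amount, half_life, years_ago):
    """Run the decay law backward: N_past = N_now * 2**(t / T)."""
    return present_amount * 2.0 ** (years_ago / half_life)

U238_HALF_LIFE = 4.5e9   # years, the half-life of uranium-238 given in the text

# One half-life ago (about the age of the Earth), there was
# twice as much uranium-238 as there is today.
print(amount_in_past(1.0, U238_HALF_LIFE, 4.5e9))   # 2.0
```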

So we see that even this non-living domain within the biosphere is not quite dead either. It is undergoing constant change of a very radical sort. Even the stable elements, whether or not they ever change their identity, are in a state of constant and very rapid internal motion and, as I believe, of continuous and very rapid re-creation on a nonlinear time scale.

– Notes –

1. {Comptes Rendus,} vol. 127, pp. 1215-1217 (1898). The earlier discovery of polonium is described in {Comptes Rendus,} vol. 127, pp. 175-178.

2. We shall have more to do with spectroscopy later. Upon heating, each chemical element shows a characteristic color. Most people have seen the green color produced in a flame by a copper-bottomed pot. If the light produced when the element is heated be passed through a prism, it is dispersed into a band of color, just as sunlight passing through a prism forms a rainbow. Within the colorful band, known as a spectrum, certain sharp and diffuse lines appear. Bunsen and Kirchhoff began work in 1858 which established a means for identifying each element by its flame spectrum (Figure 2).

3. We mention in passing one other anomaly associated with the discovery of radium: its production of light and heat with no apparent source for the energy. We will have more to say on this in coming installments. In the 1898 paper cited above, Curie, Curie and Bemont noted:

“The rays emitted by the compounds of polonium and radium make barium platinocyanide fluorescent. Their action from this point of view is analogous to that of Roentgen rays [x-rays], but considerably weaker. To make the experiment, one places on the active substance a very thin leaf of aluminum, upon which a thin film of barium platinocyanide is spread; in the dark the platinocyanide appears weakly luminous in front of the active substance.”

This property of the radioactive substances of producing light (and, it was later noted, considerable heat) without any apparent source of energy was quite paradoxical and caused the team to note at the end of the second paper of 1898: “Thus one constructs a source of light, a very weak one to tell the truth, but one that functions without a source of energy. There is a contradiction, or an apparent one at the very least, with Carnot’s principle.”

Later in her 1903 doctoral thesis, Curie noted that samples of radium are also much warmer than the surrounding air. Calorimetric measurements were able to quantify the heat produced.

Sadi Carnot’s principle, derived from his study of steam engines, stated that the work gained by use of steam depended upon the difference in the heat of the steam coming from the boiler, and the heat of the water vapor after it had done its work in expanding against a piston. Work could only be gained by transfer from a warmer to a colder body. This is the beautifully adduced principle of the operation of heat engines, which Rudolf Clausius attempted to make into a universal principle of amorality by arguing that all processes progress to a state of increasing disorder (“entropy strives toward a maximum.”) What was the source of power for the light and heat produced by these radioactive substances? In noting the apparent contradiction with Carnot’s principle, Marie Curie, the probable author of the jointly signed note, had put her finger on a new principle of power. It was to take another several decades, and the work of many teams of investigators to begin to unravel the puzzle. The answer in short, was the existence of a new domain within the microcosm, the atomic nucleus, in which processes of enormously greater raw power than could be observed on the macroscopic or chemical scale took place.

4. Cited in Samuel Glasstone, {Sourcebook on Atomic Energy} (Princeton: Van Nostrand, 1958), p. 121.

5. Cited in Glasstone, op. cit., p. 121.

6. Rutherford and Soddy, {Philosophical Magazine,} 4 (1902), pp. 370-396.

7. I have examined the circumstances surrounding the Rutherford-Soddy paper with some care. The question on my mind was how, given the evident epistemological weakness of the British school, so much of the progress in atomic science during several decades beginning about 1900 could have taken place there. A subsidiary question was how Rutherford, who by the 1920s had become such an obstacle to new ideas in atomic theory, according to the testimony of Dr. Moon and his teacher Harkins, should have taken such a bold step in 1902. I found it useful to think of the question in two aspects, both of which are clarified by examining it in the historical context.

First, at the time of Rutherford’s discovery, the British were carrying out a buildup for world war, and feared the German pre-eminence in science. For a brief window of time, a general unleashing of scientific progress was permitted. Rutherford and Soddy were both outsiders in the British class system, the one a colonial, and the other the son of a shopkeeper, permitted to carry out their work in the outpost of Montreal. Later, by the 1920s and after the great war, Rutherford had become a part of the insider establishment, which was already asserting a kind of non-proliferation doctrine. H.G. Wells’s adoption of Soddy’s work, as in his popularization of an ultimate weapon to control populations by one-world government (“The World Set Free,” 1914), exemplifies this general aspect of the problem. The later achievement of nuclear fission put nuclear science even more tightly under the control of a military-industrial elite of Wellsian predilection well known to us.

Second is the unfortunate fact that the hegemony of British empiricism, dating approximately to the death of Leibniz, has meant that progress in science has been forced to proceed largely through the resolution of experimental paradox, without benefit of the superior method of metaphysics–as Leibniz called it. We know some very few but notable exceptions, among which Riemann stands out. Otherwise, the better scientists have developed a use of the creative method, as if by instinct, drawn from cultural traditions which are not necessarily evident to them. The general demoralization which followed the First World War tended to wipe out much of the epistemological advantage which had remained in some German and French scientific practice from the respective Kepler-Leibniz and Ecole Polytechnique traditions. A figure such as Dr. Moon represented a countercultural trend, in the good sense of the word, embodying in his deepest moral-philosophical outlook the better aspects of the American Leibnizian tradition, even where that might not be explicitly enunciated. Moon’s creative reaction to LaRouche and Kepler in his 1986 formulation of his nuclear space-time hypothesis conclusively demonstrate that point.

Understanding Nuclear Power, #2: THE PERIODICITY OF THE ELEMENTS

Larry Hecht April 21, 2006

[Figures for this pedagogical can be accessed at:]

Dmitri Mendeleyev discovered the concept of the periodicity of the elements in 1869 while he was in the midst of writing a textbook on inorganic chemistry. The crucial new idea, as he describes it, was that when the elements are arranged in ascending order of their atomic weights, rather than simply increasing in some power or quality, he found periodically recurring properties. Mendeleyev noted explicitly that this discovery led to a conception of mass quite different from that in the physics of Galileo and Newton, where mass is considered merely a scalar property (as in F = ma). Mendeleyev believed that a new understanding of physics would come out of his chemical discovery. It did, in part, in the developments that led into the mastery of nuclear processes, even if the flawed foundations of the anti-Leibnizian conceptions injected by British imperial hegemony were never fully remedied. The development of the sort of conception connected with Dr. Robert Moon’s nuclear model will help to fulfill Mendeleyev’s insight on this account.

There are just 92 naturally occurring elements in the universe. Their existence and organization in the periodic table discovered by Mendeleyev is the most fundamental fact of modern physical science. We will soon see how the discovery of radioactivity and nuclear power, among so many other things, would not have been possible without the prior achievement of Mendeleyev. Let us first get a general idea of what the periodic table is, and then examine some of the considerations which led Mendeleyev to his formulation.

The periodic table systematizes the 92 elements in several ways (Figure 1). The horizontal rows are known as {periods} or {series}, and the vertical columns as {groups}. The simplest of the organizing principles is that the properties of the elements in a group are similar. Among the many properties which elements in a group share: their crystals, and the crystals of the compounds which they form with like substances, usually have similar shapes. Elements in the same group tend to combine with similar substances, and do so in the same proportions. Their compounds then often have similar properties. Thus sodium chloride (NaCl), which is table salt, and potassium chloride (KCl) are formed in the same 1:1 proportion, and show similar chemical and physical properties. Partly because they tend to make the same chemical combinations, the members of a group, and sometimes adjacent groups, are often found together in ore deposits in the Earth. For example, copper usually occurs in ores with zinc and lead, or with nickel and traces of platinum. If you look at a periodic table, you will see these elements in nearby adjacent columns. Or for another example, when lead is smelted, trace amounts of copper, silver, and gold (which occupy a nearby column to the left), and arsenic (in the adjacent column to the right) are found. We will look at more of these sorts of relationships shortly.

(To prevent confusion, we should interject this note of warning. When the periodic table is taught in the schools today, it is usually presented as an ordering principle for the electron shells which are thought to surround the nuclei of atoms. The modern explanation of chemical reactions invokes the interaction of the outer electrons in these shells. It is important to understand that at the time of Mendeleyev’s discovery, no chemist had any idea of the existence of an atomic nucleus or of electrons. The electron was considered a theoretical entity in the electrodynamic work of Wilhelm Weber (1804-1891), but this had little to do with chemical thinking at the time. The first approximate measure of the mass of the electron came in the first decade of the 20th century, and the validation of its wave properties came in 1926. In the prevailing view of the atom at the opening of the 20th century, there was no central nucleus, but rather a homogeneous spread of charges. Thus, to understand how Mendeleyev came to his discovery of the periodic table in 1869, we must discard most of what we might have learned of the subject from modern textbooks. If we feel a slight pang of remorse in giving up what little we think we know of the subject, we shall soon find that we are rewarded by a far greater pleasure in discovering how these discoveries really came about. We shall then also be at the great advantage of knowing where the assumptions lie which will surely need correcting to meet the challenges of Earth’s next 50 years.)

By arranging the elements in increasing order of their atomic weights, Mendeleyev found that they fell into periods which repeated themselves in such a way that elements possessing analogous properties would fall into columns one below the other. Within the periods, many properties, including the valences (defining the small whole-number proportions in which the elements combine with each other), the melting and boiling points, and the atomic volumes (which we shall discuss further on), showed a progressive increase and decrease which was analogous for each period.
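Mendeleyev's ordering can be mimicked in a few lines of code. The sketch below is only an illustration, not his procedure: it uses rounded modern atomic weights, and for the later members of each period it takes the hydrogen valence as the characteristic valence. Sorting the lighter elements by atomic weight, the valence rises and falls once per period:

```python
# Each entry: (symbol, rounded modern atomic weight, characteristic valence).
elements = [
    ("Li", 6.9, 1), ("Be", 9.0, 2), ("B", 10.8, 3), ("C", 12.0, 4),
    ("N", 14.0, 3), ("O", 16.0, 2), ("F", 19.0, 1),
    ("Na", 23.0, 1), ("Mg", 24.3, 2), ("Al", 27.0, 3), ("Si", 28.1, 4),
    ("P", 31.0, 3), ("S", 32.1, 2), ("Cl", 35.5, 1),
]
elements.sort(key=lambda e: e[1])    # ascending order of atomic weight

valences = [v for _, _, v in elements]
print(valences)   # [1, 2, 3, 4, 3, 2, 1, 1, 2, 3, 4, 3, 2, 1]
```

The repeating pattern 1, 2, 3, 4, 3, 2, 1 is the periodicity itself: an element's chemical character recurs when the weight ordering is continued far enough.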

By examining these periodic properties, it was also possible to see that there were gaps in the table. Some viewed those gaps as a weakness in Mendeleyev’s hypothesis. But Mendeleyev was convinced the conception was right, and that the gaps represented elements still to be discovered. He worked out the probable properties of some of these unknown elements on the basis of their analogy to the surrounding elements. Within a few decades of Mendeleyev’s publication of his periodic concept, several of these missing elements were discovered.

For example, in the Fourth Group (the 14th in the enlarged numbering system adopted in 1984), below the column containing carbon and silicon, Mendeleyev saw that there must exist an element which was unknown at the time. He called it {eka-silicon,} the prefix {eka-} meaning {one} in Sanskrit. By looking at the properties of silicon above and of tin (Sn) below, and also of zinc and arsenic surrounding it, he could guess such properties as its atomic weight, the probable boiling point of some of its compounds, and its specific gravity. In 1886, C. Winkler, from the famous mining center of Freiberg in Saxony, found the new element in a mineral from the Himmelsfurt mine and called it Germanium. Its actual properties were found to correspond entirely with those forecast by Mendeleyev. There had also been a gap in the Third Group (the 13th in the new system) in the position just under the elements boron and aluminum. In 1871, Mendeleyev had named this still-unknown element {eka-aluminum.} In 1875, Lecoq de Boisbaudran, using techniques of spectrum analysis, discovered a new metal in a zinc blende ore from the Pyrenees. He named it Gallium. At first it seemed to differ considerably from the density Mendeleyev had predicted it would have if it was indeed eka-aluminum. But as observations proceeded, the new element was found to possess the density, atomic weight, and chemical properties which Mendeleyev had forecast.

That is the essential concept of periodicity. In order for Mendeleyev to arrive at it, a great deal of prior chemical investigation was required. Perhaps the most important prerequisite had been the discovery of new elements. The ancients knew 10 of the substances we call elements today, most of them metals. These were iron, copper, lead, tin, antimony, mercury, silver, gold, carbon, and sulfur. [fn 1] All but two of the rest were discovered in the modern era. Between 1735 and 1803, 13 new metals and four gaseous elements were discovered. In 1808, six new elements from the alkali and alkaline-earth metal groups (Groups I and II) were discovered. [fn 2] And the discoveries continued through the 19th century, capped by Marie Curie’s isolation of radium in 1898. In 1869, when Mendeleyev conceived the idea of periodicity, about two-thirds of the 92 naturally occurring elements were known. Still a few more remained to be discovered in the 20th century. And then came the synthesis of the artificial elements beyond the 92 naturally occurring ones, beginning with neptunium and plutonium.

What do we mean by an element? Chemistry deals primarily with homogeneous substances, not differing in their parts. But the fact that a substance is the same in all its parts does not distinguish it as an element. Sulfur, which we consider an element, is a yellow powder or cake, but many compounds, such as chromium salts, can take on a similar appearance. Table salt is uniform and crystalline, but not an element. We consider hydrogen gas an element, but carbon dioxide gas a compound. Sometimes elements are described as the elementary building blocks from which more complex substances are formed. But a better definition is the one Lavoisier gave, which describes an element as the result of an action, as that which cannot be further separated by chemical procedures:

“[I]f by the term {elements} we mean to express those simple and indivisible atoms of which matter is composed, it is extremely probable we know nothing at all about them; but, if we apply the term {elements,} or {principles of bodies,} to express our idea of the last point which analysis is capable of reaching, we must admit, as elements, all the substances into which we are capable, by any means, to reduce bodies by decomposition. Not that we are entitled to affirm that these substances we consider as simple may not be compounded of two, or even of a greater number of principles; but, since these principles cannot be separated, or rather since we have not hitherto discovered the means of separating them, they act with regard to us as simple substances, and we ought never to suppose them compounded until experiment and observation have proved them to be so.” [fn 3]

Lavoisier’s warning remains applicable today. By heeding it, we do not fall into the trap of supposing we are dealing with irreducible elementarities, for the history of scientific progress has shown that increasing mastery over nature always permits us to delve deeper into the microcosm. For chemical technology, the element was the irreducible substance. But later developments allowed us to reach down to the electron, the nucleus, and to subnuclear particles.

It was necessary to perform chemical operations on substances to know whether they were elements or compounds. Many things that were once considered elementary were later found to be composite. Lavoisier’s study of the separation of water into hydrogen and oxygen gas, and their reconstitution as water, is exemplary; so, too, is his demonstration that atmospheric air consists primarily of oxygen and nitrogen gas. The metals discovered in the 18th century were mostly separated from their ores by processes of chemical reaction, distillation, and physical separation.

At the time Mendeleyev was writing his textbook, experimenters had accumulated an enormous store of information concerning the properties of the elements and their compounds. Especially notable were the many analogous properties among the elements and their respective compounds. For example, lithium and barium behaved in some respects like sodium and potassium, but in other respects like magnesium and calcium. Looking at such analogies as markers of an underlying ordering principle, Mendeleyev suspected that there must be a way to find quantitative, measurable properties by which to compare the elements. There were four different types of measurable properties of the elements and their compounds which he took into consideration in formulating his concept of periodicity. He identifies these in Chapter 15 of his textbook as:

(a) isomorphism, or the analogy of crystalline forms;
(b) the relations between the “atomic” volumes of analogous compounds of the elements;
(c) the composition of their saline compounds;
(d) the relations of the atomic weights of the elements.

Think of each of these types of properties as a different means of “seeing” into the microcosm. Let us begin with the first, crystal isomorphism. When a compound is dissolved in water or some other solvent, and the water removed by evaporation or other means, it can usually be made to crystallize. All of the familiar gemstones and many rocks are crystals that have been formed under conditions present within or at the surface of the Earth. Table salt and sugar are familiar crystals. Most metals and alloys cool and harden in characteristic crystalline forms. Organic compounds, even the proteins of living organisms, can be made to crystallize for the purpose of analyzing their structure. With the development of chemistry following Lavoisier, the crystalline form began to receive more attention, and close study eventually showed that every compound crystallizes in a unique form. Many of these forms are quite similar, but careful measurement of the facial angles and the proportional lengths of their principal axes will always show some slight difference. Crystallography thus became a means of chemical analysis, and by the 1890s there existed catalogues of the crystallographic properties of nearly 100,000 compounds. [fn 4]

Despite these very fine differences, the general forms of crystals fit into certain classifiable groups. Their shapes include the cube and octahedron, hexagonal and other prisms, and a great number of variations on the Archimedean solids, their duals, and many unusual combination forms. The German chemist Eilhard Mitscherlich first demonstrated in 1819 that many compounds which have similar chemical properties and the same number of atoms in their molecules also show a resemblance of crystalline forms. He called such substances isomorphous. He found that the salts formed from arsenic acid (H3AsO4) and phosphoric acid (H3PO4) exhibited a close resemblance in their crystalline forms. When the two salts were mixed in solution, they could form crystals containing a mixture of the two compounds. Mitscherlich thus described the elements arsenic and phosphorus as isomorphous.

Following Mitscherlich, a great number of other elements exhibiting crystal isomorphism were found. For example, the sulphates of potassium, rubidium, and cesium (K2SO4, Rb2SO4, Cs2SO4) were found to be isomorphic; the nitrates of the same elements were also isomorphic with each other. The compounds of the alkali metals (lithium, sodium, potassium, rubidium) with the halogens (fluorine, chlorine, bromine, and iodine) all formed crystals which belonged to the cubic system, appearing as cubes or octahedra. The cubic form of sodium chloride (table salt) crystals is an example, as one can verify with a magnifying glass.

This was the first of the clues which suggested the concept of periodicity. When Mendeleyev arranged the elements in order of increasing atomic weights, the isomorphic substances were found to form one above the next in a single column. Thus arsenic and phosphorus were part of Group V (15, in the modern nomenclature). The alkali metals fell under Group I; the halogens became Group VII (17 in the modern nomenclature). Not only this, but the elements of the same groups combined with one another in the same proportions. Thanks to the work of Gerhardt and Cannizzaro in establishing a uniform system of atomic weights, it had become a simple matter to determine the chemical formula for a great number of substances, once the proportion by weight of the component elements had been determined. It thus turned out that the elements of the first group (designated R) combined with the elements of the seventh group (designated X) in the proportion RX, as in NaCl. The elements of the second group combined with those of the seventh group in the proportion RX2, as in CaCl2, and so forth. If the combinations with oxygen were considered (the oxides being very prevalent), the first group produced R2O, the second group RO, the third group R2O3, and so forth. This is what Mendeleyev is describing in the periodic chart we show in Figure 2. We shall save the fascinating question of the investigation of the atomic volumes and many other properties of the elements which prove to be periodic for another time, and end this exercise for now.
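The reader with access to a computer may enjoy re-enacting Mendeleyev’s procedure in miniature. The following sketch (merely an illustration of the idea, not anything Mendeleyev himself wrote) takes a handful of the lighter elements with their rounded atomic weights, sorts them in order of increasing weight, and folds the sequence into rows of seven, the period length of this early stretch of the table; the analogous elements then fall into vertical columns, the alkali metals under Group I and the halogens under Group VII:

```python
# Re-enacting the arrangement in miniature: sort a few of the lighter
# elements by (rounded) atomic weight and fold the sequence into rows
# of seven.  Analogous elements then stand in vertical columns (groups).
elements = [
    ("Li", 7),  ("Be", 9),  ("B", 11),  ("C", 12),
    ("N", 14),  ("O", 16),  ("F", 19),
    ("Na", 23), ("Mg", 24), ("Al", 27), ("Si", 28),
    ("P", 31),  ("S", 32),  ("Cl", 35.5),
]

elements.sort(key=lambda e: e[1])        # order of increasing atomic weight
rows = [elements[i:i + 7] for i in range(0, len(elements), 7)]

for row in rows:                         # print the folded table
    print("  ".join(f"{sym:>2}" for sym, _ in row))

# Each element's column number is its position in the sequence, mod 7.
group_of = {sym: i % 7 + 1 for i, (sym, _) in enumerate(elements)}
assert group_of["Li"] == group_of["Na"] == 1   # alkali metals: Group I
assert group_of["N"] == group_of["P"] == 5     # (arsenic's group, too)
assert group_of["F"] == group_of["Cl"] == 7    # halogens: Group VII
```

The fold-width of seven holds only for this short early portion of the table (hydrogen and the then-undiscovered noble gases are omitted); the later periods are longer, which is part of what makes the full periodic law so remarkable.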


1. Mining and metallurgy were clearly part of ancient science, though the thinking and discovery process is mostly lost to us. Heinrich Schliemann, the discoverer of Troy, suggests that the word “metal” came from Greek roots (met’ alla) meaning to search for things, or research. Archaeological remains indicate an ordering of discovery of the metals and the ability to work them, with copper and its alloys preceding iron, for example. Ironworking is associated with the Hittite and Etruscan seafaring cultures of Anatolia and north central Italy, who spoke a common language related to Punic or Phoenician.

2. The four gaseous elements were hydrogen (Henry Cavendish, 1766); nitrogen (Daniel Rutherford, 1772); oxygen (Carl Scheele, Joseph Priestley, 1772); chlorine (Scheele, 1774). Among the metals discovered in the 18th century were:

Platinum (Antonio de Ulloa, 1735); Cobalt (Georg Brandt, 1735); Zinc (Andreas Marggraf, 1746); Nickel (Axel Cronstedt, 1751); Bismuth (Geoffroy, 1753); Molybdenum (Carl Scheele, 1778); Zirconium (Martin Klaproth, 1778); Tellurium (Muller, 1782); Tungsten (Juan and Fausto d’Elhuyar, 1788); Uranium (Klaproth, 1789); Titanium (William Gregor, 1791); Chromium (Louis Vauquelin, 1797); Beryllium (Vauquelin, 1798).

In 1803, William Hyde Wollaston and Smithson Tennant found the elements rhodium, palladium, osmium, and iridium in platinum ore. In 1808, Humphry Davy isolated the alkali metals sodium and potassium, and the alkaline-earth metals magnesium, calcium, strontium, and barium, by electrolysis of their molten salts.

3. Antoine Laurent Lavoisier, {Elements of Chemistry,} translated by Robert Kerr, in {Great Books of the Western World,} (Chicago: Encyclopedia Britannica, 1952), p. 3.

4. In the history of physical chemistry, the study of crystals provided one of the first means of access to the microcosm. It continues to be of importance today. This is great fun, because Kepler’s playful work {The Six-cornered Snowflake,} is actually the founding document of modern crystallography. The student must take advantage of this, for the topic, as presented in the usual textbooks, is a confusion of mathematical formalisms and systems of classification. In Kepler, we see that the question is really very simple: Why is the snowflake six-sided? Why is the beehive made from cutoff rhombic dodecahedra? How shall we get an answer? It can only be by attempting to shape our imagination in conformity with the mind of the Creator. If we do not get the complete answer, we see, nonetheless, that it is through the playful exercise of the mind in advancing and pursuing hypotheses that we come closer to it.

Among the many discoveries presented in that small work, Kepler introduces the concept that the study of the close-packing of spheres, which copy the space-filling property of rhombic dodecahedra, can help to explain the mineral crystals, all of which exhibit the characteristic hexagonal symmetries. Kepler thus suggested the existence of an atomic or molecular structure within the abiotic domain. Kepler’s insights were carried forward in the study of mineral crystals, especially by the work of the Abbé Haüy (1743-1822) in France, who was followed by a great number of other investigators.