An Incredible Discovery Of Archimedes

by Jeremy Batterson

Archimedes’ discovery of the method for determining the volume of a sphere was a discovery of such beauty, and with such astonishing implications, that Archimedes, before his death, instructed that it be engraved upon his tombstone. And yet almost no one, in our day, has ever worked through its proof, although it stands as a precursor of Leibniz’s later idea of the integral, as well as, it seems to me, hinting at the existence of the unseen domain which Gauss and others would later investigate. I had thought, myself, that this proof must involve enormously complicated calculations, when, in fact, it is easily accessible to anyone who desires to do a bit of mental work, so that, once having worked it through, we realize how totally laughable it is that virtually no one who has EVER studied geometry has worked through this and other problems of the ancient Greeks. It is so ridiculous that it would be as if you were to go to a modern university to study the subject of economics and not even study LaRouche’s works. What would YOU think about such a laughable thing?

Beginning with a sphere, Archimedes circumscribed it with a cylinder whose height was equal to the diameter of the sphere it circumscribed, but whose diameter was twice the diameter of this sphere, with the two solids based on a common point, namely, the center of the base of the cylinder. Finally, at this same common point, he placed the apex of a cone, whose height was equal to that of the sphere and cylinder, and with base equal in diameter to that of the cylinder. Thus, all three solids had the same height and the same common central axis, with the cylinder having a constant diameter all up and down this common axis, but the sphere and cone having constantly CHANGING diameters along its length. To draw a cross section of this construction, which will be needed for our further elaboration, draw a circle, and denote two opposite poles of this circle as “A” and “B”, so that line AB is the diameter D of this circle. Next, draw a rectangle whose shorter side is D and whose longer side is 2D, such that the circle is exactly centered within it, with point A lying at the center of one of the longer sides of the rectangle. Let the corner of the rectangle above A be point C, and the corner below A be point F. Now, from point A, draw the two lines of maximum possible length from A to the opposite side of the rectangle; these lines terminate at the far corners of the rectangle, point P (adjacent to C) and point Q (adjacent to F), producing a triangle APQ, which is half of a square. (We avoid the letter D for these corners, since D already names the diameter.) This diagram represents a cross section of the sphere (circle), cone (triangle), and cylinder (rectangle).

And now, Archimedes did something surprising! (Remember, it was Archimedes who said: “Give me a fulcrum large enough and I will move the earth!”) If you ever played on a see-saw as a kid, you may remember the principle of the fulcrum: a far lighter weight can lift a heavier weight, across a fulcrum, if the balancing board between the two weights is longer on the side of the lighter weight. Thus, a toddler can hold his heavier teenage sibling up in the air this way, simply by placing the fulcrum far away from his own end of the balancing board, and much closer to the end at which his sibling is sitting. Think of the fulcrum as being the “sun”, and the distance of balancing board between it and the end of the shorter length of board concerned as being the “perihelion” of an orbit, with the longer distance being the “aphelion.” The principle is that a heavier weight, X times the weight of the lighter, exactly balances the lighter weight when its distance from the fulcrum is 1/X the lighter weight’s distance from the fulcrum. Thus, for example, were the heavier weight twice the weight of the lighter, it would have to sit at 1/2 the lighter’s distance from the fulcrum. Or, put otherwise, the product of the heavier weight and its distance from the fulcrum must equal the product of the lighter weight and its distance from the fulcrum, if they are to balance out. (X times 1/X = 1/X times X.)
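For readers who like to check such relations numerically, here is a minimal sketch of the balance condition in Python (the weights and distances are made-up illustrations, not anything from Archimedes):

    def balances(w1, d1, w2, d2, tol=1e-9):
        """True if weight w1 at distance d1 balances weight w2 at distance d2."""
        return abs(w1 * d1 - w2 * d2) < tol

    print(balances(4, 1, 1, 4))   # True: the weight 4 sits at 1/4 the distance
    print(balances(4, 2, 1, 4))   # False: moved too far out, the balance fails

Behold, now, how Archimedes used the principle of the fulcrum in his discovery: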

First, imagine the cut, line M2M, in our diagram, passing vertically through the exact center of the construction, from the center of PC to the center of QF. This line passes exactly through the center O of the sphere, and also intersects both the sphere and cone at two points, M3 and M4. Line OM3 is the radius of both the cone and the sphere at this particular point, and is also the radius R of the sphere. If R is one, then the AREAS of the two corresponding circular slices of the cone and sphere will each equal π, so that their sum is 2π. The CYLINDRICAL cut, however, creates a circle of radius 2R, and hence an area of 4π. Archimedes now asks: if the SUM of the areas of the conical and spherical cuts is 2π, thus exactly HALF the area of the cylindrical cut, and if we treat the relative areas as WEIGHTS, where will these differing weights have to be placed to balance each other out? Leaving the cylindrical cut where it is, at distance R from A, he places a fulcrum at point F (the corner directly below A, so that lateral distance from F is the same as distance from A), and moves the areas of the conical and spherical cuts TWICE R, or D, to the other side of the fulcrum, to point G!
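Restating this first balance numerically, as a sketch (the unit radius is just a convenient choice):

    from math import pi, isclose

    R = 1.0                       # radius of the sphere
    cone_cut   = pi * R**2        # the conical cut at the center has radius R
    sphere_cut = pi * R**2        # so does the spherical cut
    cyl_cut    = pi * (2*R)**2    # the cylindrical cut has radius 2R = D

    print(isclose(cone_cut + sphere_cut, cyl_cut / 2))          # True: half the area
    print(isclose(cyl_cut * R, (cone_cut + sphere_cut) * 2*R))  # True: moments match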

Now, project from any ARBITRARY point X along axis AB a perpendicular which extends upwards to line PC, intersecting this line at point X2. This line intersects the sphere at Xs and the cone at Xc. We know, from Jonathan Tennenbaum’s pedagogical on Archytas, that the line XXs, which is the radius of the circular cut of the sphere at this point, is the geometric MEAN between line XA and line XB. Thus, it follows that length XA/XXs = XXs/XB. Now, imagine point X, on the axis AB, as it travels from A to B. As its distance from A widens, the cylinder’s radius XX2, and, hence, the corresponding WEIGHT of the cylindrical cut, remains constant, since all cylindrical cuts in the construction have the same radius. However, its DISTANCE XA FROM A is increasing, so that the weight needed to balance it out must either increase, or pass further and further away from the fulcrum. Let us now look more closely, then, at the SUM of the areas of the spherical and conical cuts, the counterbalancing weight concerned:

Since all conical cuts in the construction result from an isosceles right triangle, it follows that, for any such cut, the corresponding vertical radius (the line XXc in our diagram) will always equal that cut’s distance XA from A, and, hence, its lateral distance from F. Henceforth, let us call the line XA, which is the same as the radius of the conical cut, “MINOR.” Similarly, we will call the line XB “MAJOR,” and the line XXs the geometric “MEAN” of these two extremes. (Don’t be confused by the fact that MINOR becomes longer than MAJOR once X crosses point O; rather, think of it as being the first of the two extremes. We could also call the two “origin” and “destination,” for example.) The sum of the proportional areas of the conical and spherical cuts will thus be (MINOR times MINOR) + (MAJOR times MINOR), the spherical cut’s proportional area being such because MEAN times MEAN will always equal MAJOR times MINOR. Since the two proportional areas have a common factor, namely MINOR, and since the sum of lines MINOR and MAJOR is the DIAMETER D of the sphere, the sum of the proportional AREAS of the conical and spherical cuts must thus always be D times MINOR. Meanwhile, the proportional area of the cylindrical cuts, as we noted, remains D times D, since D is the radius of all cylindrical cuts.
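Since π multiplies every area alike, it divides out of all the balances, which is why the “proportional areas” suffice. Here is a sketch checking the MEAN relation and the area sum for a few arbitrary cuts (taking D = 2, so that R = 1):

    from math import isclose, sqrt

    D = 2.0                          # diameter of the sphere; R = 1
    for x in (0.3, 1.0, 1.7):        # arbitrary positions of point X along AB
        minor = x                    # XA, also the radius of the conical cut
        major = D - x                # XB
        # radius of the spherical cut, read off the circle of diameter D:
        mean = sqrt((D/2)**2 - (x - D/2)**2)
        assert isclose(mean * mean, major * minor)           # MEAN^2 = MAJOR x MINOR
        assert isclose(minor*minor + mean*mean, D * minor)   # cone + sphere = D x MINOR
    print("checks out for all three cuts")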

As we recall, for the two weights to balance each other, the weight of the cylindrical cut times its distance from A (hence its lateral distance from F) must equal the sum of the weights of the conical and spherical cuts times their lateral distance from the fulcrum, on the other side. So, where does any particular pair of conical and spherical cuts balance out their corresponding cylindrical cut, assuming that we leave the cylindrical cut in its original position, at distance MINOR from A? We know that DD times MINOR must equal some UNKNOWN DISTANCE times (D times MINOR). What is that unknown distance? Indeed, it is D, since we can clearly see that

DD times MINOR = D times (D times MINOR)! And, this must be true for ALL cases, since the sum of proportional areas of the spherical and conical cuts is ALWAYS (D times MINOR), as we showed above! Thus, FOR ALL CASES, THE SUM OF WEIGHTS OF THE CONICAL AND SPHERICAL CUTS WILL BALANCE OUT THEIR CORRESPONDING CYLINDRICAL CUT’S WEIGHT AT A DISTANCE D FROM THE FULCRUM!!

Now, since this is true, it follows that, were we to make all of the infinitely many possible cuts through the construction, and thus encompass its entire VOLUME, we would end up with the cylinder remaining exactly in its original position, with all its weight focused at its center, namely the point directly beneath the center O of the circle, or point M on our diagram. This point is at distance R, or (1/2)D, from the fulcrum. Meanwhile, the entire volume of both the cone and sphere would be squashed together into a PLANE, a circle balanced at point G, which is at distance D from the fulcrum. This balancing ratio of D:(1/2)D tells us that the weight of the SUM OF VOLUMES of the cone and sphere, contained within this squashed-up plane, must be exactly HALF that of the cylinder! But we are trying to find the weight, and, hence, the VOLUME of the sphere. Archimedes already knew (from Eudoxus, I believe) that the volume of the cone was 1/3 that of the cylinder encompassing it. Thus, if the total volume of the cylinder were 1, then the volume of the cone would have to be 1/3, while the volume of the sphere would have to be that which, when added to 1/3, yields 1/2 of 1. Thus, the volume of the sphere must be 1/6 that of the cylinder.
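The infinitude of cuts can be imitated with a finite sum. The following sketch (my own check, not Archimedes’ procedure) slices the construction into 100,000 thin disks, confirms that the two moments agree, and recovers the 1/6:

    from math import pi

    D  = 2.0                # diameter of the sphere, so R = 1
    N  = 100_000            # number of thin slices
    dx = D / N

    moment_cyl   = 0.0      # cylindrical slices left in place, each at distance x
    weight_moved = 0.0      # conical + spherical slices, all hung at distance D
    for i in range(N):
        x = (i + 0.5) * dx                            # midpoint of the slice
        moment_cyl   += (pi * D**2) * dx * x          # area x thickness x distance
        weight_moved += pi * (x*x + x*(D - x)) * dx   # cone + sphere slice volumes

    print(moment_cyl, weight_moved * D)    # the two moments agree, up to rounding

    vol_cylinder = pi * D**2 * D           # radius D, height D
    vol_cone     = vol_cylinder / 3        # known to Archimedes, via Eudoxus
    vol_sphere   = weight_moved - vol_cone # what remains of the squashed-up weight
    print(vol_sphere / vol_cylinder)       # approximately 1/6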

Archimedes took this one step further, by noting that the cylinder which exactly encompasses the sphere would be of diameter D, instead of 2D, but would have the same HEIGHT, and, hence, would have a volume 1/4 that of the larger cylinder he had used in his construction. Thus, the sphere would have a volume of FOUR times 1/6, which is 4/6, or 2/3, that of the cylinder which encompassed it. However, since the volume of a cylinder is the area of its base times its height, and since the height of the cylinder encompassing the sphere is 2R, this cylinder’s volume would be (πR^2)(2R). Since the sphere must be 2/3 of THIS volume, it is (4/3)πR^3. Hence the famous solution for the volume of a sphere.
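As a last check on the arithmetic of this final step, a symbolic sketch (assuming the sympy library is available):

    from sympy import Rational, pi, simplify, symbols

    R = symbols('R', positive=True)
    small_cylinder = pi * R**2 * (2*R)         # base area times height 2R
    sphere = Rational(2, 3) * small_cylinder   # the sphere is 2/3 of it
    print(simplify(sphere))                    # 4*pi*R**3/3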

Now, ask yourself this: Did Archimedes figure this out by “tinkering,” by playing around with different shapes, until the right arrangement popped out, “by magic,” or, rather, did he find this particular construction BECAUSE he was proceeding from a principle? For example, imagine the following possibility: Since he knew that MEAN times MEAN was equal to MAJOR times MINOR, might he have then asked which form would create the circumstance wherein its cut would equal MINOR times MINOR, and seen instantly that this must be an isosceles right triangle, hence a section of the particular cone he used in the construction? Similarly, might he have then asked which cylinder, when coupled with this arrangement, led to a certain lawful result? Or might it have been something similar to this, perhaps far more elaborated?

Now, is that beautiful, or what?

Greece: Child Of Egypt, Pt. I

Lyndon LaRouche recently described classical Greece as the “child of Egypt.” The great figures of the sixth century B.C., Solon, Thales and Pythagoras, were, in fact, the children of Egypt, each having travelled to Egypt and studied under the Egyptian astronomer- and geometer-priests. Through them, and others, Egypt transmitted a science — a method of knowing the universe which has reached its current height in the works of Gauss, Riemann and LaRouche. Yet, the role of Egypt in relation to science, astronomy and mathematics has been almost universally rejected by modern historians of science, as the following samples show:

” … looking at Egyptian mathematics as a whole, one cannot escape the feeling of disappointment at the general mathematical level. … Babylonian mathematics … did supply a basis for Greek mathematics. … We do not need to set up a hypothesis concerning a lost Egyptian higher mathematics.” from Science Awakening, van der Waerden

” … mathematics and astronomy played a uniformly insignificant role in all periods of Egyptian history … mathematics and astronomy had practically no effect on the realities of life in ancient civilizations.” from The Exact Sciences in Antiquity, Neugebauer

” … The Greeks owed much more to the Babylonians than to the Egyptians.” from Greek Astronomy, Heath

Nor will one find much literal evidence of Egypt’s role in these fields in the available ancient writings. Only a few mathematical-scientific papyri have been discovered, most dating from Egypt’s Middle Kingdom (2000-1800 B.C.), and none from the great Pyramid Age of the Old Kingdom. Of Pythagoras, the central figure in this transmission, there are no extant writings. Nor are there any from other Pythagoreans of his generation.

But, if you look with your mind, instead of with your senses, the evidence is abundant.

A comparison of a passage from Kepler to one from Plato begins the journey. Kepler, in the introduction to Book 5 of the “Harmonice Mundi,” pays homage to the importance of Egypt: “I am free to taunt the mortals with the frank confession that I am stealing the golden vessels of the Egyptians, in order to build of them a temple for my God, far from the territory of Egypt. If you pardon me, I shall rejoice; if you are enraged, I shall bear up. The die is cast and I am writing this book — whether to be read by my contemporaries or not. Let it await its reader for a hundred years, if God himself has been ready for his contemplator for six thousand years.”

Kepler is echoing a passage from Plato’s “Laws,” in which Plato, in the person of the Athenian Stranger, cites the same Egyptian golden vessels: “Then there are, of course, still three subjects for the freeborn to study. Calculations and the theory of numbers form one subject: the measurement of length and surface and depth make a second; and the third is the true relation of the movement of the stars to one another … Well then, the freeborn ought to learn as much of these things as a vast multitude of boys in Egypt learn along with their letters… The boys should play with bowls containing gold, bronze, silver and the like mixed together, or the bowls may be distributed as wholes.”

What is the subject of this boys’ play? The incommensurable, as the Stranger elaborates next. In questioning Cleinias, he establishes that Cleinias believes he knows what is meant by “line,” “surface,” and “volume.” Then:

“Ath: Now does not it appear to you that they are all commensurable (measurable) one with another?

Clein: Most assuredly.

Ath: But suppose this cannot be said of some of them, neither with more assurance nor with less, but is in some cases true, in others not, and suppose you think it is true in all cases: what do you think of your state of mind in this matter?

Clein: Clearly, that it is unsatisfactory.

Ath: Again, what of the relations of line and surface to volume, or of surface and line one to another; do not all we Greeks imagine that they are commensurable in some way or other?

Clein: We do indeed.

Ath: Then if this is absolutely impossible, though all we Greeks, imagine it possible, are we not bound to blush for them all as we say to them: Worthy Greeks, this is one of the things of which we said that ignorance is a disgrace?”

With this brief section of the “Laws,” Plato has given us the essence of the “who” and the “what” behind the development of classical Greece: the “who” is Egypt, the “what” is a geometrically-grounded mathematics, for which the questions involving the incommensurable were primary. Plato unpacks the various paradoxes which deal with the incommensurable in the Meno, the Theaetetus, and the Timaeus.

Most readers will be familiar with Plato’s “introduction” of the problem in the Meno, that the diagonal of the square is incommensurable with its side. That Socrates is threatened for his method, in the course of the dialogue, by Anytus (who later helps precipitate his trial and execution), perhaps foreshadows Kepler’s recognition that some will be “enraged” by such ideas.

But it is in the Theaetetus and the Timaeus that Plato establishes, directly, the debt to Egypt. The Theaetetus begins to introduce the necessary concept of “power” or dunamis. The power which creates a square or a cube is an action in the universe, an action knowable to the mind, but not reducible to the sense-certainty numbers of the visible domain. The two characters in this dialogue, besides Socrates, are two real geometers who made fundamental breakthroughs. The older of the two, Theodorus, comes from the Greek-Egyptian city of Cyrene, a city on the western edge of Egypt, and dominated by the Temple of the Egyptian god, Zeus Ammon. Theodorus is the teacher of the young Theaetetus, who goes on to discover the uniqueness of the five Platonic solids.

In his masterwork, the Timaeus, Plato is even more direct in identifying Greece’s debt to Egypt. Plato opens the dialogue by having Critias tell of Solon’s trip to Egypt and his instruction by the priests of Heliopolis. When the priests chide Solon that the Greeks are children, and have no knowledge of ancient things, they tell Solon that Egyptian knowledge and civilization extend back 9000 years (hence, to 9600 B.C.). With that introduction, Plato unfolds his composition on the universe, in a very Pythagorean discussion of astronomy, harmony and geometry.

Indeed, Pythagoras was the key figure in the transmission of Egyptian knowledge to Greece. The sixth century B.C. was the century of Solon, Thales and Pythagoras, and was the century in which the leadership in this method of thinking passed from Egypt to Greece. Iamblichus, a third-century A.D. biographer of Pythagoras, wrote that it was Thales, the Ionian scientist, who deployed Pythagoras to Egypt:

“When he had attained his eighteenth year, there arose the tyranny of Polycrates: and Pythagoras foresaw that under such a government, his studies might be impeded … So by night he privately departed (from the island of Samos) … going to Pherecydes, to Anaximander the natural philosopher and to Thales at Miletus…. After increasing the reputation Pythagoras had already acquired, by communicating to him the utmost he was able to impart to him, Thales, laying stress on his advanced age, advised him to go to Egypt, to get in touch with the priests of Memphis and Zeus (priests of Ammon, ed.). Thales confessed that the instruction of these priests was the source of his own reputation for wisdom, while neither his own endowments nor achievements equalled those which were so evident in Pythagoras. Thales insisted that, in view of all this, if Pythagoras should study with those priests, he was certain of becoming the wisest and most divine of men…. He (Pythagoras) visited all of the Egyptian priests, acquiring all the wisdom each possessed. He thus passed twenty-two years in the sanctuaries of the temples, studying astronomy and geometry, and being initiated in no casual or superficial manner in all the mysteries of the Gods.”

Working back from Plato’s various identifications of Egypt as the wellspring of a geometrical, astronomical and harmonic tradition embedded in the study of incommensurables, to the history of the sixth-century B.C. travels and studies of Solon, Thales and Pythagoras, one might ask van der Waerden and his cothinkers why they think that Egyptian higher mathematics is either “lost” or non-existent. Perhaps, as Kepler suggests, it is the rage induced by living inside a reductionist’s mind, which can only see the shadows cast on the cave wall.

A future pedagogical will “let the stones speak” of Egyptian astronomy.

On Polygonal Numbers [; And So On]

Larry Hecht

Diophantus, who lived probably around 250 A.D., wrote a book called {On Polygonal Numbers,} of which only fragments remain. One of the famous fragments refers to his work on a definition by Hypsicles, an earlier Greek mathematician, concerning polygonal numbers. Working out what Diophantus means in this short fragment proves quite interesting, and relevant to the topics we have been discussing in this series. I will first give you a translation of the fragment from Diophantus. Don’t worry if it seems incomprehensible at first. We will construct it, and then it will all be quite clear.

Diophantus writes:

“There has also been proved what was stated by Hypsicles in a definition, namely, that `if there be as many numbers as we please, beginning from 1 and increasing by the same common difference, then, when the common difference is 1, the sum of all the numbers is a triangular number; when 2, a square number; when 3, a pentagonal number. The number of angles is called after the number which exceeds the common difference by 2, and the sides after the number of terms including 1.'”

To understand what he means, let’s take the most familiar case, that of the square numbers. Most books discussing this subject (sometimes referred to as the “figurate numbers”) draw dots as illustration; but there is a flaw in this, which you will understand after we have done the complete construction. It is far better to find some square objects, or cut them out of paper. Using these square tiles as the units, you will discover that only certain numbers of tiles go together into squares. The first grouping is 1, the second 4, and the third 9. But you should construct this for yourself, for it is already telling you something important about a certain kind of bounding condition, which interested Kepler very much.

Now, cut out some equilateral triangles and do the same thing–that is, make triangular numbers. This is a little less familiar, so I will illustrate how to count in triangles for you:

     /\     1                  /\/\/\/\    4

    /\/\    2                  /\/\/\/\/\  5

     /\                          /\
    /\/\    3 (2-triangled)     /\/\       6 (3-triangled)
                               /\/\/\

You see that the first three triangular numbers are 1, 3, and 6. These have sides of lengths 1, 2, and 3, just as the first three square numbers (1,4,9) do. You might notice that there are also holes in these numbers, which the squares did not have. There is no need to worry about them. You will see by the end, why they must be there.

Finally, we come to the pentagonal numbers. Now, you must cut out at least 5 equal pentagons, although 12 would be better. Here the fun began for me: to figure out what 2-pentagoned would look like. As I don’t want to spoil it for you, I will not say right here, but let you pause and puzzle over the construction a bit. For now, I will give you the numerical values: the first three pentagonal numbers are 1, 5, and 12.

Now, it is easy to see from these constructions what Hypsicles had discovered, and described in words. We can illustrate it in the following series:

Triangular numbers (common difference = 1)
	Series: 1  2  3  4   5   ...
	Sums:      3  6  10  15  ...

Square numbers     (common difference = 2)
	Series: 1  3  5  7   9
	Sums:      4  9  16  25

Pentagonal numbers (common difference = 3)
	Series: 1  4  7  10  13
	Sums:      5  12 22  35

In each case, we start with one, and increase by the common difference characteristic of the series. The sum of the numbers in the series is the number of tiles we had to employ to make the triangular, square, or pentagonal numbers.
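Hypsicles’ definition is easy to state as a procedure. Here is a minimal sketch in Python (the function name and layout are mine, not Diophantus’):

    def polygonal(sides, terms):
        """The first `terms` polygonal numbers with the given number of sides,
        built exactly as Hypsicles says: start from 1, and keep adding terms
        that grow by a common difference of `sides` minus 2."""
        difference = sides - 2
        numbers, term, total = [], 1, 0
        for _ in range(terms):
            total += term
            numbers.append(total)
            term += difference
        return numbers

    print(polygonal(3, 5))   # triangular: [1, 3, 6, 10, 15]
    print(polygonal(4, 5))   # square:     [1, 4, 9, 16, 25]
    print(polygonal(5, 5))   # pentagonal: [1, 5, 12, 22, 35]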

This may all seem innocent enough, but there is a “fighting” matter of epistemology buried within. It is the same point which Gauss addresses from a more advanced standpoint, in his refutation of Euler, Lagrange and d’Alembert’s attempts to prove the Fundamental Theorem. Namely, do we accept any notion of number, or operations on number, that is not constructible, or subject to “constructible representation” (as Gauss once described the same issue respecting a matter in physics)? It is not only a fighting matter for us. Our enemies also get very upset over the issue. I recognized how much so, after I contemplated why the translator of the Loeb Classical Library Edition {Greek Mathematical Works, II} felt it necessary to add the bracketed phrase “[; and so on]” following the words “when 3, a pentagonal number” in the citation from Diophantus that I gave above. If Hypsicles or Diophantus had wished to say “and so on,” why would they not have done so? Sir Thomas Heath, the leading British commentator on these matters, finds it a shortcoming that Hypsicles had not gone further than the pentagonal number, and claims that what Hypsicles was really showing was how the n-th term of a series, with any common difference, could be determined.

Yet, anyone who has properly considered the significance of the Platonic solids, and stuck to the principle of mathematical rigor employed by both Gauss and his Greek predecessors, would immediately recognize why Hypsicles stopped at the pentagon. What is being considered is not a math-class game of number series, which seem to go on forever to a bad infinity, but a process of examining the lawful constructibility of number. There is a clue to this also in the {Theaetetus} dialogue of Plato, which had been in the back of my mind, as I wondered what was getting Heath and company so worked up. Consider how Theaetetus describes there, in his examination of the problem of incommensurable numbers, the generation of the numbers 1, 2, 3 as the sides of the square numbers 1, 4, 9. He calls the numbers 1, 2, 3 “powers” (where we were taught to call them “roots”), because they have the “power” to generate squares, the singularity under examination in this case. The point in both cases, is that number must be lawfully constructed, and it is obvious that Hypsicles was doing so by examining the paradoxes generated by the Platonic solids.

So, let us now see what happens, if we take these polygonal numbers into the next dimension. The case most familiar to us is that of the square turning into a cube. Thus 1-cubed is 1, 2-cubed is 8, and 3-cubed is 27. (Remember, we are not doing a multiplication table operation, but a construction.) What, then, is the equivalent construction for the other polygons? We can see the case for the triangle most easily, if we now build ourselves four tetrahedra (that is, the Platonic solid made of four equilateral triangles), using triangles of the same size as those we cut out for the construction of the triangular numbers. Construct again the triangular number three, and place a tetrahedron atop each of those triangles. Then, place one more tetrahedron at the summit. Examining the solid so constructed, you will see that it has sides of length 2 in every direction–hence we have constructed 2-tetrahedroned. You can figure out for yourself, what 3-tetrahedroned would be [; and so on].
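If the counting rule of our construction holds at every size (an assumption you should check against your own model), the tetrahedral numbers are simply running sums of the triangular numbers, just as each polygonal number was a running sum of its series. A sketch:

    from itertools import accumulate

    def triangular(terms):
        """Running sums of 1, 2, 3, ...: the triangular numbers."""
        return list(accumulate(range(1, terms + 1)))

    def tetrahedral(terms):
        """Running sums of the triangular numbers: each new layer of the
        stack is the next triangular number of tetrahedra."""
        return list(accumulate(triangular(terms)))

    print(triangular(4))     # [1, 3, 6, 10]
    print(tetrahedral(4))    # [1, 4, 10, 20]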

You might have noticed that there was a hole on the inside of the figure 2-tetrahedroned. That space in there turns out to be an octahedron, by the way. We also had those holes in the plane when we built the triangular numbers. This is telling us something interesting about the tiling of the plane, and the filling of space. Among the regular polygons, only triangles, squares and hexagons can tile the plane (and the triangles only by alternating their orientation, which is why our triangular numbers had holes); and only cubes and rhombic dodecahedra, among the regular (or quasi-regular) solids, can fill space without gaps, which you can investigate for yourself, as Kepler did to his great delight. If you try to tile the plane with pentagons, you notice that when three come together at a point, they do not fit exactly: a gap remains, which cannot be closed without overlapping. That is the key to constructing the figure 2-pentagoned, which I left for you to figure out earlier. You must break the unwritten rule in your mind, and allow yourself to overlap the sides.

Now, if tetrahedra do not quite fill space, but leave gaps, and cubes just manage to fill it up, you might expect that dodecahedra would go too far, and overfill it, just as the pentagons overtiled the plane. If you have now constructed your 2-pentagoned figure, with the overlapped sides, you can try your luck at placing a dodecahedron atop each of the five overlapped pentagons, and another dodecahedron atop each of these, to produce the number 2-dodecahedroned. You will see that, just as the pentagons had to overlap, so the dodecahedra must overlap, or interpenetrate, and so the figure 2-dodecahedroned will be of a different type than the cubic or tetrahedral numbers.

Those of you who know why there cannot be more than five regular solids, will now see why Hypsicles stopped at the pentagon. For, while the series with increasing common differences can be extended out to a bad and boring infinity, the interesting paradoxes are not going to arise, unless we have a concept of a constructive process for these numbers.*

Before closing, and since you have all the materials at hand, let us review why there can be only five regular solids. It is a famous proof, given by Kepler. The regular solids are, in fact, the plane projections of the figures produced by tiling the sphere, and there are five ways to do it. As plane-faced solids, they must have regular polygons for their faces, so the problem is reduced to great simplicity by considering only how many of these figures may come together at a vertex. Start with the equilateral triangle. Three of these may be joined at a point, and brought together into a sort of cup, so that they could hold water. This will become the vertex of the tetrahedron. Four triangles may also be brought together, and cupped; they will form the vertex of the octahedron, which looks like two Egyptian pyramids brought base to base. Five triangles may also be brought together and cupped; they form the vertex of the 20-sided icosahedron. However, when six triangles are brought together, it is seen that they just lie flat, and cannot be made into a vertex of anything solid. Next, we try the square, and find that three can be brought together and cupped into a vertex of what becomes the cube. But four are too many; they lie flat. Three pentagons lying in the plane, and joined at a vertex, leave just enough space to be cupped into a vertex of what becomes the dodecahedron. But that is the end of the possibilities, for if we next take a regular hexagon, we find that when three are brought together at a point, they simply lie flat and cannot become the vertex of any solid.
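The whole enumeration compresses into a few lines; here is a sketch of the vertex-angle count just described, and nothing more:

    # For each regular polygon, the interior angle is 180*(p-2)/p degrees.
    # At least three faces must meet at a vertex, and they can be "cupped"
    # into a solid vertex only if their angles sum to less than 360 degrees.
    for p in range(3, 8):                  # triangle through heptagon
        angle = 180 * (p - 2) / p
        q = 3
        while q * angle < 360:
            print(f"{q} faces of {p} sides (angle sum {q * angle:.0f} degrees)")
            q += 1

Running it prints exactly five lines: the tetrahedron, octahedron and icosahedron (3, 4, 5 triangles), the cube (3 squares), and the dodecahedron (3 pentagons).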

So, you see, there is no “[and so on],” as Hypsicles and Diophantus appear to have understood better than their modern commentators of Oxford erudition. Avoiding “[and so on]” is also good advice for your speaking practice–that you not recite a series of things in sing-song fashion, as so many people do these days, as if there were no lawful cause for their being there. This is nominalism in language, as the idea of number without constructibility is nominalism in mathematics. It is part of the same disease, which we are trying to cure.

—————————–

* Of such interesting paradoxes, you might consider, as a topic for more advanced consideration, that a prime number is a constructible species in the series of numbers constructed using squares as the tiles. Following Theaetetus’s specification that we allow only square or oblong (rectangular) numbers, a prime is a number which can only be represented as a rectangle of width 1. What, then, is a prime number in the triangular or pentagonal series? What else is peculiar about the square and rectangular numbers?

Pierre de Fermat became quite fascinated with the polygonal numbers, and discovered many things about their properties of combination. His famous Last Theorem might be seen as an investigation of the constructible properties of solid numbers of the square, cubic, and higher variety.

The Crab Nebula And The Complex Domain

By Jonathan Tennenbaum

It is a fair guess, that the Crab Nebula will play a comparable role, in a coming series of revolutions in astrophysics, to that of Mars’ anomalous motion in Johannes Kepler’s launching of modern astronomy four centuries ago. The anomalies of the Crab Nebula confront us directly with the issue of the interrelationship between the Sensorium of perception and the manifold of efficient physical principles, that Lyn addressed in his recent paper on “Visualizing the Complex Domain”. Ironically, any person who has mastered Lyn’s paper, will be incomparably better qualified, to grasp the fundamental question posed by the Crab Nebula, than 99% of today’s professional astrophysicists!

The present state of astronomy and astrophysics exemplifies the way empiricism has killed science. The Platonic method of hypothesis, upon which Johannes Kepler founded the science of astrophysics, has been suppressed. Instead, “scientific method” is fraudulently equated with the practice of interpreting and “explaining” data according to the supposedly authoritative “laws of (textbook) physics”. Thereby, the contemporary astrophysicist degrades himself to the level of an animal, that interprets sense perceptions according to blind instinct. The equivalent of animal instinct, which controls the afflicted scientist’s mind in this case, is adherence to “accepted norms of academic performance”, engrained in the student through drilling in the methods for “getting the right answer”, and enforced among professionals by fear of being ostracized from the “scientific community”.

Moreover, astrophysics has been perverted by the monstrous concoction known as “modern cosmology”, with the imposition of arbitrary, ivory-tower doctrines, such as the “Big Bang”, that have no basis in the actual astronomical evidence. Continuing in the line of the bogus, entropic “theories” of Laplace and Kant, the “Big Bang” and related fairy-tales work as a cover for a veritable inquisition against original scientific inquiry, and a suppression of scientific evidence, that rivals the Dark Ages of Aristotle and Ptolemy.

The resulting, “official” doctrine of the Universe, admits no true principles, no generation of ideas, and therefore no possibility for Man to transform the Universe. It is expressly designed to make people feel tiny, impotent and morally indifferent, as Kant recommended in his monstrous 1755 treatise, “General Natural History and Theory of the Heavens, or Attempt to Account for the Nature and Mechanical Origin of the Entire Universe according to Newtonian Principles.” By imposing a false, Euclidean-Cartesian projection of reality in terms of a supposed primacy of scalar extension, galaxies and other astronomical objects — which in fact are located at “virtually zero distance” from us in the causal ordering of the Universe — were made to appear “hopelessly far away” and inaccessible to Man. By the same token, human Reason and Man’s own activity in the Universe, were made to appear as if totally irrelevant, and Man himself to shrink to almost nothing, relative to the inhuman vastness of thousands, millions or billions of light years, that supposedly characterizes the “objective” Universe around us.

But, the situation is overripe for revolution. Ironically, while the process of creative hypothesis-formation in astrophysics has all but collapsed, major advances have occurred in the technology of astronomical observation, exemplified by the development of X-ray and gamma-ray telescopes; the stationing of telescopes and other astronomical instruments in orbit; and the advent of “very long baseline” interferometry, creating the effect (“synthetic aperture”) of a radio telescope the size of the Earth’s diameter. These technological advances have led to an unprecedented proliferation in the number and variety of astrophysical anomalies, just waiting for a new Johannes Kepler to appear on the scene, and to liberate science from the chains of the Enlightenment.

Will such a new Kepler emerge from the ranks of a victorious LaRouche Youth Movement? Many Keplers, we should hope! In the meantime, the following introduction to the Crab Nebula should provide a foretaste of the delights in front of us.

A first look at the Crab Nebula

The Crab Nebula (M1 in the classification of Messier) is located in the night sky in the constellation Taurus. While not directly visible to the naked eye, it appears in low-power telescopes as an approximately elliptically-shaped, luminous cloud, whose long axis describes an angle of 5 minutes of arc “on the celestial sphere”. The apparent minor axis of the Crab is about 3 minutes of arc across (1).

The Crab Nebula was first noted around 1731, as an oval-shaped nebulous patch in the sky. By the middle of the 19th century, with the rapid improvement of telescopic instruments, a complex of irregular filaments became visible within the nebula, inspiring the name “the Crab”. In the course of the 20th century, the region of the Crab Nebula was found to be a powerful source of radiation, spanning practically the entire known electromagnetic spectrum — from radio waves, microwave and infrared radiation, across the visible spectrum, all the way through the ultraviolet and X-ray ranges, into the domain of ultra-short-wavelength gamma-rays (“cosmic rays”).

In the meantime, the development of astrophysical instrumentation has made it possible to register the emissions from the Crab Nebula over a large section of the above-mentioned “registers” of electromagnetic radiation, mapping the intensity (and sometimes polarization and spectral characteristics) of the radiation received, in the given wavelength interval, as a function of direction on the celestial sphere. The result is a growing array of images, all covering the same angular area on the celestial sphere, but differing very greatly from one electromagnetic “register” to the other, and also changing in time in a most extraordinary fashion.

Figure 1 is a recent, very beautiful photograph of the Crab in the visible wavelength-range, taken by the European Southern Observatory.

Figure 1

Figure 2 shows a set of four images, made with visible, infrared, radio frequency and X-ray radiation — all totally different! (Note: the images are not all on the same scale.)

Figure 2

Seeing with your mind, not just with your eyes

The contrast between the images immediately raises the question: If the “real” object is not any one of those images — projected, as it were, on the extended Sensorium of astronomical instruments — then what kind of object is it, that we are observing? The question prompts a brief aside, before getting on with the Crab.

Take a very simple illustration from everyday life. You report to someone, “I saw Jonathan today”. This statement could mean different things, and be truthful or untruthful, depending on how you intended the verb, “saw”. If “to see” meant nothing more than an act of visual perception, then the statement could not possibly be true; since for sure you perceived only Jonathan’s face, not the actual person! For, a human personality is not a visible object! A human personality — a mind — can be recognized and known only to the cognitive processes of another mind. The report, “I saw Jonathan”, could only be truthful, if the verb, “saw”, subsumed a cognitive process in your mind, by which you identified and conceptualized a specific human personality, lying “behind” the image of the face and other effects, your sense perception reported to you.

What then is the character of the object of astronomical observation? What does today’s astronomer have in mind, when he says “I have been observing the Crab Nebula”? Does he simply mean, that he has pointed his telescope or other instruments at a certain luminous smudge in the heavens, and registered certain signals? No doubt, the astronomer means more than that. He will claim he was observing a “real object out there”. But, what sort of object does he have in mind? How could he demonstrate that “it” actually exists, as an efficient entity in the Universe, in the way he thinks it does?

From the standpoint of naive sense-certainty, the modern astrophysicist does not “observe” the Crab Nebula in any direct sense (the luminous smudge is anyway not directly visible to the naked eye!) What he observes, is something happening to certain physical systems, called scientific instruments, which the astronomers have developed as “generalized sense organs”. The action, the “happening”, is occurring at the location of the instruments, not at the putative location of the Crab Nebula, many light-years away! Sometimes the events the astronomer studies, are nothing but certain harmonic correlations of phase among signals generated in a network of instruments. Yet he ascribes these events to that remote, unseen object: the Crab Nebula!

All this underlines the fact, that there is no simple, self-evident relationship between the processes of perception or “observation”, and “the object itself”. That relationship depends exclusively on the cognitive powers of the human mind, to adduce the existence of thought-objects, in the form of principles lying beyond the reach of mere sense perception, and to demonstrate their efficiency over the phenomena of the Sensorium.

The Crab in the Sensorium of “multiwavelength astronomy”

Now return to the images themselves. Figure 3 shows photographic images of the Crab Nebula, taken in blue and red wavelengths of visible light. Note the complex fabric of filaments, which appears in red light, but is virtually absent in blue. That outer, filamentary “cocoon” of the Crab displays characteristic, sharply defined spectral lines, differentiating it from an inner core, whose emission is continuously distributed over the entire electromagnetic spectrum.

Figure 3

Figure 4 is a closeup of the filaments, taken with the Hubble orbital telescope in visible wavelengths. The highly-organized morphology of the filaments is astonishingly reminiscent of certain types of living tissue.

Figure 4

Figure 5 shows the Crab photographed in the visible light range, but with filters selecting light with different angles of polarization. The striking difference in overall strength between the two images shows that the whole, gigantic system possesses a strong axis of polarization. Presumably, the Nebula as a whole is powerfully magnetized.

Figure 5

Most extraordinary, Figure 6 shows an image produced in the shortest of the four wavelength-ranges, X-rays, by the orbiting X-ray telescope Chandra during 1999-2000. The X-ray-emitting region appears to coincide with the core of the Crab Nebula. It is organized around a clearly-defined axis, which coincides roughly with the long axis of the Crab as a whole, as well as with the axis of polarization; it has a toroidal structure with smaller concentric rings, and a variety of rapidly-changing features.

Figure 6

Figure 7 shows a superposition of photos taken in the visible and X-ray wavelengths, from which you can see the location and proportions of the X-ray-emitting “core” relative to the Crab Nebula as a whole (here the outer filaments do not show). This can also be seen in Figure 1, where the outer filaments come out strongly, and the core is faintly visible.

Figure 7

Located on the axis of symmetry of the X-ray torus, in the middle of the entire structure, lies a highly anomalous object, key to the entire Crab Nebula: a star that emits repeated, powerful pulses of electromagnetic radiation, in almost the entire spectrum from radio to gamma-rays, at a precise rate of 30 pulses per second! The stills and time-lapse movie of this “Crab pulsar” show huge jets of magnetized, X-ray-emitting plasma, flowing outward from what are presumably the polar regions of the star, along the axis of the torus in both directions, then curving off on both sides as if to form an “S”-shape, and perhaps continuing out into the outer tissue of filaments.

Again, the harmonic features of the X-ray core region, plus its striking left-right dissymmetry, are coherent with Leonardo’s and Pasteur’s observations on the characteristic morphology of living processes.

The reader should now view the time-lapse movie of the X-ray region of the Crab, which can be downloaded from the website: http://chandra.harvard.edu/photo/2002/0052/

The movie referred to is the middle one of the three on the cited page, entitled “Chandra timelapse movie”. It is made from 7 successive images of the Crab, taken at approximately 21-day intervals by the Chandra orbiting X-ray telescope between November 2000 and April 2001. Figure 8 shows the sequence of stills from which the movie was made. (Note: The sequence is repeated several times, in a loop, to make it longer. The resulting impression of periodic pulsation is an artifact of the editing, and has nothing to do with the much shorter, 33-millisecond pulsing of the central star.)

Figure 8

This time-lapse movie displays most strikingly, what has long been discussed as a central, anomalous feature of the Crab: processes in different locations of this gigantic object, are evidently synchronized with each other, in a fashion that can hardly be explained on the basis of a point-to-point propagation of “signals” or other influences between those locations. Note the coherent, synchronous changes in prominent features of the X-ray-emitting core, including the evolution of “hot spots” on the inner X-ray ring, the concentric outward-moving shock waves, as well as synchronous changes on the larger, concentric “torus”.

These changes are occurring at what, for a system of astronomical dimensions, is an extraordinarily rapid rate. From the movie, in fact, there is nothing to suggest to us, that we are looking at an object possibly many light-years across. There is nothing in the pattern of changes of the Crab as a whole, for example, that suggests that the Crab experiences any significant limitation connected with the finite velocity of propagation of light.

This raises the question of how “big” the Crab Nebula “really” is. Interestingly, it is the growth process of the Crab that provides the chief means for estimating its approximate scale.

A growing anomaly

Figure 9a shows an image of the Crab taken in 1973, and Figure 9b shows an image of the Crab taken in 2000. Close comparison of the superimposed images, particularly the details of the filamentary structure, suggests that the Crab Nebula — or at least, the outer, “cocoon”-like shell of filaments — is constantly expanding! Systematic comparison of photographs, taken over the last 80 years, shows that the outer filaments of the Crab are expanding radially from the center of the Crab, at an overall average rate of 0.1 seconds of arc per year, as seen from the Earth. Accordingly, the apparent angular size of the Crab, as seen on the celestial sphere, grows by twice that amount, i.e. 0.2 seconds of arc per year.

Figure 9a

Figure 9b

A crucial additional piece of evidence, is the peculiar spectrum of the visible light received from the Crab, sections of which are shown in Figure 10 and Figure 11. Alongside the continuous spectrum emitted from its central region, the overall spectrum of the Crab contains an array of discrete emission lines, originating mostly in the surrounding filaments, at wavelengths that are characteristic of certain known chemical elements.

Figure 10

Figure 11

There is, however, a very striking difference from corresponding Earth-bound spectra. The difference shows up most clearly in Figure 11, where the spectrum of light from the Crab Nebula is “scanned” at varying positions along its major axis. The strongest set of lines — a group of three lines characteristic of the element oxygen — appears “double,” split into two sets which open out into a “necklace”-like shape as the scan moves toward the middle of the axis. One set is “down-shifted” from its normal position toward the longer wavelengths; the other is “up-shifted” toward shorter wavelengths. Note that the gap is biggest in the middle of the Crab Nebula, while the two sets of lines approach each other toward the ends of the axis.

What is the Crab telling us, with this bizarre “necklace” of spectral lines?

So far I have mainly just described the observations, steering clear of any elaborate interpretations of the observed, anomalous characteristics of the Crab Nebula in terms of the “standard textbook knowledge” of physics. We are now approaching a point where the application of “standard theory” can have a useful result, albeit of a negative sort: it leads us into what, for “standard theory” itself, is an insoluble paradox.

The splitting of the spectral lines from the Crab has a simple interpretation in terms of the known principles of the propagation of light. Assuming the whole ellipsoidal “shell” of the Crab is indeed expanding — as the angular growth of its projection on the celestial sphere suggests — the light coming from the portions of the expanding “shell” that are moving toward us should be upshifted in frequency (i.e. toward shorter wavelengths); while the light from portions of the shell moving away from us will be shifted toward lower frequencies and longer wavelengths. Based on presently-known principles, the wavelengths of the emitted radiation would be expected to decrease or increase, respectively, by a proportional amount equal to the ratio of the velocity of expansion of the shell, to the rate of propagation of light.

Now, the actually observed magnitude of the upshift and downshift in the lines is on the order of 0.4% of the wavelengths involved. From the above reasoning we would have to conclude, that the rate of expansion of the Crab’s shell must also be on the order of 0.4 percent (or about 1/230th) of the rate of propagation of light — corresponding, in linear-scalar terms, to a velocity of 0.4% of 300,000 kilometers per second, i.e. about 1300 kilometers per second. (This assumes, of course, that the characteristics and rate of propagation of light are the same in the vicinity of the Crab, as on Earth.)

But when we compare this estimate for the rate of radial expansion, derived from the magnitude of the spectral shift, with the apparent size and rate of expansion of the Crab as seen from the Earth, we come to the conclusion, that the Crab Nebula must be enormously large — many light-years in diameter!

Recall, that the Crab’s rate of radial expansion, as observed from the Earth, amounts to 0.1 second of arc per year. We just concluded, however, that light propagates 230 times faster than the radial motion of the outer filaments of the Crab. That would mean that a light wave, propagating along the Crab’s major axis, would traverse in one year a segment 230 times longer than the yearly increase in distance from the center. The angular size of that segment, as seen from the Earth, would therefore be about 230 x 0.1 seconds of arc = 23 seconds of arc, or about 0.38 minutes of arc on the celestial sphere. As I mentioned earlier, the apparent major axis of the Crab corresponds to roughly 5 minutes of arc on the celestial sphere. Our conclusion: a light wave would take 5/0.38 years, or about 13 years, to propagate from one end of the Crab to the other!

More refined estimates, based on the same method, yield something closer to 10 light-years for the major axis of the Crab. For such a length to subtend an angle of 5 minutes of arc, as seen from the Earth, the distance of the Crab Nebula from the Earth would have to be about 6300 light years. At least, this is what simple geometry would lead us to conclude.
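For the reader who wants to retrace the arithmetic of the last three paragraphs, here is a compact restatement in Python (the inputs are the rounded values used above, so the outputs are correspondingly rough):

    from math import radians

    c_km_s = 300_000              # rate of propagation of light, km per second
    shift  = 1 / 230              # fractional line shift (about 0.4 percent)
    print(round(c_km_s * shift))  # about 1300 km/s: the shell's expansion rate

    growth = 0.1                  # radial expansion, seconds of arc per year
    light  = growth / shift       # light covers 230 times that: 23 arcsec/year
    axis   = 5 * 60               # apparent major axis, in seconds of arc
    print(round(axis / light))    # about 13 years for light to cross the Crab

    # Small-angle estimate of distance: length divided by angle in radians.
    # With the refined ~10 light-year axis this gives roughly 6900 light
    # years, the same order as the ~6300 light years cited above.
    print(round(10 / radians(5 / 60)))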

The shadow of a physical principle

The above estimates serve to clinch the paradox of the Crab’s coherent behavior, which I already pointed out in discussing the time-lapse movie.

If, on the one hand, we assume that the Crab is really such an immense object as the above argument implies, then how is the Crab able to maintain the synchronicity and coherence of the rapidly changing processes occurring in different regions, situated many light-years apart? That coherence could only be due to a “something” that were acting isochronically, at all loci of the Crab simultaneously!

If, on the other hand, the assumptions underlying our estimate of the size of the Crab are invalid, then “something” is acting to the apparent effect, of drastically changing the assumed characteristics of the emission and propagation of light, and the assumptions of geometry, upon which our estimate of the scale-dimension of the Crab Nebula was based.

Either way, the anomaly of isochronic action “outside” the domain of the “standard physics” accounts of “chains of cause and effect” cannot be made to disappear. Different attempts at interpretation merely change the location and apparent form of the anomaly — just as different methods of mapping a curved surface onto a flat plane “blow up” in different ways.

It is by focussing in on this sort of irreducible paradox, that we become able to go beyond the Sensorium and any formalistic interpretation of the Sensorium, to conceptualize the real object that has generated the anomaly.

A number of other, gross anomalies of the Crab Nebula should be brought into the picture, that share the same underlying character. (A more rigorous treatment, to be developed, would replace the scalar measures, employed in the following brief exposition, by appropriate geometrical magnitudes for a corrected, anti-Euclidean representation of the Sensorium.)

Chief among the anomalies to be mentioned is the circumstance, that the Crab Nebula is a powerful emitter of cosmic rays — in fact, one of the most powerful ones known — in the form of photons of ultra-short-wavelength light (gamma rays), having wavelengths trillions of times shorter than those of visible light, and thereby quantum energies many orders of magnitude larger than those of all known types of nuclear reactions, including presently known forms of “matter-antimatter” reactions.

In fact, the entire spectrum of radiation emitted from the Crab Nebula is drastically “upshifted” relative to that of our Sun. While our Sun has most of its output in the visible and near-visible range, most of the gross power output of the Crab is in X-rays, with a substantial part extending into the gamma-ray region. Combining the above estimate of the scale-dimensions of the “Crab” and of its distance from the Earth, with the intensity (brightness) of the radiation received in the vicinity of the Earth, one arrives at the conclusion, that the overall radiation output of the Crab Nebula must be approximately one hundred thousand times that of our Sun, but with the greatly “upshifted” spectrum.

Recently, additional anomalies have come to light, which are coherent with the same “curvature”. It was discovered last year, that the periodic electromagnetic pulses, attributed to the central star (pulsar) of the Crab, contain high-power subpulses lasting only about 2 billionths of a second. On the assumption, that the known characteristics of light emission apply to the Crab, the effective sources of such nanosecond subpulses could be no larger than about 60 centimeters across — the distance light travels in 2 nanoseconds! But to produce a signal of the observed strength at the Earth, 6000 light-years away, the emitting region would have to achieve a power-density corresponding to a billion times that generated in the core of an H-bomb detonation! Alternatively, the effect of a tiny source region could be achieved through isochronic, coherent emission from a larger region of the Crab, according to the principle of a laser.
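The 60-centimeter bound, for its part, is a one-line consequence of assuming Earth-like light propagation:

    c     = 3.0e8         # rate of propagation of light, meters per second
    pulse = 2.0e-9        # duration of the subpulses, seconds
    print(c * pulse)      # 0.6 meters: the maximum size of a conventional source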

In both cases, “something” is acting to “shape” the Crab Nebula processes to the effect, of shifting its activity toward the higher energy-flux-density registers of coherent electromagnetic action. Let us look more carefully at this aspect.

Conical-spiral functions

All evidence points to the role of the Crab’s pulsating star as the “motor” and “organizer” of the entire Crab Nebula, and to the likelihood that this star is a rapidly rotating body, making one revolution every 33 milliseconds (30 cycles a second), which is the period of the star’s apparent pulsation. The axis of rotation of that central star would coincide with the axis of symmetry of the toroidal structure revealed in the Chandra X-ray images, which in turn coincides roughly with the major axis of the ellipsoidal form of the Crab as a whole. From a very slight, but measurable, slowing-down of the observed rate of pulsation, it is surmised that the Crab is constantly converting a portion of the rotational action of the pulsar into various forms of electromagnetic radiation, and other forms of work that might be going on.

Now, since electromagnetic radiation, as it projects into a generalized Sensorium, also has the characteristics of rotational action, the form of the general effect we are looking at, is the transformation of low-frequency action (rotation of the star at 30 Hz) into high-frequency registers of action (X-ray radiation at 10^17 Hz, gamma rays at 10^26 Hz or more).
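Since we are speaking of “registers,” it is worth computing how far above the pulsar’s 30 Hz rotation these radiation frequencies lie. The following minimal Python sketch takes the figures just cited at face value, and expresses the upshift in octaves (the names are ours, for illustration):

    import math

    rotation = 30.0      # pulsar rotation rate, Hz
    xray     = 1e17      # representative X-ray frequency, Hz
    gamma    = 1e26      # representative gamma-ray frequency, Hz

    for name, freq in [("X-rays", xray), ("gamma rays", gamma)]:
        octaves = math.log2(freq / rotation)
        print(f"{name}: {freq / rotation:.1e} times the rotation "
              f"frequency, about {octaves:.0f} octaves higher")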

Aha! What we have, minimally, is a form of conical spiral action: not simple rotational action, but rotational action which constantly transforms itself to higher registers of rotational action.

Let’s not forget, however, that we are not dealing with a simple geometry. We have to locate the real object behind the images projected on the surface of our generalized Sensorium in different wavelengths — together with the conical-spiral characteristics of action adduced so far — within the real Universe: the Universe of three interconnected, Vernadskian phase spaces of nonliving, living and cognitive species of physical principles.

The flux of high-frequency radiation in the core of the Crab is such, that condensed matter, of the familiar earthly sort, could not exist there. Instead, we have a highly polarized plasma being acted upon by the rapidly spinning, intensely magnetic star — a setup suited, we would presume, to actually generate, by polarized fusion or analogous sorts of processes, the kinds of organized forms of “matter” that would be a precondition for further evolution of a solar system in the direction of Biosphere and Noosphere phases of development. The conical function of “upshift” of electromagnetic radiation in the Crab would thus be multiply-connected with a second conical function, expressing the generation of an evolving “Mendeleyev table” of “eigenstates of matter” within the core region of the Crab.

This suggests the notion of a manifold of multiply-connected conical action, as a necessary feature of the region of tangency between an anti-entropic intention guiding a physical process, and the phenomena generated by that process in a generalized, spherically-bounded Sensorium. Therefore, the Crab Nebula should not be conceived as an object in a Euclidean-Cartesian “three dimensional space”, but rather as a singularity in terms of that doubly-multiply-connected domain. (We can surely do better, but that is our first shot!)

Does the Crab Nebula have a “personality”?

The more we investigate the Crab, the more closely it resembles the kind of object that Lyndon LaRouche hypothesized, many years ago, as an early phase of the development of our own solar system:

“Currently, our best knowledge is, that the Solar system began as a fast-spinning, youthfully exuberant solitary Sun in the universe at large. According to Kepler’s principles, this young Sun spun off some part of its material into a disc orbiting the Sun itself. If we assume polarized nuclear fusion occurring within that disk, then it were possible for polarized fusion, and, presumably, only polarized fusion, to have generated the observed periodic table of the Solar system. That fusion-generated material from the disk would have been “fractionally distilled” into approximately the Platonic orbits defined by Kepler.”

Granted, this view of the evolution of the solar system is totally at odds with the “official” account, both in nominal content and, most importantly, in spirit.

Pick up any astronomy book or research paper, and you will find, in ritual propitiation of established academic doctrine, the Crab Nebula constantly referred to as a “supernova remnant”. Note the implied, entropic misconception of the Universe, expressed by that expression. We are supposed to think of the Crab, not as a process evolving lawfully toward higher states of organization, but as a mere “remnant”, a “left-over” from an exploded star. In an aging, entropic Universe there would seem to be no room for the “youthful exuberance” of stars generating their own, brand-new planetary or analogous systems. Not surprisingly, none of the astrophysical specialists predicted anything like the features revealed in 1999-2000 by the Chandra X-ray telescope, despite the elaborate ivory-tower mathematical models they develop to “explain”, after the fact, earlier observations. In fact, Lyndon LaRouche has once again been shown on the mark, while the so-called experts were way off.

In reality the Crab Nebula displays all the characteristics of a happily evolving Keplerian system, including the driving, organizing role of its central singularity, in exact accordance with Kepler’s conception of our Sun. The Crab continues its exuberant development, and no Earthbound pessimist can do anything about it!

This brings us back to the question, of the nature of astrophysical “objects”. Is the Crab Nebula merely an “effect” or phenomenon of the overall laws of the Universe, like the attraction of magnets or the blue color of the sky? Or are we justified in ascribing to it a certain, individual character or personality — a character that could only be known as a thought-object to the mind? Certainly, the quality of exuberant passion could pertain only to a Leibnizian monad, not to a mere physical effect.

Any rigorous exploration of this question should adopt Lyn’s suggestion, from some years ago, to organize experimental scientific inquiry around a “3×3” schema: We make three rows, one for each of the three Vernadskian sub-phase-spaces of the Universe, corresponding to the domains of ostensibly non-living processes, of living processes, and of the processes associated with the action of cognition in the Universe. Then, we make three columns, corresponding to the microphysical, macrophysical and astrophysical ranges of scale-lengths of experimental investigation. To establish the validity of any purported universal physical principle, we must demonstrate its efficiency vis-a-vis all nine of the 3×3 domains of experimental investigation — the latter understood, of course, not in the sense of “objective science”, but as domains subsumed by Man’s action upon the Universe.

The most fascinating question, posed immediately by what we have said here, is the relevance of the Crab Nebula to the manifestations of the principle of life on the astrophysical scale.

Beyond that, do we not perceive a certain playful, daunting character to the anomalies, the Crab Nebula seems to throw at us? The face seems far away, but the smile is very close.

Construct a Solar Astronomical Calendar

by Larry Hecht

The evident success of the ongoing project to measure the retrograde motion of Mars, suggested to me that we are ready to take up another challenge in observational astronomy–the construction of a solar astronomical calendar.

This is a challenge that Lyn posed to us nearly 25 years ago, in part influenced by a trip to India, where he came into contact with the work of turn-of-the-century Indian independence leader Bal Gangadhar Tilak. I first began to seriously take up Lyn’s challenge in connection with my own efforts to understand Tilak’s work some time in the mid-1980s. To be honest, I could not understand at first why Lyn kept talking about “constructing a calendar,” which I thought was something easily obtainable at any stationery store. Once I began to understand what was involved, however, I found that this project led in a number of very interesting directions.

Tilak’s work involves the hypothesis that verses in the sacred Vedic hymns refer to astronomical phenomena, which could only be known by a people living at a point at or above the Arctic Circle. His hypothesis immediately brings into play at least three important and interlocking branches of science: astronomy, Indo-European philology, and climatology, all necessarily subsumed under the topics physical economy and universal history. One of the most provocative aspects of Lyn’s discussion on the subject was the hypothesis that a highly-developed, poetical-musical language (such as was indicated by the Sanskrit, for example) would be required for the task of recording and preserving astronomical observations over long periods of human history. Rather than the object-fixated grunts of some doomed society of primitive Rave-dancers, forms of verbal action capable of expressing the transformative nature of natural law would be required, including verbal forms capable of expressing the subjunctive mood necessary for any hypothesis, varying degrees of completion of action, and many other subtleties.

Attempting to read Tilak’s {Arctic Home in the Vedas,} however, produced some immediate problems. Early in the book, the author began talking about astronomical phenomena, such as the precession of the equinox, the seasonal motions of the Sun, and the relationship of the Sun to the zodiacal constellations, which, I soon realized, I had no real understanding of. To make any sense out of his thesis, it was clearly necessary to have some grasp of these things, so I decided at some point to dig in and make the effort. As I had been reading books about astronomy since childhood, it was something of an embarrassment to have to admit to myself that I could not even explain the meaning of the seasons in any cogent way. A joke which Jonathan T. had been making at that time had stuck in my mind, and was helpful in overcoming the embarrassment. The joke, which I think he may have included in the title of a Fusion magazine article he wrote at the time, was the phrase “Astronomy without a Telescope.”

A Simple Calendar Observatory

The method I suggest here for constructing a solar astronomical calendar, is not an exact replica of the steps I took. However, I think it will work, and, under the present circumstances, where collective and enthusiastic pedagogical activity is taking place all around, it should allow us to proceed quickly and happily.

I suggest we begin by constructing something which will resemble, in principle, the famous Stonehenge, an historical artifact which has unfortunately taken on all sorts of cult-like significance, but which is actually just one of many still-standing astronomical observatories from the Megalithic period. Our observatory will be much simpler. Probably the most difficult part of this project will be to find a level site with a good view of the horizon, especially to the east and west, to which we can return regularly. The calendar observatory need consist of no more than some stakes in the ground, arranged around part of the circumference of a circle, and one stake at the center.

Now, here is what I suggest we do. On the first day, we make two observations, one at sunrise and one at sunset. We begin by locating a center for our circle, and driving a stake in the ground at that point. This will be the siting post for all the observations. Now, choosing an appropriate circumference for our circle, and using a rope or chain to keep a constant distance, we plant a stake in a line from the siting post to the point where the Sun rises over the horizon. We return before sunset, and similarly drive a stake in the ground on the other side of the circle where we see the sun set.

That simple observation, repeated over the course of a year, will provide us with an experimental understanding of many important concepts in astronomy, including the summer and winter solstice, the vernal and autumnal equinox, and the equation of time (which, by the way, bears a certain relationship to the lemniscate). But this is only a beginning. For, using no more than our simple observatory, we may next begin to observe the motion of the Sun, not only with respect to fixed positions on the ground, but also with respect to the stars. From this we may develop many new concepts, including that of the precession of the equinox, which plays an interesting part in this history of science, which, we shall also come to see, is the history of language.

But we will also have an advantage over our predecessors, who were carrying out such observations probably tens of thousands of years ago, several cycles of glaciation back into the pre-historic past. By use of modern means of communication, we will be able to rapidly compare observations made at widely divergent positions on the Earth. We shall have the great advantage of having access to observations at the high northern latitudes of Stockholm and Copenhagen, the near-equatorial latitudes of Bogota and Lima, and many middle latitude sites in both the Northern and Southern Hemispheres. This will really make for some fun, some paradoxes, and definitely ensure that there is no “right answer” to be looked up in the back of the book.

To start out, I suggest we take the time to explore and secure a good site for our calendar observatory, and begin with the first very simple observation of marking the rising and setting points of the Sun. Between these points, we will have a circular arc on the ground, whose angle can be measured and recorded. It would also be useful to make some observations of the path of the Sun in the sky over the course of a day. From this observation, and the position of our two stakes in the ground, we should also be able to come to a clear understanding of the meaning of North, South, East and West, and also of the word Noon. For some added fun, we might try to measure the Sun’s greatest altitude in the sky, and observe at what time it occurs on our watches.

With the measure of the circular arc between the two stakes in the ground recorded, it will be most interesting to immediately compare the results with those found on approximately the same day at other calendar observatories around the globe, as one could do, for example, on an international youth call. If it should happen that some of the observations should take place around the 22nd of September, a very interesting paradox will arise when the observations from different latitudes are compared. (The path of the Sun and its position at Noon ought also to be observed on that day.) But it will only get more interesting, as the subsequent observations are taken, and compared for the different latitudes.
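For readers who want a preview of the latitude-dependence involved, the standard spherical-astronomy relation for the sunrise azimuth, cos A = sin(declination)/cos(latitude), can be tried out numerically before going outdoors. A minimal sketch in Python, ignoring refraction and the Sun’s finite disk; the latitudes are approximate, and the function name is ours:

    import math

    def sunrise_azimuth_deg(latitude_deg, declination_deg):
        """Azimuth of sunrise, in degrees east of due north, for a flat
        horizon, ignoring refraction and the Sun's finite disk."""
        lat = math.radians(latitude_deg)
        dec = math.radians(declination_deg)
        return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

    # Summer-solstice declination is about +23.4 degrees; try 0.0 to see
    # what happens at the equinox, whatever the latitude.
    for city, lat in [("Stockholm", 59.3), ("Bogota", 4.6)]:
        az = sunrise_azimuth_deg(lat, 23.4)
        print(f"{city}: Sun rises {az:.1f} deg east of north; "
              f"rise-to-set arc through north is {2 * az:.1f} deg")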

So, let the fun begin.

The Case Of Max Planck

By Philip Rubenstein

If the history of humanity and human knowledge proceeds by crises–by solving the problems and anomalies that arise as we reach the boundaries of our development or knowledge–then might it not be the case that, if the path we have chosen at some past point is wrong or in error, we must, of necessity, go back to that fork in the road and correct that choice? Or, if it is said that we cannot really go back in time, still we must go back and change our choices, our axiomatic orientation, thus to allow an actual change in path from here on, and to see what else was led astray as a consequence of that event. It is often just such a rigorous journey that is rejected, sometimes merely out of the horror of the labors involved, but, also, out of fear of what wreckage we may find on the way.

When we look at 20th century science, for all of its accomplishments, it rests, in the main, on achievements derived from the 18th and 19th century continuation of Leibniz’s tradition. In fact, much of the fundamental science of the present obscures that reality, and little of a fundamental nature, other than confusion, has been added in this past century, except as derived from that obscured heritage.

If we look to the case of Planck and the attack on him, and to his defense by Einstein and by Planck himself (as referred to recently by Lyn, and in Caroline Hartmann’s work), we find a very significant such point, much obscured. And seeing what was obscured is of great importance.

While others know this story far better, and would wish to point out critical areas to pursue fruitfully, the essential point is that Planck’s quanta simultaneously upset two groups. The notion put forward by Boltzmann, and by those who viewed the universe as a simple continuum, was that the absorption and emission of radiant energy would occur in a way conforming to that uniformity. Thus, as an absorbing body was heated, it would emit through all frequencies. Since, however, the upward direction of increasing frequencies was infinitely larger, we would be led to a “violet, or ultraviolet, catastrophe”: the predominant range, and infinitely so, would be in the upper frequencies. One might note how OFTEN these views lead to catastrophes—Olbers’ paradox, entropy, etc.

In reality, of course, this does not happen. In fact, the emissions peak, and fall off. Like other such cases, a real event is paradoxical from a given set of assumptions.

What solution is available? Planck ultimately, and with great thought, proposed that radiant energy is, in fact, emitted in quanta, such that a constant proportion exists between the frequency of radiation and the quantum in which it is released. That ratio is Planck’s constant, {h}. Thus, as the frequency increases, the energy of the “packets” or quanta likewise increases, and so the amount of work required to accomplish this increases, and the condition is bounded such that the “catastrophe” fails to occur.
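To see, numerically, how Planck’s {h} bounds the process, one can compare the continuum (“Rayleigh-Jeans”) prediction with Planck’s radiation law at a few frequencies. The following is a minimal sketch in Python, with rounded constants and variable names of our own choosing; it is a numerical illustration, not Planck’s own derivation:

    import math

    h = 6.626e-34   # Planck's constant, J*s
    k = 1.381e-23   # Boltzmann's constant, J/K
    c = 3.0e8       # speed of light, m/s
    T = 5000.0      # temperature of the radiating body, K

    def planck(nu):
        """Planck's law: spectral radiance peaks, then falls off."""
        return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

    def rayleigh_jeans(nu):
        """Continuum prediction: grows without bound -- the 'catastrophe'."""
        return 2 * nu**2 * k * T / c**2

    for nu in [1e13, 1e14, 3e14, 1e15, 3e15]:
        print(f"nu = {nu:.0e} Hz: Planck {planck(nu):.2e}, "
              f"continuum {rayleigh_jeans(nu):.2e}")

The two agree at low frequencies, but at high frequencies the continuum value keeps growing, while the Planck value peaks and collapses.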

But this is a radical idea! The simple infinite continuum of spreading electromagnetic radiation is now transformed. In some way, individuals, singularities are formed and at increasing densities. This concept was anathema.

Further, as this applies to atoms and the like, an apparently predetermined ordering seemed to be imposed–for example, in the placing of electrons in atoms.

This prompted Rutherford to write to Bohr, who had made use of this part of Planck’s work: “it seems to me that you would have to assume that the electron knows beforehand where it is going to stop.” A response that resonated with the usual offense that philosophical empiricists feel at the nature of science.

Bohr himself was one of those who nonetheless attempted to contain Planck’s idea within the “either, or” scheme of “wave and particle” of the mechanistic outlook, by saying it is both — embracing contradictions, so to speak.

This path was not unlike the Machians or positivists for whom science is not the search for truth, but merely for logically consistent systems, with which different formulations may be equally acceptable as long as the appearances are saved.

In fact, both the entropists and the positivists recoiled at precisely the concept that inorganic nature exhibited a density and an ordering, creating individuals that mediate higher-ordered processes. Thus, the challenge of the turn of the twentieth century was placed before us with Planck’s concept, along with other similar ideas in physics, biology, etc. The 20th century chose to REJECT the continuation of the Leibniz tradition.

Planck, who was a conscious Leibnizian, said in his autobiography:

“While the significance of the quantum of action for the interrelation between entropy and probability was thus conclusively established, the great part played by this new constant in the uniform regular occurrence of physical processes still remained an open question. I therefore tried immediately to weld the elementary quantum of action, {h}, somehow into the framework of classical theory. But in the face of all such attempts the constant showed itself to be obdurate…

“My futile attempts to fit the elementary quantum of action somehow into the classical theory continued for a number of years, and they cost me a great deal of effort. Many of my colleagues saw in this something bordering on a tragedy. But I feel differently about it, for the thorough enlightenment I thus received was all the more valuable. I now knew for a fact that the elementary quantum of action played a more significant part in physics than I had originally been inclined to suspect, and this recognition made me see clearly the need for the introduction of totally new methods of analysis and reasoning in the treatment of atomic problems.”

Einstein, brought to Berlin by Planck, contributed to developing Planck’s idea (of which relationship much of significance could be said for the political history of the 20th century). In a speech on Planck’s 60th birthday in 1918, he said:

“The supreme task of the physicist is to arrive at those universal laws from which the cosmos can be built up by pure deduction. There is no logical path to these laws; only intuition, resting on sympathetic understanding of experience, can reach them. In this methodological uncertainty, one might suppose that there were any number of possible systems of theoretical physics all equally well justified; and this opinion is no doubt correct, theoretically. But the development of physics has shown that at any given moment, out of all conceivable constructions, a single one has always proved itself decidedly superior to all the rest. Nobody who has really gone deeply into the matter will deny that in practice the world of phenomena uniquely determines the theoretical system, in spite of the fact that there is no logical bridge between phenomena and their theoretical principles; this is what Leibniz described so happily as a `pre-established harmony.’ Physicists often accuse epistemologists of not paying sufficient attention to this fact. Here, it seems to me, lie the roots of the controversy carried on some years ago between Mach and Planck.

“The longing to behold this pre-established harmony is the source of the inexhaustible patience and perseverance with which Planck has devoted himself, as we see, to the most general problems of our science, refusing to let himself be diverted to more grateful and more easily attained ends. I have often heard colleagues try to attribute this attitude of his to extraordinary will-power and discipline–wrongly, in my opinion. The state of mind which enables a man to do work of this kind is akin to that of the religious worshipper or the lover; the daily effort comes from no deliberate intention, or program, but straight from the heart.”

Much of the obfuscation of the 20th century could be corrected by going back to the wrong fork taken under the influence of the likes of Bohr and formalism, and discovering the true nature of Planck’s contribution.

How Benjamin Banneker Discovered The Principle Of Proportionality In A Mathematical Puzzle: A Peace Of Westphalia Pedagogical

by Pierre Beaudry,

Leesburg, October 30, 2003

Some people said that the design for the city of Washington D.C. came from the heavens; that the French architect, Pierre L’Enfant, determined the location of the House of Congress, and the House of the President, in accordance with a divine plan written in the stars, and that such an orientation was PROPORTIONAL with the design of the policy of MANIFEST DESTINY, which had inspired George Washington, Benjamin Franklin, and Alexander Hamilton in the creation of a true sovereign republican nation-state on these shores. This is absolutely true. That was the rigorous and conscious intention, and only foolish people believe that the design of Washington, D.C. was based on some mystical freemasonic mumbo-jumbo.


Living Chemistry

by Brian Lantz

Recall that yellowing periodic table, hanging on a wall in your science classroom, or perhaps the color-coded version that appeared at the back of your chemistry textbook. You read it in that textbook: modern science bows in the direction of Dmitri Ivanovich Mendeleyev, and gives him credit for the discovery of the periodic table of elements. Ask yourself whether “textbook” science understands Mendeleyev at all. The answer may not be known to you, and that, perhaps, will pique your curiosity. What is taught today, of the actual methods of Lavoisier, Pasteur, Mendeleyev? Do we know anything of those methods of Mendeleyev, which led him to his famous discovery? He knew nothing about electron shells, which are used to explain the periodic table in your textbook. Consider that his writings are now virtually nonexistent in English, and only scantily available, or studied, anywhere in our noosphere. Perhaps, a benefit derived from this pedagogical series will be an appreciation of the methodological “roots” of that enormous chemical knowledge bequeathed to modern society, and a further recognition that only if noetic methods are applied, might we reverse the very definite, measurable entropic effects of ignorance!

Consider the following comment of Dmitri Ivanovich Mendeleyev (1834-1907) – a correspondent of Pierre and Marie Curie, and intellectual predecessor of Vernadsky – taken from a lecture before “The Royal Institution of Great Britain,” May 31, 1889. He is speaking of his periodic table, whose “groups,” “families,” and “periods” reveal the periodic ordering of the elements.

“The tendency to repetition – these periods – may be likened to those annual and diurnal periods with which we are so familiar on the earth. Days and years follow each other, but, as they do so, many things change; and in like manner chemical evolutions, changes in the masses of the elements, permit of much remaining undisturbed, though many properties undergo alteration. The system is maintained according to the laws of conservation in nature, but the motions are altered in consequence of the change of parts.”

Can we not surmise that, like Kepler, Mendeleyev plumbed the universe, and found it alive with {intention}? Mendeleyev’s lecture was entitled, “An attempt to apply to chemistry one of the Principles of Newton’s Natural Philosophy.” In that lecture, he stated that {only one} of Newton’s three laws of motion could be applied to chemical molecules, and he thanked Lavoisier (and also Dalton) for recognizing, in “the unseen world of chemical combinations,” {the same orderings} which, he pointed out, Kepler – and, he said, Copernicus – discovered in the planetary universe.

We will return to dialogue with our new-found friend, Dmitri Mendeleyev, soon. In this and following pedagogicals, we prepare the way by considering some of the chemistry of the seventeenth and eighteenth centuries, and particularly the revolution worked by Mendeleyev’s ‘friend’, Antoine Laurent Lavoisier (1743-1794).

What Is Elementary?

Today, the typical chemistry textbook begins from discrete “building blocks.” These discrete parts, presented as self-evident in-and-of-themselves, are ripped from the larger cycles, ‘periods,’ and evolutions of which Mendeleyev spoke. They can only appear as if dead: Elements are compiled from atoms, which in turn are differentiated by their atomic number, etc. Molecules are then built up out of combinations of these discrete elements which, we have just been told, are not really so elementary. Only then are interactions of molecules – “inorganic” by their nature – built up. Today, the colors and techniques of computerized graphics present this all vividly to the eye, but no more alive. The principle of life is really nowhere to be found. Lyn has pointed us to Erwin Schrodinger’s influential little paper, {What is Life?}, and there you may find a banal, lifeless rationalization: life comes down to chromosome fibres, which are “aperiodic crystals,” albeit “novel and unprecedented.”

How refreshing, then, to consider that the scientific revolution associated with Lavoisier and his circles, which in turn was also the acknowledged foundation for Mendeleyev’s work, began with the study of {respiration}, and of what Lavoisier (borrowing from Stephen Hales) termed “plant and animal economy” – life! Lavoisier’s conscious jumping-off point, in 1773, as a guide to his future work, was the set of topics of fermentation, vegetation, respiration, and the composition of bodies formed by plants and animals. The development of the scientific field of chemistry proceeded from the study of {life}, as certainly as the physical sciences, taken as a whole, began with the study of the heavens (astrophysics).

“Airs”

Of course, mankind had long had a practical understanding of many natural, chemical processes. Man has been making wine and beer for thousands of years, to generally good effect, but that is not science. Today, many of our post-modern denizens might find the idea of the discovery of oxygen an ‘intuitive’ no-brainer: “Hey, it’s what we breathe, and somebody named the stuff oxygen.” Thank God that Leibniz, Franklin, Priestley, and Lavoisier, among others, understood that the development of physical economy required that man discover new physical principles, not merely name them! Let us lay a foundation. Consider now, albeit briefly, a few provocative examples of early work, prior to Lavoisier, into the whys of chemical and physical processes.

It had long been known that an animal could only live for a certain length of time in a given quantity of common air. But why? In 1660, Robert Boyle demonstrated that a flame is extinguished, and an animal dies, in an evacuated chamber of an air pump. Is there a connection between these two empirical facts, and what is it? In the 17th century it was also shown that venous blood becomes arterial in passing through the lungs, and that the color change takes place only so long as the lungs are supplied with fresh air. However, it was also known that air {in} the blood could be fatal – certainly a paradox. It was also thought, based on no small amount of empirical evidence, that air – one of the four physical elements, along with fire, water and earth – did not enter into chemical combinations. To the practitioners of the principle of sufficient reason, the contradictions and paradoxes were everywhere!

The answers, as we will discover, were not “right in front of their noses.” The effort to carefully isolate the essential paradoxes required painstaking work; the proofs were indirect, the actual experiments tedious, and the means cognitive. Facts did not, and do not, “add up.”

By the middle of the 18th century, work on the chemistry of “airs” prompted the consideration of new postulates, if not revolutionary new axioms. Joseph Black isolated what he coined “fixed air” – a distinct “aeriform” substance which, unlike ordinary air, could combine (“fix”) with lime and with alkalis. This fixed air was deadly; observation found that animals placed in it died in a matter of seconds. Joseph Black then convinced himself that the exhaled air of respiration was the same as his fixed air, “that the change produced on wholesome air by breathing it, consisted chiefly, if not solely, in the conversion of part of it into fixed air. For I found, that by blowing through a pipe into lime-water, or a solution of caustic alkali, the lime was precipitated, and the alkali was rendered mild.” Black also found that fermentation and burning charcoal produced his “fixed air.” Air obviously entered into chemical combinations.

We leave it to the reader to investigate what modern chemistry would say about the process(es) involved here. (“Lime-water” is made up from the mineral, not the fruit.) We do see, even without satisfying the itch to look into a chemistry textbook, that Joseph Black, among others, was onto something. Respiration produced a kind of gas, which combined (“fixed”) with lime, but why?

To ordinary air and “fixed air” were soon added others. “Inflammable air” was produced by certain metals in dilute acids, and rigorously determined to be distinct from both common and “fixed” air – including by observing its effects on animals. Even though it was not known what animals (and humans) inhale or exhale, or the actual role of respiration in physiology, the effects of “airs” on respiration were an obvious reference point!

Cycles

Enter Benjamin Franklin’s student and collaborator, Joseph Priestley, who became, by the early 1770’s, the most determined investigator of new “species” of airs. Joseph Priestley was among those who became intrigued by an experiment first done decades earlier: Placing a small animal under a glass inverted over water, he observed that its breathing caused the water level to rise in the glass, up to 1/27 (or thereabouts) of the total volume of the common air originally enclosed. The air diminished in volume! The “common air” we breathe, Priestley hypothesized, drawing upon his wide-ranging work with various airs, was “disposed to deposit one of the parts which compose it.”

That air might be a composite was, in itself, a potentially axiom-busting notion. Priestley, who studied putrefaction and compared it, through experiments, with respiration, also did not believe that Joseph Black, et al., had proven that “fixed air” alone was created by respiration. “Animal and plant substances which are corrupted furnish putrid emanations, and fixed air or inflammable air, according to the time and circumstances,” reported Priestley – a not-unimportant observation, as we will see.

Further, in studying the effect of respiration on air, which he originally understood to be a “corruption or infection” of the air, he rigorously reported, “There is no one who does not know that a candle can burn only a certain time, and that animals can only live for a limited time, in a given quantity of air; {one is no more familiar with the cause of the death of the latter than with that of the extinction of the flame under the same circumstances, when a quantity of air has been corrupted by the respiration of animals placed within it}.” [Emphasis added -bl]

Let us pause along our trail leading up to Lavoisier and his work. We have seen that types of airs – almost entirely ‘invisible,’ directly, to the senses – were now being differentiated, and compared. The ability of some airs to “fix” to certain known substances had also been recognized – indirectly. We have seen that the exhaled air of respiration had features comparable to the air produced by fermentation and by burning charcoal. Also, whatever air, or change in the air, caused a candle to go out, in almost every case also killed a mouse or bird! (They also found that the animal, if removed from the bell jar, could often recover.) All of this is indirect – non-empirical – as we have seen with the lime-water experiments, but perhaps most transparently with the rise in the water level, in the bell jar, with the respiring mouse – a kind of barometer.

Consider now a stunning contribution, ‘holistic’ in nature, from Joseph Priestley: Priestley tenaciously believed that “nature must have a means” of reversing the process of respiration which “corrupted” ordinary air! Why? As animals died if exposed only to the corrupted air (or ‘fixed air’) expelled in respiration, Priestley argued, the mass of the atmosphere would long ago have become inhospitable for the sustenance of animal life! Basing himself on this certainty – that, in effect, the universe was not entropic, but rather ‘the best of all possible worlds’ – and testing the effects that plants might have on the “corrupted air” of man and animal, Priestley discovered that green plants restored this corrupt air to respirable common air! Here was a cycle, discovered among “airs”, as certain as those to be found in the orbits of the planets.

Lavoisier

Lavoisier, who warmly admired and carefully studied Joseph Priestley’s ongoing work, and was himself a part of Franklin’s extended network, shared Franklin’s and Priestley’s underlying, if unstated, {Leibnizian} outlook.

In his early review of Priestley’s work, and in undertaking his own experiments to confirm Priestley’s, Lavoisier recognized apparent, crucial anomalies in Priestley’s results, as based on Priestley’s own thorough, well-circulated reports. Utilizing “baths” of mercury (first utilized by Priestley), rather than water, in which a glass bell of “airs” could be contained and changes in their volume measured, Lavoisier drew certain distinctions. Lavoisier noted, in particular, the difference between the air from putrefying animal matters (which, in what follows, Lavoisier designates as the “fixed air”) and that of respiration, and the difference of both from common air:

“Air which has thus served for the respiration of animals is no longer ordinary air: it approaches the state of fixed air, in that it can combine with lime and precipitate it in the form of calcareous earth; but it differs from fixed air (1) in that when mixed with common air it diminishes the volume, whereas fixed air increases it; (2) in that it can come into contact with water without being absorbed; (3) in that insects and plants can live in it, whereas they perish in fixed air.”

In short, Lavoisier noted that exhaled air, and what he here distinguishes as fixed air, may also be distinct “airs”. You may have already leaped to conclusions, or tried to, calling up terms like “CO2,” “nitrogen,” etc. Stop yourself, and consider what you actually know – have discovered – about the phenomena in question. Relax, and place yourself in the shoes of Joseph Priestley and Antoine Lavoisier. After all, how could {we} prove, for example, something which we probably all assume: that these different “airs” are actually, elementarily, different airs, as opposed to being different “fluxes” or “variations” of a single air, under varying conditions of moisture, light, pressure, etc.? That was still something that Priestley and Lavoisier had not yet answered for themselves.

Let us jump ahead, to Chapter II of Lavoisier’s {Traité élémentaire de Chimie}, published in Paris in 1789, to also appreciate Lavoisier’s universalizing, non-empirical standpoint, alongside that of Joseph Priestley. Lavoisier was to coin the term ‘gasses’ to replace the more confusing term, ‘airs,’ as we will see. In Chapter I, Lavoisier outlined his working premise, of an underlying process in nature by which there is a “separation of particles of bodies, occasioned by caloric.” (Caloric (heat) was understood by Lavoisier to be a substance, itself a gaseous state of matter.) Here then, just from the second chapter, is what he writes:

“These views which I have taken of the formation of elastic aeriform fluids or gasses, {throw great light upon the original formation of the atmospheres of the planets, and particularly that of our earth}. We readily conceive, that it must necessarily consist of a mixture of the following substances: First, of all bodies that are susceptible of evaporation, or, more strictly speaking, which are capable of retaining the state of aeriform elasticity in the temperature of our atmosphere, and under a pressure equal to that of a column of twenty-eight inches of quicksilver in the barometer; and secondly, of all substances, whether liquid or solid, which are capable of being dissolved by this mixture of different gasses.”

[Emphasis added-bl]

Lavoisier then writes that, {to better consider the issues involved}, one might consider, “If, for instance, we were suddenly transported into the region of the planet Mercury, where probably the common temperature is much superior to that of boiling water”, and pressures would also be transformed. For Lavoisier, no Aristotelian or neo-Aristotelian division exists, between the heaven and earth or between macrocosm and microcosm! Lavoisier concludes Chapter II with an hypothesis, regarding the possible “inflammable fluids” that might exist in the lighter upper strata of air (atmosphere), and their relationship to “the phenomena of the aurora borealis and other fiery meteors.” [Emphasis added – bl]

To be continued…

On the political economy of the Leibniz-Franklin-Priestley tradition, the interested reader is referred to the February 9, 1996 EIR feature, “Leibniz, Gauss shaped U.S. science successes”.

Living Chemistry, Part II

REVOLUTIONARY CONSERVATION

In his private memorandum of February 1773, Antoine Lavoisier stated that it was “the operations of the plant and animal economy,” together with “the operations of art,” which absorb and disengage air. Lavoisier continued, “one of the principal operations of the animal and plant economy consists in fixing the air, in combining it with water, fire, and earth in order to form all of the composed with which we are acquainted.” Can we consider this vantage point a foreshadowing of Vernadsky’s much later discovery of the ordered phase-space relationship of the noetic, to the biotic and abiotic domains? Place Lavoisier’s 1773 statement in context, here simply considering Joseph Priestley’s discovery, acknowledged by Lavoisier, that the functioning of the atmosphere necessarily includes the respiration of plants, as the complement to the respiration of animals and man. Here, the atmosphere itself is a creation of living processes, taken as a totality, and those living processes act on the rest of nature, “in order to form all of the composed…” Benjamin Franklin’s own work with lightning also comes to mind.

That Priestley and Lavoisier be understood as forerunners of Vernadsky, as figures united in the simultaneity of eternity, is now of special significance. While anyone familiar with Lavoisier’s work and notebooks would realize that the principle of life is central, today his best-known idea is used to promote the opposite. An “axiom” of Lavoisier’s is given the modern, imputed content of systems analysis, a principle of no-change, ruling out the efficient existence of life.

Lavoisier’s Hypothesis

I think that it is very important to quote Lyn, from his latest paper, “A new Guide For The Perplexed – How The Clone Prince Went Mad!” to help us consider Lavoisier’s axiom. This is taken from the section of his paper titled, “The Definition Of Knowledge,” wherein he referred to Kepler’s discovery of universal gravitation and Fermat’s preliminary, experimental definition of the isochronic principle. He writes,

“The solution for such an ontological paradox, is the discovery of a verified hypothesis. By hypothesis, we signify an idea which has the quality, in form, of a universal physical principle. To qualify for the title of hypothesis, that idea must show either that some relevant axiomatic assumption of the believer was false, or that some additional axiomatic assumption, that of the hypothesis, would produce a new system of thought consistent with all of the relevant evidence. If a certain uniquely appropriate quality of design of experiment, shows that that hypothesis is universally correct, we adopt that hypothesis as a universal physical principle. The result of incorporating such an hypothesis as a universally efficient principle, in that way, is not merely the addition of a new universal principle to the system, but also a revolutionary transformation of the system itself.

“Universal physical principles, and non-deductive transformations of systems, effected in that way, qualify as scientific knowledge, as distinct from, and opposed to sense-impressions. No knowledge was ever acquired, except by means of hypotheses defined as I have just summarized the functional meaning of the term hypothesis, contrary to the famous, silly aphorism of Isaac Newton…”

Lavoisier’s first explicitly stated “axiom” appears already in 1775, in the midst of intensive work on the conundrum of “airs.” In a manuscript titled, “Of elasticity and the formation of elastic fluids,” Lavoisier states that it is to be “an axiom” of his method that all substances can exist as solids, fluids and in “the state of vaporization,” and that “a vaporous fluid is the result of the combination of the molecules of any fluid whatever and in general of all bodies,” with the matter of fire.

This may seem like another “no-brainer” to you, but someone had to actually discover, as a necessary hypothesis, that gases were another form of what we see as liquids and solids! Without the recognition of gases as a {state} of matter, to which quantified measurements could be extended, one could no more account for complete chemical processes (reactions), than one could account for the terror attack on the World Trade Center and the Pentagon by the doings of Osama bin Laden. His hypothesis would show that the process of chemical change was knowable, subject to man’s reason and utilizable for economic development, as opposed to no-change.

Now, let us consider Lavoisier’s second axiom, here presented in the context of wine’s chemistry: fermentation.

“This operation is one of the most extraordinary in chemistry: We must examine whence proceed the disengaged carbonic acid and the inflammable liquor produced, and in what manner a sweet vegetable oxyd becomes thus converted into two such opposite substances, whereof one is combustible, and the other eminently the contrary. To solve these two questions, it is necessary to be previously acquainted with the analysis of the fermentable substance, and of the products of the fermentation. We may lay it down as an axiom, that, in all the operations of art and nature, nothing is created; an equal quantity of matter exists both before and after the experiment; the quality and quantity of the elements remain precisely the same; and nothing takes place beyond changes and modifications in the combination of these elements. Upon this principle the whole art of performing chemical experiments depends: We must always suppose an exact equality between the elements of the body examined and those of the products of its analysis.”

From Chapter XIII, “Of the Decomposition of Vegetable Oxyds by the Vinous Fermentation”

Taken from {Elements of Chemistry}, 1789

How many times have we heard, “Matter can neither be created nor destroyed”? Isaac Asimov and many others have popularized Lavoisier’s ‘conservation of matter’ principle – or ‘conservation of total mass’ – as “a closed system” model, in effect a predecessor to the rantings of the radical positivist John Von [sic] Neumann.

Taking historical specificity into account, however, Lavoisier’s ‘conservation of matter’ “axiom” was a revolutionary supposition, adduced from a newly identified “type” of physical action occurring in the atmosphere, one closely identified with living processes. This new type of action, involving {empirically invisible} cycles and periodicities, is made comprehensible (measurable), as Lavoisier states above, by an experimental decomposition and re-composition of substances, which carefully includes the measurement of the airs that are “fixed” or disengaged in the process. The universe, for man, was increasingly one of multiply-connected action.
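The operational content of that axiom can be made vivid by running the mass balance for the very fermentation Lavoisier describes, using the modern formulas for sugar, alcohol, and carbonic acid (glucose, ethanol, carbon dioxide), which were of course unknown to him. A minimal sketch in Python, for illustration only:

    # Modern mass-balance check of the fermentation Lavoisier analyzed:
    # sugar (glucose) -> alcohol (ethanol) + "carbonic acid" (carbon dioxide)
    #   C6H12O6 -> 2 C2H5OH + 2 CO2
    atomic_mass = {"C": 12.011, "H": 1.008, "O": 15.999}  # grams per mole

    def molar_mass(formula):   # formula given as {element: count}
        return sum(atomic_mass[el] * n for el, n in formula.items())

    glucose = molar_mass({"C": 6, "H": 12, "O": 6})
    ethanol = molar_mass({"C": 2, "H": 6, "O": 1})
    co2     = molar_mass({"C": 1, "O": 2})

    print(f"before: {glucose:.2f} g/mol")
    print(f"after : {2 * ethanol + 2 * co2:.2f} g/mol")  # equal, per the axiom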

Recall that it had still been commonly believed, in the 18th century, that air was one of four physical elements, along with fire, water and earth. Air was held to be distinct, in part, because it supposedly did not enter into chemical combination. Certainly there was little visible evidence to suggest that air did.

With gases axiomatically understood as a third state of matter, Lavoisier was able to zero in on a necessary “sufficient cause” of various hitherto mysterious, or misunderstood, phenomena. Lavoisier, famously “with the aid of the balance,” proceeded with the systematic weighing (indirectly) of what he could not see (gases), weighing elements and compounds in their solid or liquid states, and then measuring their reduction (or increase) in weight as they were combined, and/or converted into “airs” filling measurable volumes. Lavoisier developed and utilized most of the instruments and techniques that we think of today when we think of a chemistry laboratory – flasks, retorts, distillation techniques, etc. – and systematically revamped the nomenclature of chemistry to “name” the newly unlocked discoveries of nature’s processes. Has not Lyn been engaged in this same kind of process?

Let us briefly examine how Lavoisier, working in dialogue with Joseph Priestley, went about laying the basis for this revolution in science and technology.

Weighty Airs

In the early 1770’s Lavoisier prepared systematic reviews of Joseph Priestley’s published reports, a continuing source of new experimental techniques, paradoxes, and results for the world. Lavoisier wrote, “The works of the different authors I have cited, considered from this point of view, have presented me with separate portions of a great chain; they have joined together several links. But there remains an immense series of experiments to carry out in order to forge a continuity…” Lavoisier turned first to {fermentation}.

Priestley, examining the processes of a nearby brewery, became fascinated with the “air” that lay over the liquids in the fermentation vats. He soon announced findings that fermentation produced prodigious amounts of fixed air, “of almost perfect purity.” Lavoisier, reporting on Priestley’s findings, as well as his replication of Priestley’s experiments, told a meeting of the Academy, “…one observes that as soon as the spirituous fermentation takes place there is a release of air in great abundance, but when through the course of the fermentation the liquor begins to turn acidic [vinegar-bl], all of the released air is soon absorbed again to enter into the composition of the acid.”

Here might be another “cycle” of airs, like that established between plants and animals by Joseph Priestley. But were the airs the same – the air released, and the air re-absorbed? Do acids – defined as such by their sour taste and other observable qualities – all contain air? “Acid fermentation” – the name given to the latter phase, when wine turns into vinegar, and beer goes bad – was not yet comprehensible to Lavoisier. He carried out an experiment, mixing equal amounts of flour and water in two flasks, one exposed to air, and the other placed under a bell jar in a pneumatic trough, to measure the changes in the volume of air and in acidity. The results – after a month and a half – were discouraging; there was no identifiable sign of acidity in the mixture exposed to “common air.” Lavoisier noted that he did not understand the processes of “acids” sufficiently, and therefore was not yet prepared to provide “a complete theory” of fermentation.

Lavoisier {then} turned his attention to studying and experimenting with calcination and reduction, as well as combustion and the properties of fixed air – to “flank” the difficulties confronted in his initial skirmish with fermentation.

Lavoisier had been well trained in chemistry and botany by leading French scientists of the old school. The processes of “reduction” and “calcination” were well known from metallurgy. Now, scientists were intrigued, because these processes were found to involve the “disengagement” and “fixing” of “airs,” respectively. This aspect of Lavoisier’s work is better known today, textbook-wise, but ripped out of the context of his unfolding conceptual understanding of living processes.

Reduction and Calcination

Experiments utilizing the water or mercury troughs, bell jar, and pneumatic pump had allowed scientists to identify that the following processes all disengaged Black’s “fixed air”: fermentation (up until the wine or beer began going bad); the exhalation of air in respiration; metallic “reductions”; and the solution of mild alkalis or earths in acids. (Recall, from last week’s pedagogical, that Black’s “fixed air” was able to somehow “fix” in lime-water, with lime being precipitated out, and the resultant air “mild.”)

Iron was, and still is, usually extracted from iron ore by burning it (at 900 degrees plus) with charcoal or “charbon,” in common air. (Coke is now used.) The phlogiston theory, which you may have heard of, explained this by stating that phlogiston, or the “principle of inflammability,” had been absorbed from the charbon, charbon being the source of this phlogiston, hence of inflammability – from which even the word, carbon, is derived. The burning of iron ore, and other metallic ores, with carbon was called “calcination.” What was left after this burning was termed the calx. What was now determined, by burning metal with carbon under a bell jar suspended over a trough of water or mercury, was that calcination involved the “fixing” of airs in the metals – the volume of air in the bell jar was reduced, and the calx weighed more than the original metal! (Hold that thought.)

Other metals, such as copper and nickel, were extracted by a different, but related, process. First the ore containing copper, for example, was “roasted” in common air. This was termed reduction, as the weight of the ore was reduced, while the surrounding air increased in volume. It was now determined, utilizing the apparatus already discussed, and the tests on the “airs” already utilized – lime-water, candle, and bird or mouse – that metallic reduction specifically involved the disengagement of Black’s “fixed air.” The phlogiston theory stated that it was phlogiston that had been released, with some kind of effect on the air.

Acids are also used in the extraction of metals such as copper [hydrometallurgy-bl], with an increase in the volume of air. It was also known that, when acids were applied to metal calces (the plural of calx) at room temperature, a type of air was measurably “disengaged.” Therefore, Lavoisier thought that calcination and reduction, combined, were “a complete system” – what Gregory Bateson, Von Neumann, etc., would call a closed system. Reduce a metal with acid, and disengage Black’s fixed air; burn a metal with carbon, and absorb Black’s “fixed air.”

There was also a very significant wrinkle: with increasing expertise in manipulating the new experimental apparatus, and increasing knowledge of airs common, fixed, and inflammable, the results of further experiments with calcination and reduction were paradoxical!

Getting the lead out

Experiments with lead undid the attempt at a simple solution – and opened another door. Lavoisier had observed, “with surprise,” that lead in a closed chamber could be calcined only to a limited degree. “I began at that time,” he put down in his notebook, “to suspect…that the totality of the air which we respire does not enter into the metals which one calcines, but only a portion, which is not abundant in a given quantity of air.” He found that the calcination of lead could not consume more than one-sixth to one-fifth of the total volume of the air enclosed. The {combustion} of phosphorus also yielded similar results. As regards the reduction of the lead calx, known as minium: a sparrow, a mouse and a rat introduced into the “air” released by the reduction of minium were “dead on the spot.” Reduction of lead calx produced “fixed air”; but if calcination of lead did in turn “fix” this same air, why did it absorb only part of the common air, and stop? Priestley argued that the air was saturated with phlogiston; Lavoisier was attempting to understand how a part of the air was converted into fixed air. Adding fuel to the fire, an early experiment with minium, when combined with a volatile alkali, also produced an anomalous result. Unlike “fixed air,” which had a “prodigious affinity” for volatile alkali and would have combined, the air released from minium simply dissipated. The air combined in minium must therefore, noted Lavoisier, have been “the air of the atmosphere.”

It was Joseph Priestley who would provide the means to sort out these paradoxes, breaking out of the closed system, and setting Lavoisier on his merry way.

To be continued…

Living Chemistry – part III

MIND OVER MATTER

——————————

Letter from Antoine Lavoisier to Benjamin Franklin

Sir

We have set aside next Thursday, the 12th of the month, to repeat a few of the principal experiments of M. Priestley on different kinds of air. If you are interested in these experiments, we would think ourselves very honored to do them in your presence. We propose to begin at about one o’clock, and take them up again immediately after dinner. I sincerely hope that you can accept this invitation; we will have only M. Le Veillard, M. Brisson, and M. Beront – too large a number of people not being, in general, favorable to the success of experiments. I hope that you will be so good as to bring your grandson…

At the Arsenal 8 June 1777

——————————

Call freshly to mind Joseph Priestley’s discovery of the vital inter-relationship of animal and plant respiration. Consider the atmosphere itself as, in turn, a coupling of these living processes with non-living processes, and, with Lavoisier, reserve an important role for light and heat. Living processes, a relatively “weak force” in the empirical terms of mass, volume, etc., incorporate the apparently “strong” forces of the abiotic manifold, with its elements, compounds and energetic processes, for the development of the biosphere. Likewise for the noosphere’s relationship to both the biotic and abiotic manifolds, which we are here investigating.

As regards the state of knowledge of the biotic manifold in 1774, Lavoisier noted, “…[P]lant analysis is much less advanced than one believes. Ordinarily we completely destroy the composition of the plants…” Unfortunately, this sounds very modern!

By contrast, in this third part of this “Living Chemistry” pedagogical series (1), we will unfold Lavoisier’s discovery and exploration of the actual ‘well-tempered,’ harmonic domain of chemistry. We will follow Antoine Lavoisier, in dialogue with Joseph Priestley, as he utilizes a methodology of ‘inversion’ and ‘counterpoint,’ discovering thereby a rich treasure trove of anomalous singularities, and unfolding revolutionary new orderings and periodicities for mankind, in the development of the noosphere.

Respiration and Combustion

Let us now pick up an important thread in our story of chemical discovery. We had earlier noted that Joseph Priestley foreshadowed Vernadsky, in the way in which living processes, on a universal scale, engage the non-living. What about at the ‘micro’ level? Joseph Priestley ‘coupled’ the biotic and abiotic processes of respiration and combustion, while recognizing certain real differences in the behavior of respiration and, say, burning candles. Both actions produced ‘fixed air,’ and so, Priestley insisted, the two processes must both entail combustion.

In 1775, Antoine Lavoisier further noted,

“The respiration of animals is likewise only a removal of the matter of fire from common air [phlogiston – indicated combustion], and thus the air which leaves the lungs is in part in the state of fixed air…

“This way of viewing the air in respiration explains why only the animals which respire are warm, why the heat of the blood is always increased in proportion as the respiration is more rapid. Finally, perhaps, it would be able to lead us to glimpse the cause of the movement of animals.”

It was in the context of the simultaneous study of respiration, and of a half-formed hypothesis regarding the unseen relationship of heat to the “movement of animals,” that new discoveries regarding the equally invisible, ‘inorganic’ processes of calcination and reduction were proceeding.

Airs, Again

Recall that Joseph Black, in 1756, had determined that in respiration we exhale a specific type of air, which became known thereafter as “Black’s fixed air.” You can do a simple chemistry experiment, with a shallow bowl, a short candle stick in the center of a flat piece of cork, and a tall water glass. Fill the bowl with a quarter inch of water, float the cork with the lit candle, and carefully place the glass over the lit candle and cork. What happens, over time, to the water level in the glass? What happens to the candle? What happens if you then lift up the glass, without tipping it, and insert a new lit candle up under the glass? This is a simple example of the tests for the disengagement of Black’s “fixed air.” (It is also a simplified model of the pneumatic trough, and of the principle of the barometer.)

You will recall that, in the last pedagogical of this series, the careful measurement of these airs, initiated by Lavoisier, in the processes of (non-living) reduction and calcination, produced a wealth of (contradictory) new evidence. Unseen but indirectly measurable air was “fixed” and “disengaged” in still little-understood chemical processes. Respiration was being investigated as a crucial example of the processes at work. Now a paradox had arisen: other “airs” were being “fixed,” as we saw with the preliminary investigations of the air fixed in the calcination of lead. All of these airs had different properties, and were compared to the standard of the “common,” breathable air of our atmosphere.

In 1774, Joseph Priestley’s {Experiments and Observations on Different Kinds of Air} had been published in England. Soon, Lavoisier was studying this report with keen interest, in France. A feature of Priestley’s report was the development of a new measure of “the goodness of air.”

Following up some intriguing findings made by Stephen Hales, Priestley found that combining various metals in spirit of nitre [an acid; nitre as in saltpeter, an organically produced compound, used in making gunpowder and in meat preservation] generated a “red fume” of a gas, which Priestley named “nitrous air.” “Nitrous air,” when introduced into a glass bell suspended over, and slightly into, a trough of water (i.e. a pneumatic trough), caused the volume of air inside the bell glass to actually {shrink}, as measurable by a rising water level inside the bell glass! That is, the water level inside the glass bell was higher than the water level outside. Priestley was amazed that “a quantity of air…devours another kind of air…yet is so far from gaining any addition to its bulk, that it is considerably diminished by it.”

Mixing his nitrous air in various combinations with common air, Priestley found that the volume diminished by one-fifth of the original quantity of common air. Further, he found that this diminution only occurred with common air – that is, air known to be fit for respiration – and therefore was a rigorous means of testing the “goodness of air,” scaled according to the reduction in volume. He wrote in 1774,

“[T]hat on whatever account air is unfit for respiration, this same test is equally applicable. Thus there is not the least effervescence between nitrous and fixed air, or inflammable air, or any species of diminished air. Also the degree of diminution being from nothing at all to more than one third of the whole of any quantity of air, we are, by this means, in possession of a prodigiously large scale, by which we may distinguish very small degrees of difference in the goodness of air.”
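For the modern reader, the bookkeeping behind Priestley’s scale can be caricatured in a few lines of code. What follows is only a sketch of the arithmetic of the test, not Priestley’s procedure, and the sample volumes are invented for illustration:

    # A sketch of the arithmetic of Priestley's "nitrous air" test.
    # All volumes are invented; the test ranks an air by how much the
    # combined volume shrinks when nitrous air is admitted over water.

    def goodness(volume_before, volume_after):
        """Fraction of the volume 'devoured' -- Priestley's scale."""
        return (volume_before - volume_after) / volume_before

    samples = {
        "common air": (100.0, 80.0),            # shrinks by one-fifth
        "fixed air": (100.0, 100.0),            # "not the least effervescence"
        "dephlogisticated air": (100.0, 20.0),  # keeps absorbing nitrous air
    }

    for name, (before, after) in samples.items():
        print(name, "goodness =", goodness(before, after))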

A place has now been reached where we might remind ourselves of Lavoisier’s famous “axioms,” as discussed in the last pedagogical in this series. Especially that which is known today as the “law” of “the conservation of matter.”

Lavoisier’s ‘first’ “axiom” had been drafted out, in detail, in February, 1775. That axiom was that all matter can exist in solid, liquid or gaseous state, depending on temperature and pressure. (This axiom Lavoisier would later reduce to a “corollary” of his “caloric” hypothesis.) Let us focus on Lavoisier’s ‘second’ axiom, which emerges into view, in his notebooks, in 1775-1776. As first published in his {Elements of Chemistry}:

“We may lay it down as an incontestable axiom, that, in all the operations of art and nature, nothing is created; an equal quantity of matter exists both before and after the experiment; the quality and quantity of the elements remain precisely the same; and nothing takes place beyond changes and modifications in the combination of these elements.”

By 1775, Lavoisier and friends already possessed a virtual encyclopedia of various invisible “airs,” and compounds, calcinations, reductions, etc. However, such an ‘encyclopedia’ did not provide conceptual closure. Lavoisier’s notebook of this period shows that he was continually working to conceptualize a thoroughly consistent {lattice work}, starting from hypothesized first principles, attempting to order a growing body of closely observed phenomena and conceptual fragments. Our difficulties, dear reader, in following this story of scientific discovery, pale by comparison!

Fleet-Footed Mercury

The closer study of a liquid metal would turn out to be a key. It had been known since alchemical times that by heating liquid mercury one could convert it into a red powder, from which, by further heating, one could convert again to liquid mercury. A number of “physicians” – as scientists were called – were studying this anomalous substance, and the nature of the unseen processes involved. Was the red powder, so produced in the intermediary step, merely a new form of mercury, or was it a true calx, which was then “reduced” back to liquid mercury? (It is worth bearing in mind that the steps involved, in these mercury experiments, took a week or more of continuous heating, maintained around the clock, at stable, sustained temperatures!)

Joseph Priestley took up the anomalous behavior of mercury, from the vantage point of his mastery of techniques which isolated the invisible airs. What airs, Priestley asked, might be involved in the anomalous transformations of mercury? In October, 1774, Joseph Priestley revealed that he had recovered a “new air,” as he heated the red powder and transformed it back into liquid mercury. Lavoisier, intrigued, repeated Priestley’s experiment with the red powder mercury precipitate.

Priestley pushed ahead. Early in 1775, Priestley determined that he had actually produced a new “species of air” from {mercurius calcinatus per se}, under controlled conditions. Repeating again his earlier experiment, utilizing his pneumatic trough to capture the air recovered from heating the mercury calx, he applied his nitrous air test. Once again, he found that the air derived from the heating of the red precipitate was diminished by one-fifth of its original volume when a measured amount of nitrous air was added, just as common air was. On a whim, however, Priestley reports that he decided to add a second measure of nitrous air. To his surprise, the volume of air decreased further! More nitrous air was added. Applying other tests, such as the lit candle, Priestley discovered that his new species of air was “five or six times better than common air, for the purpose of respiration, inflammation, and I believe, every other use of common atmospheric air.” He termed this new air “dephlogisticated air.”

Learning of Priestley’s new findings in December, through an advance copy of portions of the second volume of Priestley’s {Experiments and Observations on Different Kinds of Air}, Lavoisier proceeded to replicate Priestley’s experiment once again. Lavoisier needed nitrous air, for testing the air.

For Priestley’s grand scale, nitrous air could be “easily” produced by dissolving mercury in nitrous acid. Lavoisier went right to work, heating the combination, deciding to collect the air given off, over time, as separate portions. At a certain point, the vapor began to turn reddish and he could see that some of the air was being absorbed, even as it was produced. Lavoisier realized that “common air or dephlogisticated air” – one or the other – was being given off, and he captured these, again over time, in separate glass cylinders.

Lavoisier was surprised. When he tested fractions six through nine, inserting the “nitrous air” which he had just otherwise produced, he noted that “This air was much better than that of the atmosphere…”, finding that prodigious amounts of nitrous air could be added. With the ninth fraction, he started with “four parts” of each air, and ended up adding a total of seven parts of nitrous air, while reducing the volume by 7/8. He thus confirmed that this ‘secondary air,’ produced while making nitrous air, was itself the “dephlogisticated air” of Priestley!

Now, it was Lavoisier who leaped {conceptually} ahead. It would be natural to infer that the air which the liquid mercury had originally absorbed in being heated and transformed into a calx (calcination) was identical to the “dephlogisticated air” which Priestley had found was produced when the red powder (calx) was converted (reduced) to liquid mercury. However, that simple explanation had proven wrong before, in earlier calcinations and reductions, especially involving charcoal. Conceptually, though, from the standpoint of his ‘conservation of matter’ hypothesis, Lavoisier should be able to ‘invert’ the process: If one assumes that “dephlogisticated air” was absorbed, out of the common air, in the heating process which produced the red powder (calx of mercury) in the first place, then adding dephlogisticated air to the portion of air remaining after calcination of liquid mercury should recompose the original common air. To five parts of the air remaining after the calcined mercury had absorbed one-sixth of the air, Lavoisier now added one part of the dephlogisticated air. The air then behaved exactly as ordinary air!

Consider: Lavoisier had carried out the decomposition and re-composition of the atmosphere.

Almost simultaneously, Lavoisier proceeded to prove that the process of creating nitrous air from nitrous acid could also be ‘inverted,’ this in a demonstration before the Academy. Nitrous air (to test the properties of airs) was produced by combining nitrous acid and mercury (a calcination). Lavoisier now combined measured amounts of nitrous air and “dephlogisticated air,” disengaged while heating the calx of mercury (a reduction), and re-composed predicted amounts of nitrous acid.

The very next page of Lavoisier’s notebook shows that he rushed next to decompose the atmosphere experimentally by respiration, and to recompose it with the “dephlogisticated air” derived from the reduction of the mercury calx, proving to himself “that respiration in absorbing air, renders a portion vitiated,” and that it can then be restored. Lavoisier, it might be said, had discovered, and now was exploring, the ‘well-tempered’ nature of God’s chemical domain!

Following this trail of discoveries and experiments, you might realize that Lavoisier (and Priestley) had identified that which Lavoisier would name oxygen, after its acidifying quality. More precisely, Lavoisier termed it, in 1780, “the oxygen principle,” first wishing to rigorously clarify what an element was – and was not. The reader can surmise that it is this “oxygen principle,” as an air, which is being absorbed in calcination, combustion, and respiration. Like Lavoisier, you are conceptualizing what you cannot see. Some of Lavoisier’s further work resulted in his discovery of azote, now termed nitrogen, which, together with oxygen, predominates in the earth’s atmosphere. The oxygen-carbon dioxide cycle and the nitrogen cycle are both essential to life.

Conceptually exploring chemical processes as occurring within an hypothesized harmonic domain allowed for the emergence of lawfully created dissonances, the basis for new (invisible) discoveries.(2) Apparent “elements,” including water, were discovered to be specific compounds, as measurable amounts of an alleged element disappeared on the ‘other side of the equation.’ Nor were all chemical processes so simply ‘inverted,’ as in those requiring a catalyst. Lavoisier, as can be seen with the mind’s eye, had to work very hard to be a “systemic thinker”!

To be continued …………….

(1) Part I of “Living Chemistry” appeared in the Friday, 10/12/01 briefing. Part II appeared in the Saturday, 10/19/01 briefing. They can otherwise be found as a1415BLZ001 and a1426BLZ001.

(2) The reader might be struck by a parallel to Bruce’s recent pedagogicals on Gauss, where what appear, in the form of natural numbers, as an open series, or, in the case of “powers,” as open, growing cycles, turn out to be periodic, closed cycles with respect to a modulus. From where does this periodicity arise?

Living Chemistry, Part IV

THE CHEMISTRY OF THE MIND*

“Lavoisier, the putative father of all the discoveries that are talked about; as he has no ideas of his own, he seizes those of others; but scarcely ever knowing how to appreciate them, he abandons them as lightly as he took them up, and changes his views as he changes his shoes…”

– M. Marat, from his pamphlet, {Modern Charlatans, or Letters on Academic Charlatanism, published by M. Marat, the friend of the People}, 1791

Last week we re-discovered the harmonic domain of chemistry, with Antoine Lavoisier. Lavoisier continued his work, despite extraordinary demands.

In 1783, Cavendish reported that the burning of “inflammable air” had produced water. Lavoisier, repeating the experiment and inverting the process, quickly determined that water was composed of “dephlogisticated air” and “inflammable air” – oxygen and hydrogen. “Inflammable air,” which we have mentioned only in passing, had already been isolated; it was the then-current name for hydrogen.

To shake off the cobwebs that so quickly occupy any unused corner of your mind, ask yourself: Did Lavoisier ever see or touch or hear these “airs”? We have to almost shake ourselves, to let go of these airs as “things,” and realize that they are rigorously proven {concepts}, the fruit of discoveries, not Sarpi’s [facts].

Antoine Lavoisier had never seen any of these “airs,” and he had only determined their existence indirectly.

Elementary?

So, what of these “elements”? A common chemistry textbook will credit Lavoisier with the discovery of nitrogen, and with producing the first table of elements, for his introduction to chemistry, {Elements of Chemistry}.

Here, Edgar Allan Poe’s character, Auguste Dupin, is required. Worthy of note is the easily overlooked fact that the English language title of Lavoisier’s textbook is itself misleading, as it implies to the casual reader that it is a book about {elements}. Compare the title in the original French, {Traité élémentaire de Chimie}, and you grasp the difference. So, how did Lavoisier {think} about what we today classify as elements? You may already have some ideas, from following Lavoisier on his voyage of discovery over the past weeks. Let us hear from Lavoisier himself, and compare our thinking to his. The following is from the preface to his {Traité}, as translated in the 1790 English language edition:

“It will, no doubt, be a matter of surprise, that in a treatise upon the elements of chemistry, there should be no chapter on the constituent and elementary parts of matter; but I shall take occasion, in this place, to remark, that the fondness of reducing all the bodies to three or four elements, proceeds from a prejudice which has descended to us from the Greek Philosophers…

“It is very remarkable, that, notwithstanding of the number of philosophical chemists who have supported the doctrine of the four elements, there is not one who has not been led by the evidence of facts to admit a greater number of elements into their theory…All these chemists were carried along by the influence of the genius of the age in which they lived, which contented itself with assertions without proofs; or, at least, often admitted as proofs the slightest degrees of probability, unsupported by that strictly rigorous analysis required by modern philosophy.

“All that can be said upon the number and nature of elements is, in my opinion, confined to discussions entirely of a metaphysical nature. The subject only furnishes us with indefinite problems, which may be solved in a thousand different ways, not one of which, in all probability, is consistent with nature. I shall therefore only add upon this subject, that if, by the term {elements}, we mean to express those simple and indivisible atoms of which matter is composed, it is extremely probable we know nothing at all about them; but, if we apply the term {elements}, or {principles of bodies}, to express our idea of the last point which analysis is capable of reaching, we must admit, as elements, all the substances into which we are capable, by any means, to reduce bodies by decomposition. Not that we are entitled to affirm, that these substances we consider as simple may not be compounded of two, or even of a greater number of principles; but, since these principles cannot be separated, or rather since we have not hitherto discovered the means of separating them, they act with regard to us as simple substances, and we ought never to suppose them compounded until experiment and observation has proved them to be so.”

Certainly a surprise! Note Lavoisier’s emphasis on an “element” being the “principle of bodies… which analysis is capable of reaching.” (Here we see the caution he had already expressed when he named “the oxygen principle,” as we pointed out in the last pedagogical.) Here, we have a concept of elements drawn methodologically from Leibniz’s “Monadology.” Certainly a healthy dose of “learned ignorance”! It can be more quickly agreed that Lavoisier’s conception of element is not the reductionist, “atomist” conception of matter, usually presented as a British (i.e. Venetian) bloodline of horses’ asses, running from Boyle, through Galileo, Hobbes, Bacon, Newton, and so forth. Do not read too much into his off-handed comment regarding discussions “of a metaphysical nature.” Metaphysics, in the sense of Socratic universal conceptions, is exactly what Lavoisier was all about!

Algebra and Heat

Let us return to the first of Lavoisier’s original axioms. By the time Lavoisier is writing his {Traité élémentaire}, his early axiom – that all matter can, in principle, be converted from one of the three states of matter to the others, by altering the relative heat and pressure – has been reduced to a “corollary” of his “matter of heat,” or “caloric.” He defined this caloric as “…a real and material substance, or very subtile fluid, which, insinuating itself between the particles of bodies, separates them from each other.”

It is often overlooked that Lavoisier’s conception of caloric, a form of the hypothesized “aether” entertained by the likes of Huygens and Mendeleyev, precluded a “blackboard” interpretation of his “law” of the Conservation of Matter. No Venetian double-entry bookkeeping here! Consider: Lavoisier’s “caloric” does not enter into his “equations” of chemical reactions! Indeed, here we see the flexibility of Lavoisier’s own ‘harmonic’ concept, which, among other things, duly noted the limits of his apparatus to measure exactly the phenomena that might be in question. It is often argued, by academics, that Lavoisier reached correct conclusions through erroneous results, as for example in the fermentation tables of his {Traité élémentaire}. Let us not bother with the details of their sniping. Let us rather quote Lavoisier, on his “algebraic” scientific method.

“I can regard the matter submitted to fermentation, and the result obtained after the fermentation, as an algebraic equation; and by considering each one of the elements of this equation successively as the unknown, I can deduce a value, and thereby correct the experiment through the calculation, and the calculation through the experiment. I have often profited by this method in order to correct the first results of my experiments, and to guide me in the precautions to take in order to repeat them.”

No blackboard mathematics here! His equations were not meant as {verification} of the principle that the material present before the operation is equal to the material afterward. Lavoisier was studying what he could not see, and often was measuring indirectly. Instead, the hypothesis is verified by his effectiveness in producing results.
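What Lavoisier describes can be sketched, in modern terms, as nothing more than solving a mass balance for one unknown. The figures below are invented for illustration; only the method – treat any one term of the ‘equation’ as the unknown, and let the conservation axiom determine it – is Lavoisier’s:

    # Lavoisier's fermentation 'equation,' caricatured: total mass in
    # equals total mass out, so any single term may be treated as the
    # unknown and deduced. All masses are hypothetical.

    inputs = {"sugar": 100.0, "water": 400.0, "ferment": 10.0}
    outputs = {"carbonic acid gas": 35.0, "spirit of wine": 58.0,
               "residue": None}  # treat this term as the unknown

    known = sum(v for v in outputs.values() if v is not None)
    outputs["residue"] = sum(inputs.values()) - known

    print(outputs)  # the calculation 'corrects' the experiment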

Now let us explore, in our final pedagogical, Lavoisier’s work on heat.

Respiration and Work

Already in 1776, Priestley had jumped ahead of Lavoisier with new evidence on the nature of the changes in blood. Coagulated sheep blood, he showed, became “black” and “red” as it was transferred back and forth between fixed air and dephlogisticated air. Priestley showed that he got a similar response when the blood was enclosed within a bladder which separated it from the air, demonstrating that the lungs, too, could communicate the phlogiston to the air through the membranes. Lavoisier, following up on these promising results, suddenly {discovered} that there were “two causes tangled in one” – that together with the absorption of a portion of the air, that air which had already served for respiration “approaches the state of fixed air.”

Let us quote from his memoir, co-credited to Seguin, presented late in 1789:

“Starting from acquired knowledge, and confining ourselves to simple ideas which everyone can readily grasp, we would say to begin with, in general that respiration is only a slow combustion of carbon and hydrogen, which is similar in every way to what takes place in a lamp or illuminated candle; and that from this point of view animals that respire are true combustible bodies which burn and consume themselves.

“In respiration, as in combustion, it is the air of the atmosphere which furnished the oxygen and the caloric; but in respiration, it is the very substance of the animal, it is the blood, which furnishes the combustible; if animals do not regularly replenish through nourishment what they lose by respiration, the lamp will soon lack its oil; and the animal will perish, as a lamp is extinguished when it lacks nourishment.

“The proofs of this identity between the effects of respiration and of combustion can be adduced immediately from experiments. In fact, the air which has served for respiration no longer contains the same quantity of oxygen when it leaves the lungs; it includes not only carbonic acid gas, but, in addition, much more water than it contained before being inspired. Now, since vital air can be converted into carbonic acid gas only by an addition of carbon; and it can be converted into water only by the addition of hydrogen; and this double combination can take place only if the vital air loses a part of its specific caloric; it follows from this that the effect of respiration is to extract from the blood a portion of carbon and of hydrogen, and to deposit in its place a portion of its specific caloric, which, during the circulation, is distributed to all parts of the animal economy, and maintains that nearly constant temperature which one observes in all animals that respire.”

It is impossible to deny the influence of Leibniz on the work of Lavoisier.

Lavoisier extended his research and experimentation on respiration, to develop the outlines of a concept of a work function, related to respiration, and thus to the atmosphere. Lavoisier, in 1790, posed two important postulates, based on detailed measurements taken during his collaborator’s physical exertions. (The drawings of the experiments survive and, like those done for the {Traité élémentaire}, were done by Madame Lavoisier.)

Lavoisier derived two important postulates: that the pulse rate increased in direct proportion to the total weight which a person lifted to a given height; and that the vital air consumed was directly proportional to the product of the pulse rate and the frequency of breathing, arguing that one could calculate the “weight lifted to a given height which would be equivalent to the sum of the efforts he has made.” This is so close to Leibniz’s concept of {vis viva} that it must give us pause. Antoine Lavoisier may have known Lazare Carnot, ten years his junior. Carnot’s interest in the subject of heat and its utilization in powering machinery was to last through his entire life. In 1783, Carnot had restated Leibniz’s concept of {vis viva} as “the moment of activity exerted by a force,” or MgH, where M = the total mass of a system, g = the force of gravity, and H = the height of rise or fall. This, it is reported, is the initial seed crystal for Carnot’s concept of “work.” Lavoisier at one point equated the “weight lifted to a given height” with the “sum of the efforts,” language closely resembling Carnot’s.
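As a rough modern gloss on Carnot’s “moment of activity,” MgH – and on Lavoisier’s “weight lifted to a given height” – the sketch below simply computes the quantity, with invented numbers; the proportionality to pulse and breathing is Lavoisier’s experimental claim, not something computed here:

    # Carnot's 'moment of activity,' M*g*H, with invented figures.
    # In modern units this is the work done against gravity.

    g = 9.81  # acceleration of gravity, m/s^2

    def moment_of_activity(mass_kg, height_m):
        """Weight lifted to a given height, expressed in joules."""
        return mass_kg * g * height_m

    # e.g., a 7 kg weight raised 0.5 m, two hundred times over:
    effort = 200 * moment_of_activity(7.0, 0.5)
    print(round(effort), "joules, as the 'sum of the efforts'")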

Addendum: The political life of Antoine Lavoisier

Marat, clearly on orders of Jeremy Bentham, made Lavoisier one of his first targets. We should not lose sight of Lavoisier’s nation-building efforts, as this is a necessary part of any pedagogical dealing with the driving force behind real discoveries in “the hard sciences.” Let Marat tell us about Lavoisier’s role, from his {Ami du Peuple} of January, 1791: “I denounce to you the Coryphaeus – the leader of the chorus – of charlatans, Sieur Lavoisier, son of a land-grabber, apprentice-chemist, pupil of the Genevan stockjobber [Necker], Farmer-General, Commissioner for Gunpowder and Saltpeter, Governor of the Discount Bank, Secretary to the King, Member of the Academy of Sciences…”

Lavoisier was a friend and collaborator of Bailly, and was a member of the ’89 Club (later supplanted by the Jacobin Club), with Monge, Bailly and others. As we have seen, he had been appointed to numerous national committees by the King, and continued to serve during Bailly’s period of leadership, including in the Treasury. While Bailly was Mayor of Paris, and the Marquis de Lafayette commanded the National Guard, Lavoisier not only continued to hold his crucial position on the Gunpowder Commission, which had the life-and-death responsibility of producing sufficient supplies of gunpowder for embattled France, but continued as the resident of the Arsenal, where he also continued his scientific research. It is recorded that, on one occasion, Bailly and his wife personally, physically intervened, to rescue Lavoisier and his wife from a threatening mob.

Midst the crisis of these times, Lavoisier presented a reasoned proposal for the reorganization of the national debt, in 1790, and presented to the National Assembly his long-prepared work, {The Territorial Wealth of the Realm of France}, to be the basis of a rational reorganization of the French tax system. In 1793, even after the execution of the King, Lavoisier presented to the National Convention a proposal for national education, with the aim of educating the whole nation, and all mankind. Lavoisier was executed for counter-revolutionary activity, due to his role in the tax farm system, on May 8, 1794, his body thrown into a nameless grave.

———-

Unused notes

Lavoisier now also had the basis for answering one of Priestley’s anomalous findings: Priestley had found that inflammable air could be made respirable by “continued agitation in a trough of water, deprived of its air.” He then ascertained that “this process has never failed to restore any kinds of noxious air on which I have tried it.” Priestley had asked: how could there be such a uniform outcome?

Black’s fixed air, which was released from most forms of reduction, we know today as the product of burning dephlogisticated air (oxygen) with carbon – carbon dioxide, as Lavoisier quickly determined. The phlogiston theory explained such burning by arguing that phlogiston, the “principle of inflammability,” existed in charcoal, alcohol, etc., and was released as heat and light.

Priestley’s “nitrous air” is nitric oxide. Lavoisier proceeded to isolate “the unbreathable part” of common air, which killed animals “on the spot,” and termed it azote. Azote was later renamed nitrogen, after its common association with nitre.

We can identify here aspects of a revolution in man’s knowledge – the transformation of the entire lattice work, based on the change of axioms – on which Lavoisier was working.

The Angular Determination Of The Great Pyramid

by Pierre Beaudry

In ancient Egypt, an astronomer once asked an architect: “If you were an astronomer, how would you start building an astronomical observatory, which would be perfectly in line with a meridian circle, from which one could observe and teach young people how to determine the transit of all of the stars in the heavens?” In the Morning Briefing of Sunday, January 25, 2004, Lyndon LaRouche answered that question by saying: “You’d build a deep pit, a deep well, and if the well is narrowly fixed, you can actually see stars during the daytime, and particularly in areas which are fairly arid. And that’s when a lot of astronomy was done. They had the nighttime sky, which they were able to survey this way, and also the daytime sky. Motions of the planets and so forth, they could see, in the dusk.”


The Unseen World Behind The Compass Needle

by Judy Hodgkiss

The great scientists of the 19th Century, at the inspiration of Alexander von Humboldt, coalesced around the work of the “Magnetischer Verein,” the Magnetic Union, globally coordinating their studies of the varied effects produced by the earth’s magnetic field. Two American presidents enthusiastically supported the effort. This grand project to comprehend the wondrous phenomena called terrestrial magnetism, or “geomagnetism” as it is known today, proved to be the science driver of that century, as the study of electrical phenomena had been for Ben Franklin’s era. But the Verein project died by the end of the century, and is waiting to be taken up again.

John Quincy Adams, in a debate in the Congress over the establishment of the Smithsonian Institution, argued that the promotion of geomagnetic science should be one of the Smithsonian’s primary goals:

“What an unknown world of mind is yet teeming in the womb of time, to be revealed in tracing the causes of the sympathy between the magnet and the pole–that unseen, immaterial spirit, which walks with us through the most entangled forests, over the most interminable wilderness, and across every region of the pathless deep, by day, by night, in the calm serene of a cloudless sky, and in the howling of the hurricane or the typhoon. Who can witness the movements of that tremulous needle, poised upon its center, still tending to the polar star, without feeling a thrill of amazement approaching to superstition?”

Later, President Abraham Lincoln spent many happy hours with America’s foremost scientist of the 19th Century, Joseph Henry, the first Secretary of the Smithsonian, participating in the geomagnetic studies and other experiments carried out by Henry at the Smithsonian, conveniently located near the White House.

Alexander von Humboldt, the world’s foremost naturalist, and a member of Friedrich Schiller’s circles in Germany, wrote of his best-selling book on his 1804 travels to Spanish America:

“The observations on the variations of terrestrial magnetism which I have carried out during a period of 32 years in America, Europe and Asia and with comparable instruments, cover in both hemispheres…a space of 188 degrees of longitude, from 60 degrees northern latitude to 12 degrees S. I have considered the law of the decrease of the magnetic forces from the pole to the equator as the most important result of my American journey.”

Between 1829 and 1834, a young Joseph Henry completed, with the aid of Prof. James Renwick of Columbia University and of Edward Sabine, the British naval captain who had discovered geomagnetic North in the Arctic, the first comprehensive magnetic survey of an American city, Albany, N.Y.; “comprehensive” meaning: documentation of the variation in declination, inclination, and intensity through the magnetic needle readings in the area, over time.

In 1834, Humboldt and Karl Gauss established the Magnetischer Verein, the Magnetic Union, to coordinate systematic studies of terrestrial magnetism globally. Three years later, the great-grandson of Ben Franklin, Alexander Dallas Bache, met with Gauss in Germany, and returned to the U.S. with the precision instruments Gauss had prepared for his use in the U.S.

So, what was all this hub-bub about?

And why did this hub-bub die out, as the above mentioned went to their graves, one by one?

The many and wondrous anomalies posed by the geomagnetic phenomena excite the most fundamental questions in the mind of the researcher. Reason enough for the oligarchy to wish to bury the subject.

In 1492, when Christopher Columbus crossed the Atlantic Ocean, somewhere near mid-way, he recorded that his ship’s compass passed from slightly west of true North (true North is the axis-of-rotation North), over to {east} of true North! Did he expect that to happen? I don’t know. There is a similar spot on the other side of the earth where the opposite occurs. Perhaps mariners in the Pacific knew of such phenomena, since China had had the use of the compass since at least 1000 AD.

This deviation of the compass needle from true North is called the “declination” of the compass; and not only does this declination go from East to West at that point noted by Columbus, but that very line of demarcation he crossed in the Atlantic itself shifts, wiggles and oscillates, while at the same time, over centuries, it noticeably migrates in a westerly direction, at an average of .04 degrees per year (about four degrees per century). At different longitudes around the globe, lines of equal declination from geomagnetic North to geomagnetic South do not conform with lines of longitude, and are often much further off from true North than just the few degrees experienced by Columbus (from coast to coast for him, something like 10 degrees west to 10 degrees east).

(See http://geomag.usgs.gov/movies.html for “movies” of such patterns, as presented by the U.S. Geological Survey. The movie for magnetic declination flashes by, decades at a time. You can click either the fully animated or the manually-controlled action movie.)

Also, there is a point on the globe midway between North and South where the compass needle will orient to the southern magnetic pole (the poles are not aligned exactly, but approximately, opposite to each other). Humboldt was the first to record this, in northern Peru.

In addition to the above, every million years or so, the north and south magnetic poles reverse themselves! Again, it was Humboldt who was the first to discover the “magnet fossils” demonstrating this, i.e., certain magnetized rocks, such as those in the German Fichtel Mountains and, later, those he found in the Peruvian Andes, which, when approached with the compass, demonstrate a magnetism in the reverse direction from other magnetized rock formations in the area. Such rocks were originally formed by molten lava, which solidified with an internal magnetism aligned at a time (compared to lava flows above or below it) when the magnetic poles had a reverse polarity.

Then there is the “inclination” of the needle, measured by a “dip” needle, where your compass is designed to move vertically, swinging up and down, instead of left to right. Held over either magnetic pole, the needle would swing to an extreme, vertical position, while at the equator it would rest horizontally. A map of this phenomenon shows the lines of equal inclination oscillating north-south over the years, but not wiggling and swirling as much as the lines of declination do.

(Again, see the action movie at the USGS website.)

But the geomagnetic phenomenon which demonstrates the most awesome anomalies is that found in the readings of the “intensity” of the magnetic needle. The needles used by Humboldt for these measurements were large, 1 to 2 feet long, suspended from a torsionless thread, and carried ivory scales on their two end faces. Joseph Henry described the method in his laboratory papers in 1833:

“Make a needle in the form of a tube, adjust glasses, suspend it by silk, and look up through the glasses as a telescope at a distant board placed at right angles to the magnetic meridian with divisions on it corresponding to the seconds and minutes of a degree. And in this way notice the variations of the needle daily and hourly.”

The number of oscillations of this free-floating needle over a given time then measures an array of irregular and regular variations in the “intensity” of the magnetic force. (Again, see the movie – the one called “total intensity,” since there are sequels called “horizontal intensity,” etc., but those are irrelevant for our purposes here. Also, take note that the lines of equal declination, inclination and intensity delineated in the movies – the “isogonic,” “isoclinic” and “isodynamic” lines – are concepts and terms invented by Humboldt.)
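The principle behind the counting can be stated compactly, though neither Humboldt nor Henry would have written it this way. For a torsion-free suspended needle of moment of inertia I and magnetic moment m, the period of small oscillations in a horizontal field of intensity B is T = 2*pi*sqrt(I/(m*B)); counting n oscillations in a fixed interval therefore yields a relative intensity proportional to n squared. A sketch, with invented counts:

    # Relative field intensity from oscillation counts: a sketch of the
    # principle, not Henry's procedure. Since T = 2*pi*sqrt(I/(m*B)),
    # the intensity B is proportional to n**2 for n oscillations counted
    # in a fixed interval. The counts below are invented.

    def relative_intensity(count, reference_count):
        """Intensity relative to a reference station, same needle."""
        return (count / reference_count) ** 2

    # 285 oscillations in ten minutes at one site vs. 300 at Albany:
    print(relative_intensity(285, 300))  # about 0.90 of Albany's intensity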

Since detailed records have been kept, over the last 150 years, there has been a significant decline in the intensity of the dipole magnetic field (besides the polar magnetism, there are other more complicated field patterns, but those can be sorted out from the main dipole field). In fact, the dipole field intensity is actually decreasing at a rate of about 8 percent per century! In a few thousand years it could go to zero (as measured in gauss or tesla units); or, it could reverse itself, and start to go back up at any time. The “magnetic fossil” record, which can capture declination, inclination and intensity evidence, indicates that this kind of fluctuation, even all the way to zero, may be a frequent occurrence (frequent, as measured in thousands of years). In fact, there are indications that somewhere around 4,000-5,000 B.C., the dipole field disappeared, perhaps for 1,000 years. One might then ask: Is it possible that a global maritime culture, at the time dependent on compass navigation for ocean crossings, might have literally lost its moorings? Thereby stranding what became the American Indian, etc., in outlying areas?
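To see why “a few thousand years” is only a loose bound, one can project the quoted 8-percent-per-century figure two ways; a back-of-the-envelope sketch, with the caveat that the fossil record itself shows the rate fluctuating, so neither curve is a prediction:

    # Projecting the quoted ~8%/century dipole decline, two ways.
    # Purely illustrative; nothing licenses extrapolating today's rate.
    import math

    rate = 0.08  # fractional decline per century

    centuries_to_zero = 1.0 / rate                    # straight-line decline
    half_life = math.log(0.5) / math.log(1.0 - rate)  # compound decline

    print("linear: zero in", centuries_to_zero, "centuries")          # 12.5
    print("compound: halves every", round(half_life, 1), "centuries")  # ~8.3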

These are the kinds of questions into which a new Magnetischer Verein must delve, with all the joy of a Humboldt, Gauss, Bache, or Henry–or a JQA or Abraham Lincoln.

Dirichlet and the Multiply-Connected History of Humans: The Mendelssohn Youth Movement

by David Shavin

When Lejeune Dirichlet, at 23 years of age, worked with Alexander von Humboldt in making microscopic measurements of the motions of a suspended bar-magnet in a specially-built hut in Abraham Mendelssohn’s garden, he could hear, nearby in the garden-house, the Mendelssohn youth movement working through the voicing of J. S. Bach’s “St. Matthew’s Passion.” Felix and Fanny Mendelssohn, 19 and 23, were the leaders of a group of sixteen friends that would meet every Saturday night in 1828 to explore this `dead’ work, unperformed since its debut a century earlier by Bach.[1]

The two simultaneous projects in the Mendelssohn garden at 3 Leipziger Strasse (in Berlin) are a beautiful example of Plato’s classical education necessary for the leaders of a republic: The astronomer’s eyes and the musician’s ears worked in counterpoint for the higher purpose of uniquely posing to the human mind {how the mind itself worked}. As described in the {Republic}, Book 7, the paradoxes of each `field’ – paradoxes (such as the ‘diabolus’) that, considered separately, tied up in knots the ‘professionals’ in each field – taken together would triangulate for the future statesman the type of problems uniquely designed to properly exercise the human mind. After all, such a mind would have to master more than astronomy and music, simply to bring before the mind the series of paradoxes, so as to be made capable of dealing with the much more complicated dealings of a human society. To oversimplify: since the mind does not come equipped with a training manual, the composer of the universe created the harmonies of the heavens and of music as, e.g., a mobile above a baby’s crib.

In that hut, Dirichlet would have been making microscopic measurements as part of making a geo-magnetic map of the earth. The audacity in thinking that these minuscule motions of the suspended bar-magnet could capture such unseen properties posed certain appropriate questions to Dirichlet. (Gauss’ geodetic surveying a decade earlier was paradigmatic of the sort of project that mined such riches out of the ostensibly simple affair, e.g., of determining where one actually was! But this also applies to locating oneself in the process of a proper daily political-intelligence briefing.) Similarly, the sixteen youth working to solve amongst themselves the complicated inter-relationships of Bach’s setting of the “Passion” story, as related by St. Matthew, would have forced their grappling with the scientific problem of ascertaining what our Maker would have in store for us, in their attempt to map their own souls. (Just for starters, regarding their `performance’ questions: How does Jesus intone what he says? How does the chorus/audience respond to Jesus, and sometimes to each other? etc.) The following historical sketch is offered as a few measurements, but instead of using a suspended magnetic bar, we’ll use a few years of Dirichlet’s life, and thereby try to triangulate some of the important characteristics for a map of the culture that created the world that, today, we are challenged to master.

Humboldts and Mendelssohns

Dirichlet’s patron, Alexander von Humboldt, along with his brother Wilhelm, had studied in the 1780’s with a host of pro-American Revolution leaders in Europe, notably including the Mendelssohns’ famous grandfather, Moses. (These particular studies can be investigated by reading Moses Mendelssohn’s Leibnizian work, {Morgenstunden}, or {Morning-Studies}, which describes the lessons that Moses gave to his son Joseph, and to the Humboldt youth.) Later, two of Moses’ sons, Joseph and Abraham, ran the Mendelssohn Bank, which financed many of Alexander von Humboldt’s scientific expeditions and projects. Abraham Mendelssohn, the father of Fanny, Felix, Rebecca and Paul, had set up, in his garden at 3 Leipziger Strasse, a special magnetically-neutral observation hut for Humboldt to measure minute magnetic fluctuations. Humboldt brought Dirichlet to Berlin in 1828, where he was one of the five or six who shared observational duties with Humboldt, in their mapping of the actual geo-magnetic shape and potential of the earth.

In 1827/8, Humboldt gave public lectures at the Singakademie Hall on physical geography – deliberately open to both men and women. Fanny Mendelssohn commented (in a letter to her friend Klingemann): “[T]he course is infinitely interesting. Gentlemen may laugh at us as much as they will; it is wonderful in this day and age for us to have an opportunity to hear something sensible, for once. I must further inform you that we are attending a second lecture series, given by a foreigner, on experimental physics. This course, too, is being attended mainly by women.” Humboldt’s presentations on his investigations of the earth were special public versions of his lecture-course at Berlin’s famous Friedrich Wilhelm University (established in the previous decade by his brother, Wilhelm von Humboldt).

Felix Mendelssohn attended the University at the same time that a collaborator of Humboldt at the University, Philipp August Boeckh, the great philologist, lived as a tenant in the Mendelssohn home. (Years later, Felix would compose music for the staging of Boeckh’s German translation of Sophocles’ play, “Antigone.”)

Humboldt also organized the Berlin scientific congress of August/September, 1828 – a conference that Metternich would find most dangerous. For the several weeks that Gauss stayed at Humboldt’s home, they could discuss the implications of the geodetic and geo-magnetic projects. Finally, the representative from England, Charles Babbage, the noted promoter of Leibniz’s analytic methods, found the conference to be historic, but found the highlight of Berlin to be the culturally-optimistic Mendelssohn household. It was at this time and in such circumstances that Dirichlet entered into the Mendelssohn youth movement.

The Mendelssohn Youth Movement

Fanny reported on the scene (in a 12/27/1828 letter to Klingemann): “Christmas-eve was most animated and pleasant. You know that in our house there must always be a sort of `jeune garde’ (‘young guard’), and the presence of my brothers and the constant flow of young life exercise an ever attractive influence. I must mention Dirichlet, professor of mathematics, a very handsome and amiable man, as full of fun and spirits as a student, and very learned.” Fanny’s sister, and Dirichlet’s future wife, Rebecca, was also at that Christmas party. We may assume that some or all of the sixteen-member `Saturday-night chorus’ were there.

Also in attendance was Fanny’s longtime love, Wilhelm Hensel, back in Berlin for two months now. He had just returned from five years of study of Renaissance art in Italy. Wilhelm, now 33, and a talented artist, had fought as a young man in the German Liberation Wars against Napoleon. Now, he had returned to Berlin to win Fanny as his wife (which somehow involved conquering Fanny’s mother, Lea). A month later, the engagement was announced.

Fanny also mentions three of the suitors of Rebecca (who would all lose out to Dirichlet):

* Professor Eduard Gans – “We see him very often, and he has a great friendship for Rebecca, upon whom he has even forced a Greek lesson, in which these two learned persons read Plato. It stands to reason that gossip will translate this Platonic union into a real one…” (Gans was the Jewish student of Hegel, covered in Steve Meyer’s “Fidelio” article on the Haskalah.) Gans had been active in Jewish causes early on, but he converted in 1825 so that he could become a professor.

* Johann Gustav Droysen, historian and philologist – Though only 19 years old, Fanny recognized in him “a pure, poetic spirit and a healthy amiable mind.” Droysen published a translation of Aeschylus and the famous work on Alexander the Great, both before he was twenty-five.

* Heinrich Heine, poet – “Heine is here… [H]is {Reisebilder} contain[s] delightful things; and though for ten times you may be inclined to despise him, the eleventh time you cannot help confessing that he is a poet, a true poet!” Once, he sent, via his close friend Droysen, his greetings to the 18-year-old Rebecca: “As for chubby Rebecka, yes, please greet her for me too, the dear child she is, so charming and kind, and every pound of her an angel.” It seems that Heinrich Heine’s brand of courtship of Rebecca was little different from his treatment of everything else in life.

“St. Matthew’s Passion”

Now picture Dirichlet in the observation hut in the garden at 3 Leipziger Strasse. Close by is the summer house, where Felix and Fanny worked out, with four hands at the piano, the voicing and composition of Bach’s “St. Matthew’s Passion” – left unperformed since Bach premiered it in 1729. In January, 1829, soon after Dirichlet had arrived on the scene in the Mendelssohn youth movement, Eduard Devrient and Felix Mendelssohn decided upon an historic March public performance, despite the discouragement of the music authorities. They knew that they had to defy the professional advice. As described years later by Fanny’s son, the appropriately-named Sebastian Hensel: “Only just then the most intelligent musical people began to comprehend that something must be done to bring this treasure to daylight, and that this was in a musical point of view the greatest task of the period.”

After hiring a hall, and with a performance six weeks away, the chorus swelled from 16 to 400 members, and the initial group had the ‘Monge brigade’ project of rapidly educating all the newcomers. Fanny described this rare and sublime process: “People were speechless with admiration, and faces grew long with astonishment at the idea that such a work could have existed unbeknownst to them… Once they grasped that fact, they began studying the work with warm and veritable interest. The enthusiasm of the singers, from the first rehearsal on – how they poured their heart and soul into the work; how everyone’s love of this music and pleasure in performing it grew with each rehearsal – kept renewing the general wonder and astonishment.” This process created “so lively and detailed an interest that all the tickets were sold the day after the announcement of the concert, and they had to refuse entrance to more than a thousand people… [At the concert itself,] I was sitting in the corner [of the massive chorus] so as to see Felix well, and I had arranged the strongest alto voices near me. The choruses were impassioned with extraordinary strength tempered with a touching tenderness, as I had never heard them before… [A] peculiar spirit and general higher interest pervaded the concert, that everybody did his duty to the utmost of his powers, and many did more…”

And, after the sublime, the ridiculous! At least one Berliner seemed to remain untouched: After the concert, at a celebratory dinner, Devrient’s wife, Therese, sat between Felix and an obnoxious professor, who kept trying to get her drunk: “He clutched my wide lace sleeve in an unrelenting grip… to protect it, he said! And would every so often turn toward me; in short, he so plagued me with his gallantries that I leaned over to Felix and asked: `Tell me, who is this idiot beside me?’ Felix held his handkerchief over his mouth for a moment – then he whispered: ‘The idiot beside you is the celebrated philosopher Hegel!'”

Such were the circumstances of Dirichlet’s first year in Berlin. By 1831, Dirichlet and Rebecca Mendelssohn were engaged, and by 1832, married. They were considered to be, in the extended Mendelssohn family discussions and debates, the most revolutionary. The couple had four children. Rebecca died late in 1858, age 47 (evidently of a similar type of stroke as had felled her older sister, 43, and brother, 39, a decade earlier). Dirichlet’s compromised health declined further, and he followed her to the grave five months later, May 5, 1859.

Dirichlet’s Republican Background and LaFayette’s July 1830 Revolution

As a youth of 17, Dirichlet was studying Gauss’ {Disquisitiones Arithmeticae} when he was sent to study in Paris. According to his nephew, Sebastian Hensel, Dirichlet was introduced there to General Foy by a republican associate of Dirichlet’s parents, one Larchet de Charmont.[2] Foy employed Dirichlet as a tutor in his household from the summer of 1823 until Foy’s death in November, 1825. Foy was in France’s chamber of deputies, and was the leader of the opposition to the royalist restoration of the 1815 Congress of Vienna. Dirichlet thrived in this environment: “… [I]t was very important for his whole life that General Foy’s house – frequented by the first notabilities in art and science as well as by the most illustrious members of the chambers – gave him an opportunity of looking on life in a larger field, and of hearing the great political questions discussed that led to the July Revolution of 1830, and created in him such a vivid interest.” (Hensel’s {The Mendelssohn Family}, Vol. I, page 312.)

The July Revolution of 1830 was led by LaFayette, and was at best a mixed affair. It overthrew the reactionary arrangements of the Congress of Vienna, and set up a tenuous arrangement whereby Louis Philippe, the “Citizen King,” would be a constitutional monarch. LaFayette gambled that this might work, as the “Citizen King” had pledged to be subservient to the written constitution. Two items of note reflect Foy’s connections to the 1830 Revolution: In October, 1825, a few weeks before his death, Foy had troubled himself to write to LaFayette; and in 1823, Foy had sent from his care the 21-year-old Alexandre Dumas (three years Dirichlet’s senior) to be Foy’s agent in the household of Louis Philippe. Later, in 1830, Dumas would serve as a captain in LaFayette’s National Guard. Dumas had sought Foy’s guidance, as Foy himself had earlier, in the 1790’s, looked to Dumas’ father, General Alexandre Davy Dumas, as his military and political leader. General Dumas was first a hero of the French army, who then became an early opponent of Napoleon’s imperial ambitions. He was part of the 1798 invasion of Egypt, but was imprisoned by Napoleon from 1799 to 1801 for publicly opposing Napoleon’s imperial turn. (Similarly, Beethoven at this time also had hopes for Napoleon that he quickly recognized were greatly mistaken.) After the imprisonment, Napoleon’s harsh treatment of General Dumas led to his early death at age 44 in 1806, leaving behind his four-year-old son.

After Foy died in November, 1825, there was a competition between Alexander von Humboldt and Fourier for Dirichlet’s services. Fourier, according to Hensel, “tried to avail himself of Larchet de Charmont’s influence, to induce him [Dirichlet] to return to Paris, where he felt sure it was his vocation to occupy a high position at the Academy.” Humboldt arranged for Dirichlet, then 21, to teach at Breslau, 1826-1828, and then brought him to Berlin in 1828, where he was the professor of Mathematics at the Berlin Military Academy, and where he joined the Mendelssohn youth movement.

LaFayette, Dumas, Galois, Poe and Heine

Alexander von Humboldt returned to Paris in 1830 because of the ripened political situation. Cauchy – the Emperor of mathematics – had to flee Paris in July, 1830, when his King was deposed. For a short period, LaFayette thought that they could control the new “Citizen King.” However, within a few months the financiers moved in and gained the upper hand in running the king, Louis Philippe. In December, 1830, they succeeded in the arrest of the nineteen leaders of LaFayette’s republican National Guard, the key defenders of the constitution. LaFayette testified at the March, 1831 trial; and the jury found them all not guilty.

At the celebratory dinner for the “19” were, among others, LaFayette, Dumas and another brilliant student of Gauss’s work, Evariste Galois. (The latter had been, along with Niels Abel, a victim of Cauchy’s ham-handed skullduggery at the head of the French Academy of Science.) At the dinner, Galois evidently made a notorious toast to Louis Philippe’s health, while putting his other hand on his sword, and adding that the king had better not fail in his duty to the constitution. Dumas reports that at that point, several of the attendees, including himself, jumped out of the windows of the hall, fearing, accurately, that the spies at the event would bring the police.[3] Galois was arrested, tried and released, when the jury refused to convict him.

He was re-arrested that summer, 1831, by the police prefect, Gisquet, for wearing a republican guard uniform in public. Gisquet avoided the path of the previously unsuccessful trials, and instead kept him in jail with no trial until the next spring – when his release, and the setup of his fatal `duel’, fell hard one upon the other. When Galois’ suspicious death roused a crowd to come to his funeral, and a public accounting was threatened, Gisquet carried out, the night before the funeral, pre-emptive arrests of Galois’ friends.

Which of these events in Galois’ last year, 1831/2, were witnessed by Edgar Allan Poe is unclear, but clearly Poe’s “The Purloined Letter” skewers Gisquet (the “prefect G-”), and, by inference, celebrates the “poet-mathematician,” Galois. While Poe does also refer, and explicitly so, to the mathematician, Charles Auguste Dupin (the historical figure that was, literally, a member of the Monge brigade, having been taught by Monge), Poe’s “poet-mathematician” image does not need to be ‘reduced’ to one individual. However, the politically-sensitive case of Galois at the time of Poe’s visit to Paris, and the reference to the “prefect G-”, make it clear that the Galois case would have been understood by astute readers of Poe’s time. Regardless, Poe’s “poet-mathematician” image would appropriately apply to any of the leading (1820’s) students of Gauss: Galois, Abel or Dirichlet. Poe’s “poet-mathematician” would have been fully at home in the Mendelssohn garden at 3 Leipziger Strasse. Finally, Heine, upon the news of the July Revolution, decided to leave Berlin for Paris. He would have been there, with Alexander von Humboldt, during these events. His early work in Paris during this period may be examined in his {The Romantic School}, where he diagnosed, for the French and the Germans, the evil medievalism of the cultural string-pullers that had deliberately set out to murder the Germany of Moses Mendelssohn, Lessing and Schiller. No successful European revolution could proceed without dealing with these skeletons. And none did.

The rapid sketch, above, is only a beginning suggestion as to the interplay of: Gauss’s {Disquisitiones Arithmeticae}; the healthy benefits of opposing evil (e.g., the imperial beastman, Napoleon); the children and grandchildren both of Moses Mendelssohn and of the American Revolution in Europe; and the passion of magnetic measurements and the revival of Bach’s “St. Matthew’s Passion.” Much more can, and should, be covered in this specific period, regarding the activities of J. F. Cooper, J. Q. Adams, LaFayette, Friedrich List, Poe, etc. But this abbreviated historic sketch, centered around Dirichlet, should take us back to the Gauss/Dirichlet/Riemann dialogue somewhat refreshed.


[1] J.S. Bach had composed and performed this work in Leipzig, 1729. The manuscript was given to Felix by his aunt Sarah Itzig Levy, a proponent of Bach. (Otherwise, one could say that it was fortunate that Felix Mendelssohn had exactly sixteen friends to cover the four quartets of soprano/alto/tenor/bass – but it were more likely that the orbit defined the planet; that is, the Bach project cemented the potential friendships.)

[2] Larchet is unknown to this author. Since it is thought that Dirichlet’s parents were active republicans who had to leave Napoleonic France years before, and since Larchet de Charmont was a friend both of Foy and of Dirichlet’s parents, it were likely that they shared their anti-Napoleonic republicanism.

[3] Recall that Dumas is also the one who made the knowing allusion, as part of Dumas’ typically ‘factitious’ fiction, to Poe’s stay in Paris. This is the reference that Allen Salisbury reported on years ago in his “Campaigner” article on Poe.

Understanding Nuclear Power, #3

THE DISCOVERY OF RADIOACTIVITY AND THE TRANSMUTATION OF THE ELEMENTS

by Larry Hecht May 12, 2006

[Figures available at www.wlym.com/~bruce/radioactive.zip]

The discovery of radioactivity and its properties in the period from 1896-1903 created a crisis in physical chemistry. The phenomena seemed to challenge several fundamental axioms of science. These were (1) Carnot’s principle describing the relationship of heat and work, and (2) the principle, which had guided all chemical investigations since Lavoisier, that no new element was created or destroyed in a chemical transformation–a principle sometimes known as the indestructibility of matter. In the usual textbook approach, these paradoxes are passed over quickly, and the problems “solved” by the modern theory of radioactive decay and nuclear transformation. It is much more fun to look at the real papers from the period, to puzzle over the mystery, and work through the process of hypothesis formation and experiment by which the paradoxes are resolved. That is the only way to get any real understanding of what nuclear science is about. Here we will try to summarize some of the basic material which is to be mastered.

In the French scientific journal {Comptes Rendus} of December 1898, a note co-authored by Pierre and Marie Curie and G. Bemont describes the properties of a new and strongly radioactive substance extracted from the ore of pitchblende. The new substance possessed many properties analogous to those of barium, and the team had made considerable effort to be sure it was not some unique form of the element barium. They called this new substance {radium.} In an earlier note the same year, this team of collaborators had described another radioactive substance separated from the same ore, this one sharing similar properties with the metal bismuth. They called it {polonium.} [fn 1]

“Radioactivity” had been discovered just two years earlier by Henri Becquerel. The curious emissions from uranium ore which he discovered, while looking for something else, were first called Becquerel rays. Marie Curie first used the term “radioactivity” in 1898 when she discovered that minerals containing the element thorium also showed these properties. Becquerel had been studying phosphorescence, a property of certain materials which glow in the dark after exposure to light. He had been curious whether the phenomenon of phosphorescence might in some way be related to the peculiar x-rays which had just been discovered in 1895. As these curious things come up again in our story we will pause here to briefly explain them.

X-rays were first discovered in a simple apparatus called a Geissler or cathode ray tube. A tube of glass is formed with a metal electrode inserted into each end. The air in the tube is pumped out by a vacuum pump, until only a small amount remains inside; or, other gases are introduced in very small amounts. When a voltage is applied across the electrodes, the interior of the tube begins to glow, its color dependent on the gas contained. The neon lights in signs are a familiar example of such a device. The behavior of gases in apparatus such as these had been under study since the 1840s by Auguste de la Rive, a collaborator of Ampere. Studies of the tubes were made in Germany in the 1850s, and they received the name Geissler tubes after the Bonn instrument maker Johann Geissler. The alternate name of cathode ray tube came about after Eugen Goldstein discovered in 1876 that a faint ray could be seen propagating from the negative electrode (cathode) to the positive anode. With a high voltage, it was noticed that the glass of the tube also develops a glow. Experimenting with such devices in 1895, Wilhelm Conrad Roentgen observed something really unusual. A faint green light which developed at the wall of his tube was passing through nearby materials, including paper, a book, and some wood. As he tried putting other materials in front of the tube, he saw the bones of his hand projected on the wall! He described the phenomenon in a paper in 1896, calling them “Radiation X,” or X-rays. They are also known as Roentgen-rays.

– Radioactivity –

Reports of this exciting discovery spread quickly, and Becquerel wondered if the phenomenon of phosphorescence he was investigating might be related to this radiation X. One of his experiments had been to place each of the mineral samples which showed phosphorescence over a photographic plate wrapped in black paper and left in the dark. All results were negative, until he tried minerals containing the element uranium. (The element uranium had been discovered by Martin Klaproth in 1789 in ores containing the mineral pitchblende; at the time it was primarily used as an additive in the glassmaking process for giving color to glass.) Becquerel’s uranium samples caused the photographic plate to darken. The darkening occurred even if the uranium had not been previously exposed to light, so it was clear the phenomenon was not due to phosphorescence. The radiation was passing through black paper and exposing the photographic plate. Perhaps it was the radiation X?

Pierre and Marie Curie soon began experiments with samples of uranium ore, most of them obtained from mines in Bohemia, then part of Austria. While still supposing that the effect might be due to the radiation X, their work led to the discovery of a very important anomaly. The work began with the creation of a device for measuring the activity of the sample more accurately than could be done with a photographic plate. It had been found that these substances had the property of making the air around them conductive. To measure how much, the sample was ground into a powder and placed on the lower of two parallel metal plates (B). (See Figure 1). This plate was attached to a set of batteries producing a potential usually around 50 or 100 volts. The upper plate (A) was attached through a switch to ground. A radioactive substance would cause the air to become conductive, allowing a current to flow through plate A to ground when the switch was closed. When the switch was opened, the upper plate developed a charge whose value could be determined by the electrometer (E in the figure). The quantity of charge produced was considered a measure of the radioactivity of the substance. A device developed by Pierre Curie from his studies of the piezoelectric properties of crystals, the quartz piezoelectric balance, greatly improved the accuracy of the electrometer. (See Denise Ham’s article in {21st Century,} Winter 2002.)
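
To make the measurement concrete: the activity was taken as proportional to the rate at which charge accumulated on the insulated plate, that is, to the ionization current. Here is a minimal sketch of that arithmetic in Python; the numbers are purely illustrative, not the Curies’ data.

```python
# Activity, in this method, is read off as an ionization current:
# the charge collected on the upper plate divided by the collection time.
def ionization_current(charge_coulombs, seconds):
    """Return the mean current in amperes, a proxy for the sample's activity."""
    return charge_coulombs / seconds

# Illustrative values only: 2.0e-9 coulombs collected over 100 seconds
# corresponds to a 2.0e-11 ampere current.
print(ionization_current(2.0e-9, 100.0))
```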

Being accomplished chemists, the Curies tried experiments to remove the uranium from the pitchblende ore. By subjecting samples of the ore to acid, they could cause much of the uranium to precipitate out as a salt. When these samples of ore with much of the uranium removed were placed in the measuring device a remarkable thing happened. They showed more radioactivity than the ore samples containing uranium. The Curies then isolated pure uranium metal from the ore and compared its activity. The ore samples they had from several Austrian mines showed a radioactivity three to four times greater than the pure uranium. They became convinced that a new element, many times more active than uranium, must be present in the ore. They began a process of chemical separation. Aided by their precision device for measuring radioactivity, they were able to separate out the portions of the ore which showed greater radioactivity. By June 1898, they had separated a substance with 300 times the radioactivity of uranium. They supposed they had found a new element which they named {polonium,} after Marie Sklodowska Curie’s embattled Poland. There was still some doubt as to whether it was an element. It had not been isolated yet, but always appeared with the already known element bismuth. By December of 1898, the Curies had separated another product from the Bohemian ores which showed strong radioactive properties. This one appeared in combination with the known element barium, and behaved chemically much like barium. Again it had not been isolated in a pure form, and there was uncertainty as to whether it was a distinct element. Spectral analysis showed mostly the spectral lines characteristic of barium, but their friend, the skilled spectroscopist Demarcay, had detected a very faint indication of another line not seen before. [fn. 2] On the basis of the chemical and spectral evidence and the power of its radioactivity, the Curies supposed it to be a new element, which fit in the empty space in the second column (Group II) of Mendeleyev’s periodic table, below barium. They named it {radium.}

The Curies now dedicated themselves to obtaining pure samples of these new elements. It took four years of dedicated labor, working heroically under extremely difficult conditions, to isolate the first sample of pure radium. Polonium proved more difficult. While they were engaged in this effort, research was under way in other locations, sparked by the earlier papers of Becquerel and the Curies’ announcement of two new radioactive elements.

One of the most important lines of development led to the discovery that there was more than one type of radiation coming from the radioactive substances. Becquerel had already reported from his early experiments with uranium that he suspected this to be the case. In 1898 Ernest Rutherford, a young New Zealander working at the Cavendish Laboratory in England, used an apparatus based on the Curies’ radiation detector to examine the radiation from uranium in a slightly different way. He placed powdered uranium compounds on the lower metallic plate of the Curie apparatus described above, and covered it with layers of aluminum or other metal foils. It was found that most of the radiation, as measured by the charge collected on the upper plate, was stopped by a single thin layer of foil. But some of it got through and was only stopped after a considerable number of layers had been added. The conclusion, already suggested by earlier work of Becquerel, was that there were at least two different types of radiation, to which Rutherford gave the name {alpha rays} for the less penetrating, and {beta rays} for those which were stopped only by more layers of foil.
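
The shape of such absorption data can be imitated with a toy model: suppose (the numbers here are invented for illustration, not Rutherford’s) that most of the ionization is due to rays stopped by the first foil, while the rest is attenuated by a constant fraction per foil. A sketch in Python:

```python
# Toy model of the foil experiment. Assumed, illustrative numbers:
# alpha rays carry most of the ionization but are stopped by the first foil;
# beta rays lose a constant fraction of their intensity at each foil.
ALPHA_SHARE = 0.9         # assumed fraction of ionization due to alpha rays
BETA_PASS_PER_FOIL = 0.5  # assumed fraction of beta intensity passing one foil

def measured_activity(n_foils):
    alpha = ALPHA_SHARE if n_foils == 0 else 0.0
    beta = (1.0 - ALPHA_SHARE) * BETA_PASS_PER_FOIL ** n_foils
    return alpha + beta

for n in range(6):
    print(n, "foils:", round(measured_activity(n), 4))
# The sharp drop after one foil, followed by a slow decline, imitates the
# two-component (alpha plus beta) behavior described in the text.
```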

In 1899, three different groups of experimenters (Becquerel in France; Stefan Meyer and E. von Schweidler in Austria; and Friedrich Giesel in Germany) found that the radioactive radiations could be deflected by a magnetic field. A sample of the substance was placed in a lead container with a narrow mouth, so that radiation could only escape in one direction. The container was placed between the poles of a powerful electromagnet, and it was found that the emerging radiation was curving in the same direction as had been observed with the cathode rays mentioned above (Figure 3). It had been recently demonstrated that these cathode rays were electrical particles of negative charge, to which G. Johnstone Stoney had given the name {electron.} Thus, it was supposed that radioactive substances were probably giving off electrons.

More careful experiments by Pierre and Marie Curie in 1900 showed that only a part of the radiation was deflected by the magnet. Marie Curie then showed that the undeflected part of the radiation had a lesser penetrating power. It was thus likely that the rays which behaved like electrons were what Rutherford had named beta radiation, and the other part the so-called alpha radiation. It was to take a few more years before these were identified. Under a stronger magnetic field, the alpha rays could be deflected by a smaller amount, in the direction opposite to that of the beta rays, indicating that they were more massive and positively charged.

A laboratory anecdote recounted by Marie Curie in her doctoral thesis provides a striking illustration of the identity of the radiation from radium with electricity. In preparation for opening a sealed glass vial containing a solution of radium salt, Pierre scored a circle around the glass vial with a glass cutter. He immediately received a considerable shock. The sharp edge made by the glass cutter had permitted the sudden discharging of the electrical charge accumulated on the container, according to a simple principle which readers of Benjamin Franklin’s writings on the lightning rod will recognize. [fn. 3]

– Induced Radioactivity and Transmutation –

One other paradoxical phenomenon first observed by the Curies is important to the next step in the understanding of radioactivity. In their work with radium, the Curies had noted that every substance which remained for a time in the vicinity of a radium salt (usually radium chloride) became radioactive. The radioactivity disappeared some time after the substance was removed from the presence of the radium. They called this new phenomenon {induced radioactivity.} Careful studies of the rate of decay of the radioactivity showed that it declined according to an asymptotic law. The effect was independent of the substance put in the vicinity of the radium; glass, paper and metals all acquired the same degree of induced radioactivity. The induced radioactivity was greater in closed spaces, and could even be communicated to a substance through narrow capillary tubes. The air or other gas surrounding the radium was found to be radioactive, and if captured and isolated it would remain active for some time.
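
In modern notation (a reconstruction, not the Curies’ own), the “asymptotic law” of the decline is the exponential decay law, with a decay constant characteristic of the active deposit:

$$ A(t) = A_0 e^{-\lambda t}, \qquad T_{1/2} = \frac{\ln 2}{\lambda}, $$

where the half-life T_{1/2} is the time in which the induced activity falls to half its initial value.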

Many things suggested that the induced radioactivity might be due to a new gaseous element. But the Curies carried out spectral analysis of the gas found around radium, and found no evidence of the presence of a new element. A peculiar experiment carried out in 1900 by a very peculiar English scientist, Sir William Crookes, set the stage for the next big step in the understanding of radioactivity. Crookes added ammonium carbonate to a solution of uranium nitrate in water, causing a precipitate to form and then redissolve, leaving a small quantity of a residue which resembled a tuft of wool. He found the residue to be very radioactive, as determined by its effect on a photographic plate, while the remaining solution was virtually inactive. Crookes concluded that this new substance, which he gave the name uranium X, was the radioactive component of uranium, and that Becquerel and the Curies were mistaken in supposing that radioactivity was an inherent property of the element uranium.

Becquerel tried a similar experiment, precipitating barium sulfate from a solution of uranium. He found that the barium sulfate precipitate was radioactive, while the solution, which still contained all of the uranium, was not. However, he could not accept Crookes’ conclusion, arguing that “the fact that the radioactivity of a given salt of uranium obtained commercially, is the same, irrespective of the source of the metal, or of the treatment it has previously undergone, makes the hypothesis not very probable. Since the radioactivity can be decreased it must be concluded that in time the salts of uranium recover their activity.” [fn 4]

To prove his supposition that the uranium would recover its activity, Becquerel set aside some of the inactive uranium solution and its radioactive barium sulfate precipitate for a period of 18 months. Late in 1901, he found that the uranium had completely regained its activity, whereas the barium sulfate precipitate had become completely inactive. Becquerel wrote: “The loss of activity … shows that the barium has not removed the essentially active and permanent part of the uranium. This fact constitutes, then, a strong presumption in favor of the existence of an activity peculiar to uranium, although it is not proved that the metal be not intimately united with another very active product.” [fn 5]

Relocated to McGill University in Montreal, Ernest Rutherford, working with the young Oxford chemist Frederick Soddy, took the next crucial step in resolving the paradox. Instead of uranium X, they created a radioactive residue from a precipitate of thorium which they called thorium X. Like Crookes’ uranium X, the residue showed all the radioactivity, whereas the thorium which remained in solution appeared inactive. But the activity of the substances was such that after only a few days they observed what Becquerel had seen after 18 months. The thorium X lost some of its radioactivity, while the thorium from which it had been obtained, which was kept a considerable distance away, regained some of its activity. A quantitative study of the rate of decay and recovery of the activity by the two substances showed that the rates of decay and recovery were the same, about one month. The famous chart depicting their relative activity is pictured in Figure 4. Rutherford and Soddy repeated the observations using uranium X, and found the same effect occurring over a longer time span, about six months. These observations were considered together with the anomalous phenomenon of induced radioactivity discovered by the Curies. Rutherford had carried out his own investigations and concluded in 1900 that the induced radioactivity was due to a radioactive gas, which he called an emanation. The work with thorium X showed evidence of an emanation, which we know today as the radioactive gas radon.
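
The complementarity Rutherford and Soddy measured can be stated compactly in modern terms (again a reconstruction, not their notation): if the separated thorium X decays exponentially, the thorium left behind recovers by exactly the complementary curve, so that the total activity is conserved:

$$ A_{\mathrm{ThX}}(t) = A_0 e^{-\lambda t}, \qquad A_{\mathrm{Th}}(t) = A_0\left(1 - e^{-\lambda t}\right), \qquad A_{\mathrm{ThX}}(t) + A_{\mathrm{Th}}(t) = A_0. $$

This is precisely the content of the crossing curves in Figure 4.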

Rutherford and Soddy now drew a radical conclusion from these results. They posited that the atoms of the radioactive elements were undergoing a spontaneous disintegration. By the emission of an alpha or beta particle they were changing to form a new element, and they posited that this process continues in a series, at a different rate for each step. They summarized the viewpoint in the introduction to their first paper on the subject, in 1902:

“Radioactivity is shown to be accompanied by chemical changes in which new types of matter are being continuously produced. These reaction products are at first radioactive, the activity diminishing regularly from the moment of formation. Their continuous production maintains the radioactivity of the matter producing them at a definite equilibrium-value. The conclusion is drawn that these chemical changes must be sub-atomic in character.” [fn 6]

As later developments were to show, Rutherford and Soddy were fully correct in their general statements, even if some of the details required further elaboration. It could be argued, as the Curies and Becquerel did, that there was not sufficient evidence to support the hypothesis with certainty when put forward in 1902. I am not sure at what point they became fully convinced. In 1903, when the Curies and Henri Becquerel gave their Nobel prize acceptance speeches, they were still cautious about the Rutherford-Soddy hypothesis. One reason for the caution was that chemistry since the time of Lavoisier had relied on the assumption of the stability of the elements. Transmutation was associated with the unscientific practices of alchemy. An assumption underlying all of Lavoisier’s experiments was that in the course of a chemical reaction, the weight and elemental identity of the products would not change. Mendeleyev underlined this point in the preface to the Seventh Russian edition of his textbook {Principles of Chemistry,} written in St. Petersburg in November 1902. By the dating, one suspects that Mendeleyev may have been adding his voice to the skepticism concerning the Rutherford-Soddy hypothesis. [fn 7]

Today it is well understood that the radioactive elements uranium, thorium, and plutonium pass through a decay series by which they are transformed successively down the periodic table until arriving at a stable form of lead (atomic number 82). There are four known decay series: those of uranium-238, uranium-235, thorium, and plutonium. Without any interference by man, all of the elements above lead are continuously undergoing such transmutation in the Earth. Elements such as radium, polonium and radon are steps on this path, appearing temporarily and then decaying to pass over on to other elements.

In 1903 Soddy with William Ramsay established the identity of the alpha particle with helium. Later the alpha particle was understood to be the ionized (positively charged) nucleus of helium with its two electrons stripped off. As we understand it today, when an element emits an alpha particle it is transformed two steps down the periodic table. But before this could be fully grasped, two important new concepts had to emerge: the notion of atomic number, which describes the number of positive charges or protons in the nucleus, and the existence of isotopes–nuclei of the same charge but different atomic weights. These conceptions, along with the picture of the atom as consisting of a compact, positively charged nucleus surrounded by distant electrons, emerged in the period about 1909-1913. With the addition of one more conception, the neutron, which was first proposed in the early 1920s by Robert J. Moon’s teacher, William Draper Harkins, and experimentally established in 1932 by Chadwick, it became possible to explain the radioactive decay series with precision.

So, for example, when the abundant isotope of uranium, U-238, emits an alpha particle, it transmutes two atomic numbers down to become 90-thorium-234. Now, thorium-234 is a beta emitter. We view the beta emission as resulting from the decay of a neutron in the nucleus. Harkins first conceived the neutron as an electron condensed on a proton. (When it was detected experimentally, the neutron was found to be a neutral particle with a mass almost exactly equal to the sum of the masses of the electron and proton.) When it decays, the neutron throws off the very light electron and leaves the more massive proton behind, increasing the charge of the nucleus by plus one. Thus beta decay causes the atomic number to increase by one, without increasing the atomic weight. 90-thorium-234 becomes 91-protactinium-234. This is also a beta emitter which thus decays to 92-uranium-234. (Notice that we have gone two steps down and two steps back up, but we are at a much lighter isotope of uranium.) From here the U-234 emits an alpha particle to become 90-thorium-230. This emits an alpha particle to become 88-radium-226, which emits an alpha particle to become 86-radon-222 (see Figures 5a,b).

To add to the fun, each of these decay products has its own rate of decay, which is measured as a half-life, the time it takes for one half the atoms of the substance to decay. For some substances in the decay chain, this is quite fast–3.82 days for radon-222, for example, and 0.00016 seconds for polonium-214. Others give off their radiation at a much slower rate–uranium-238, for example, takes 4.5 billion years for half of its atoms to decay. When Becquerel, the Curies, and the other early experimenters were detecting the radioactivity of uranium, for example, most of the emissions they detected were not from the uranium, but rather from the decay products mixed in with the uranium. Crookes’s creation of uranium-X was thus actually the chemical separation of the decay product, thorium-234, from the uranium. As the half-life of thorium-234 is just 24.1 days, it was emitting radiation millions of times faster than the uranium. Actually the uranium itself was a mixture of the slow decaying U-238 (4.5 billion years), U-235 (half life = 713 million years), and the decay product, U-234 (half life = 248,000 years). This is why the uranium-X sample at first showed such a high activity, while the remaining uranium seemed inactive. Over time, the uranium-X lost its activity by decay, while the mixture of uranium isotopes slowly built back up their decay products, thus increasing the measurable activity of that portion. It was not the uranium emission that was increasing, but the emission from its faster decaying products. The radon gas which was also a part of the decay chain was what Rutherford had called the {emanation.} Part of the difficulty of detecting it was its short half-life. Rutherford’s thorium-X was what is now known as radium-224. It decays with a half-life of 3.64 days, by alpha particle emission, to radon-220, the emanation.
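
The displacement rules described above are mechanical enough to be walked through in a few lines of code. The following Python sketch applies them to the head of the uranium-238 series as traced in the text:

```python
# Displacement rules: alpha emission lowers the atomic number Z by 2 and the
# mass number A by 4; beta emission raises Z by 1 and leaves A unchanged.
NAMES = {92: "uranium", 91: "protactinium", 90: "thorium",
         88: "radium", 86: "radon"}

def decay(z, a, mode):
    return (z - 2, a - 4) if mode == "alpha" else (z + 1, a)

z, a = 92, 238
print(f"{z}-{NAMES[z]}-{a}")
for mode in ["alpha", "beta", "beta", "alpha", "alpha", "alpha"]:
    z, a = decay(z, a, mode)
    print(f"  --{mode}--> {z}-{NAMES[z]}-{a}")
# Output runs 92-uranium-238 -> 90-thorium-234 -> 91-protactinium-234
# -> 92-uranium-234 -> 90-thorium-230 -> 88-radium-226 -> 86-radon-222,
# matching the chain described above.
```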

By extrapolating the rate of decay of natural uranium, we can determine that about 4.5 billion years ago there was twice the amount of uranium-238 in the Earth as today. Half of it has undergone a transmutation in that time span, which is thought to be about equal to the age of the Earth. Radium, polonium, radon gas, and the other elements above lead on the periodic table, are all temporary appearances on their way to becoming something else. It is not out of the question that all the 92 elements are undergoing natural transmutation, and that those we call stable are simply decaying on a time scale longer than we have been able to observe. In any case, by artificial means, such as collision with a charged particle from an accelerator, and with enough expenditure of energy, we can today transmute virtually any element into any other. The alchemists’ dream of transmuting base metals into gold is thus achievable, and has been demonstrated in the laboratory. This, however, can only be accomplished in very small amounts, and at a high cost, so that even with Weimar rates of hyperinflation, laboratory transmutation is not presently a viable means of producing the metals we need.
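
The extrapolation is simple arithmetic once the exponential law is granted. With the half-life of uranium-238 taken as 4.5 billion years, the quantity present a time t in the past was

$$ N(-t) = N_0 \, 2^{\,t/T_{1/2}}, \qquad N(-4.5\ \text{billion years}) = N_0 \, 2^{4.5/4.5} = 2N_0, $$

twice today’s amount, as stated above.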

So we see that even this non-living domain within the biosphere is not quite dead either. It is undergoing constant change of a very radical sort. Even the stable elements, whether or not they ever change their identity, are in a state of constant and very rapid internal motion and, as I believe, of continuous and very rapid re-creation on a nonlinear time scale.

– Notes –

1. {Comptes Rendus,} vol. 127, pp. 1215-1217 (1898), http://web.lemoyne.edu/~GIUNTA/curiesra.html The earlier discovery of polonium is described in {Comptes Rendus,} vol. 127, pp. 175-178, http://web.lemoyne.edu/~GIUNTA/curiespo.html

2. We shall have more to do with spectroscopy later. Upon heating, each chemical element shows a characteristic color. Most people have seen the green color produced in a flame by a copper-bottomed pot. If the light produced when the element is heated be passed through a prism, it is dispersed into a band of color, just as sunlight passing through a prism forms a rainbow. Within the colorful band, known as a spectrum, certain sharp and diffuse lines appear. Bunsen and Kirchhoff began work in 1858 which established a means for identifying each element by its flame spectrum (Figure 2).

3. We mention in passing one other anomaly associated with the discovery of radium: its production of light and heat with no apparent source for the energy. We will have more to say on this in coming installments. In the 1898 paper cited above, Curie, Curie and Bemont noted:

“The rays emitted by the compounds of polonium and radium make barium platinocyanide fluorescent. Their action from this point of view is analogous to that of Roentgen rays [x-rays], but considerably weaker. To make the experiment, one places on the active substance a very thin leaf of aluminum, upon which a thin film of barium platinocyanide is spread; in the dark the platinocyanide appears weakly luminous in front of the active substance.”

This property of the radioactive substances of producing light (and, it was later noted, considerable heat) without any apparent source of energy was quite paradoxical and caused the team to note at the end of the second paper of 1898: “Thus one constructs a source of light, a very weak one to tell the truth, but one that functions without a source of energy. There is a contradiction, or an apparent one at the very least, with Carnot’s principle.”

Later in her 1903 doctoral thesis, Curie noted that samples of radium are also much warmer than the surrounding air. Calorimetric measurements were able to quantify the heat produced.

Sadi Carnot’s principle, derived from his study of steam engines, stated that the work gained by use of steam depended upon the difference in the heat of the steam coming from the boiler, and the heat of the water vapor after it had done its work in expanding against a piston. Work could only be gained by transfer from a warmer to a colder body. This is the beautifully adduced principle of the operation of heat engines, which Rudolf Clausius attempted to make into a universal principle of amorality by arguing that all processes progress to a state of increasing disorder (“entropy strives toward a maximum”). What was the source of power for the light and heat produced by these radioactive substances? In noting the apparent contradiction with Carnot’s principle, Marie Curie, the probable author of the jointly signed note, had put her finger on a new principle of power. It was to take another several decades, and the work of many teams of investigators, to begin to unravel the puzzle. The answer, in short, was the existence of a new domain within the microcosm, the atomic nucleus, in which processes of enormously greater raw power than could be observed on the macroscopic or chemical scale took place.
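
In the quantitative form later given to the principle (a modern statement, not Carnot’s own notation): of the heat Q_h taken from the warmer body, only a part can be converted into work W, the remainder Q_c being discharged to the colder body, and the attainable efficiency is bounded by the two absolute temperatures:

$$ W = Q_h - Q_c, \qquad \eta = \frac{W}{Q_h} \leq 1 - \frac{T_c}{T_h}. $$

Radium appeared to deliver light and heat with no identifiable source of Q_h at all; hence the crisis.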

4. cited in Samuel Glasstone, {Sourcebook on Atomic Energy,} (Princeton: Van Nostrand, 1958), p. 121

5. cited in Glasstone, op. cit., p. 121

6. Rutherford and Soddy, {Philosophical Magazine,} 4 (1902), 370-396, and web.lemoyne.edu/~giunta/ruthsod.html

7. I have examined the circumstances surrounding the Rutherford-Soddy paper with some care. The question on my mind was how, given the evident epistemological weakness of the British school, so much of the progress in atomic science during several decades beginning about 1900 could have taken place there. A subsidiary question was how Rutherford, who by the 1920s had become such an obstacle to new ideas in atomic theory, according to the testimony of Dr. Moon and his teacher Harkins, should have taken such a bold step in 1902. I found it useful to think of the question in two aspects, both of which are clarified by examining it in the historical context.

First, at the time of Rutherford’s discovery, the British were carrying out a buildup for world war, and feared the German pre-eminence in science. For a brief window of time, a general unleashing of scientific progress was permitted. Rutherford and Soddy were both outsiders in the British class system, the one a colonial, and the other the son of a shopkeeper, permitted to carry out their work in the outpost of Montreal. Later, by the 1920s and after the great war, Rutherford had become a part of the insider establishment, which was already asserting a kind of non-proliferation doctrine. H.G. Wells’s adoption of Soddy’s work, as in his popularization of an ultimate weapon to control populations by one-world government ({The World Set Free,} 1914), exemplifies this general aspect of the problem. The later achievement of nuclear fission put nuclear science even more tightly under the control of a military-industrial elite of Wellsian predilection well known to us.

Second is the unfortunate fact that the hegemony of British empiricism, dating approximately to the death of Leibniz, has meant that progress in science has been forced to proceed largely through the resolution of experimental paradox, without benefit of the superior method of metaphysics–as Leibniz called it. We know some very few but notable exceptions, among which Riemann stands out. Otherwise, the better scientists have developed a use of the creative method, as if by instinct, drawn from cultural traditions which are not necessarily evident to them. The general demoralization which followed the First World War tended to wipe out much of the epistemological advantage which had remained in some German and French scientific practice from the respective Kepler-Leibniz and Ecole Polytechnique traditions. A figure such as Dr. Moon represented a countercultural trend, in the good sense of the word, embodying in his deepest moral-philosophical outlook the better aspects of the American Leibnizian tradition, even where that might not be explicitly enunciated. Moon’s creative reaction to LaRouche and Kepler in his 1986 formulation of his nuclear space-time hypothesis conclusively demonstrates that point.

Understanding Nuclear Power, #2: THE PERIODICITY OF THE ELEMENTS

Larry Hecht April 21, 2006

[Figures for this pedagogical can be accessed at: www.wlym.com/~bruce/periodic.zip]

Dmitri Mendeleyev discovered the concept of the periodicity of the elements in 1869 while he was in the midst of writing a textbook on inorganic chemistry. The crucial new idea, as he describes it, was that when the elements are arranged in ascending order of their atomic weights, their properties do not simply increase in some power or quality, but recur periodically. Mendeleyev noted explicitly that this discovery led to a conception of mass quite different from that in the physics of Galileo and Newton, where mass is considered merely a scalar property (such as F = ma). Mendeleyev believed that a new understanding of physics would come out of his chemical discovery. It did, in part, in the developments that led into the mastery of nuclear processes, even if the flawed foundations of the anti-Leibnizian conceptions injected by British imperial hegemony were never fully remedied. The development of the sort of conception connected with Dr. Robert Moon’s nuclear model will help to fulfill Mendeleyev’s insight on this account.

There are just 92 naturally occurring elements in the universe. Their existence and organization in the periodic table discovered by Mendeleyev is the most fundamental fact of modern physical science. We will soon see how the discovery of radioactivity and nuclear power, among so many other things, would not have been possible without the prior achievement of Mendeleyev. Let us first get a general idea of what the periodic table is, and then examine some of the considerations which led Mendeleyev to his formulation.

The periodic table systematizes the 92 elements in several ways (Figure 1). The horizontal rows are known as {periods} or {series}, and the vertical columns as {groups}. The simplest of the organizing principles is that the properties of the elements in a group are similar. Among the many properties which elements in a group share: Their crystals, and the crystals of the compounds which they form with like substances, usually have similar shapes. Elements in the same group tend to combine with similar substances, and do so in the same proportions. Their compounds then often have similar properties. Thus sodium chloride (NaCl), which is table salt, and potassium chloride (KCl) combine in the same 1:1 proportion, and show similar chemical and physical properties. Partly because they tend to make the same chemical combinations, the members of a group, and sometimes adjacent groups, are often found together in ore deposits in the Earth. For example, copper usually occurs in ores with zinc and lead, or with nickel and traces of platinum. If you look at a periodic table, you will see these elements in nearby adjacent columns. Or for another example, when lead is smelted, trace amounts of copper, silver, and gold (which occupy a nearby column to the left), and arsenic (in the adjacent column to the right) are found. We will look at more of these sorts of relationships shortly.

(To prevent confusion, we should interject this note of warning. When the periodic table is taught in the schools today, it is usually presented as an ordering principle for the electron shells which are thought to surround the nuclei of atoms. The modern explanation of chemical reactions invokes the interaction of the outer electrons in these shells. It is important to understand that at the time of Mendeleyev’s discovery, no chemist had any idea of the existence of an atomic nucleus or of electrons. The electron was considered as a theoretical entity in the electrodynamic work of Wilhelm Weber (1804-1891), but this had little to do with chemical thinking at the time. The first approximate measure of the mass of the electron came in the first decade of the 20th century, and the validation of its wave properties came in 1926. In the prevailing view of the atom at the opening of the 20th century, there was no central nucleus, but rather a homogeneous spread of charges. Thus, to understand how Mendeleyev came to his discovery of the periodic table in 1869, we must discard most of what we might have learned of the subject from modern textbooks. If we feel a slight pang of remorse in giving up what little we think we know of the subject, we shall soon find that we are rewarded by a far greater pleasure in discovering how these discoveries really came about. We shall then also be at the great advantage of knowing where the assumptions lie which will surely need correcting to meet the challenges of Earth’s next 50 years.)

By arranging the elements in increasing order of their atomic weights, Mendeleyev found that they fell into periods which repeated themselves in such a way that elements possessing analogous properties would fall into columns one below the other. Within the periods, many properties, including the valences (defining the small whole number proportions in which the elements combine with each other), the melting and boiling points, and the atomic volumes (which we shall discuss further on) showed a progressive increase and decrease which was analogous for each period.
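
The core observation can be imitated in a few lines. The sketch below (not Mendeleyev’s procedure, and using rounded modern atomic weights) sorts the lighter elements by weight and prints their valence toward hydrogen; the run 1, 2, 3, 4, 3, 2, 1 appears once, and then repeats:

```python
# Each entry: (symbol, atomic weight, valence toward hydrogen).
# Weights are rounded modern values, given for illustration only.
elements = [
    ("Li", 6.9, 1), ("Be", 9.0, 2), ("B", 10.8, 3), ("C", 12.0, 4),
    ("N", 14.0, 3), ("O", 16.0, 2), ("F", 19.0, 1),
    ("Na", 23.0, 1), ("Mg", 24.3, 2), ("Al", 27.0, 3), ("Si", 28.1, 4),
    ("P", 31.0, 3), ("S", 32.1, 2), ("Cl", 35.5, 1),
]
for symbol, weight, valence in sorted(elements, key=lambda e: e[1]):
    print(f"{symbol:2}  weight {weight:5.1f}  valence {valence}")
# The valence column reads 1 2 3 4 3 2 1, then 1 2 3 4 3 2 1: periodicity.
```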

By examining these periodic properties, it was also possible to see that there were gaps in the table. Some viewed those gaps as a weakness in Mendeleyev’s hypothesis. But Mendeleyev was convinced the conception was right, and that the gaps represented elements still to be discovered. He worked out the probable properties of some of these unknown elements on the basis of their analogy to the surrounding elements. Within a few decades of Mendeleyev’s publication of his periodic concept, several of these missing elements were discovered.

For example, in the Fourth Group (the 14th in the enlarged numbering system adopted in 1984), below the column containing carbon and silicon, Mendeleyev saw that there must exist an element which was unknown at the time. He called it {eka-silicon,} the prefix {eka-} meaning {one} in Sanskrit. By looking at the properties of silicon above and of tin (Sn) below, and also of zinc and arsenic surrounding it, he could guess such properties as its atomic weight, the probable boiling point of some of its compounds, and its specific gravity. In 1886, C. Winkler from the famous mining center of Freiberg in Saxony found the new element in a mineral from the Himmelsfurst mine and called it Germanium. Its actual properties were found to correspond entirely with those forecast by Mendeleyev. There had also been a gap in the Third Group (the 13th in the new system) in the position just under the elements boron and aluminum. In 1871 Mendeleyev had named this still unknown element {eka-aluminum.} In 1875, Lecoq de Boisbaudran, using techniques of spectrum analysis, discovered a new metal in a zinc blende ore from the Pyrenees. He named it Gallium. At first it seemed to differ considerably from the density Mendeleyev had predicted it would have if it was indeed eka-aluminum. But as observations proceeded the new element was found to possess the density, atomic weight and chemical properties which Mendeleyev had forecast.
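
Part of that reasoning can be reproduced as a back-of-the-envelope calculation: estimate the unknown atomic weight as the average of the neighbors named above (silicon over the gap, tin under it, zinc and arsenic to either side). A sketch, with rounded modern weights:

```python
# Average the atomic weights of the four neighbors of the eka-silicon gap.
neighbors = {"Si": 28.1, "Sn": 118.7, "Zn": 65.4, "As": 74.9}
estimate = sum(neighbors.values()) / len(neighbors)
print(round(estimate, 1))  # about 71.8; germanium's actual weight is ~72.6
```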

That is the essential concept of periodicity. In order for Mendeleyev to arrive at it, a great deal of prior chemical investigation was required. Perhaps the most important prerequisite had been the discovery of new elements. The ancients knew 10 of the substances we call elements today, most of them metals. These were iron, copper, lead, tin, antimony, mercury, silver, gold, carbon, and sulfur.[fn 1] All but two of the rest were discovered in the modern era. Between 1735 and 1803, 13 new metals and four gaseous elements were discovered. In 1808 six new elements from the alkali and alkaline-earth groups (Groups I and II) were discovered. [fn 2] And the discoveries continued through the 19th century, capped by Marie Curie’s isolation of radium in 1898. In 1869 when Mendeleyev conceived the idea of periodicity, about two thirds of the 92 naturally occurring elements were known. Still a few more remained to be discovered in the 20th century. And then came the synthesis of the artificial elements beyond the 92 naturally occurring ones, beginning with neptunium and plutonium.

What do we mean by an element? Chemistry deals primarily with homogeneous substances, not differing in their parts. But the fact that a substance is the same in all its parts does not distinguish it as an element. Sulfur, which we consider an element, is a yellow powder or cake, but many compounds such as chromium salts can take on a similar appearance. Table salt is uniform and crystalline, but not an element. We consider hydrogen gas an element but carbon dioxide gas a compound. Sometimes elements are described as the elementary building blocks from which more complex substances are formed. But a better definition is the one Lavoisier gave, which describes an element as the result of an action, as that which cannot be further separated by chemical procedures:

“[I]f by the term {elements} we mean to express those simple and indivisible atoms of which matter is composed, it is extremely probable we know nothing at all about them; but, if we apply the term {elements,} or {principles of bodies,} to express our idea of the last point which analysis is capable of reaching, we must admit, as elements, all the substances into which we are capable, by any means, to reduce bodies by decomposition. Not that we are entitled to affirm that these substances we consider as simple may not be compounded of two, or even of a greater number of principles; but, since these principles cannot be separated, or rather since we have not hitherto discovered the means of separating them, they act with regard to us as simple substances, and we ought never to suppose them compounded until experiment and observation have proved them to be so.” [fn 3] Lavoisier’s warning remains applicable today. By heeding it, we do not fall into the trap of supposing we are dealing with irreducible elementarities, for the history of scientific progress has shown that increasing mastery over nature always permits us to delve deeper into the microcosm. For chemical technology, the element was the irreducible substance. But later developments allowed us to reach down to the electron, the nucleus, and to subnuclear particles.

It was necessary to perform chemical operations on substances to know if they were elements or compounds. Many things that were once considered elementary were later found to be composite. Lavoisier’s study of the separation of water into hydrogen and oxygen gas, and their reconstitution as water, is exemplary. So, too, is his demonstration that the atmospheric air consists primarily of oxygen and nitrogen gas. The metals that were discovered in the 18th century were mostly separated from their ores by processes of chemical reaction, distillation, and physical separation.

At the time Mendeleyev was writing his textbook, experimenters had accumulated an enormous store of information concerning the properties of elements and their compounds. Especially of note were the many analogous properties among the elements and their respective compounds. For example, lithium and barium behaved in some respects like sodium and potassium, but in other respects like magnesium and calcium. Looking at such analogies as markers of an underlying ordering principle, Mendeleyev suspected that there must be a way to find quantitative, measurable properties by which to compare the elements. There were four different types of measurable properties of the elements and their compounds, which he took into consideration in formulating his concept of periodicity. He identifies these in Chapter 15 of his textbook as:

(a) isomorphism, or the analogy of crystalline forms; (b) the relations between the “atomic” volumes of analogous compounds of the elements; (c) the composition of their saline compounds; (d) the relations of the atomic weights of the elements.

Think of each of these types of properties as different means of “seeing” into the microcosm. Let us begin with the first, crystal isomorphism. When a compound is dissolved in water or some other solvent, and the water removed by evaporation or other means, it can usually be made to crystallize. All of the familiar gemstones and many rocks are crystals that have been formed under conditions present within or at the surface of the Earth. Table salt and sugar are familiar crystals. Most metals and alloys cool and harden in characteristic crystalline forms. Organic compounds, even living things like proteins, can be made to crystallize for purposes of analyzing their structure. With the development of chemistry following Lavoisier, the crystalline form began to receive more attention, and close study eventually showed that every compound crystallizes in a unique form. Many of these forms are quite similar, but careful measurement of the facial angles and the proportional lengths of their principal axes will always show some slight difference. Crystallography thus became a means of chemical analysis, and by the 1890s there existed catalogues of the crystallographic properties of nearly 100,000 compounds. [fn 4]

Despite these very fine differences, the general forms of crystals fit into certain classifiable groups. Their shapes include the cube and octahedron, hexagonal and other prisms, and a great number of variations on the Archimedean solids, their duals, and many unusual combination forms. The German chemist Eilhard Mitscherlich first demonstrated in 1819 that many compounds which have similar chemical properties and the same number of atoms in their molecules also show a resemblance of crystalline forms. He called such substances isomorphous. He found that the salts formed from arsenic acid (H3AsO4) and phosphoric acid (H3PO4) exhibited a close resemblance in their crystalline forms. When the two salts were mixed in solution, they could form crystals containing a mixture of the two compounds. Mitscherlich thus described the elements arsenic and phosphorus as isomorphous.

Following Mitscherlich a great number of other elements exhibiting crystal isomorphism were found. For example, the sulphates of potassium, rubidium and cesium (K2SO4, Rb2SO4, Cs2SO4) were found to be isomorphic; the nitrates of the same elements were also isomorphic with each other. The compounds of the alkali metals (lithium, sodium, potassium, rubidium) with the halogens (fluorine, chlorine, bromine and iodine) all formed crystals which belonged to the cubic system, appearing as cubes or octahedra. The cubic form of sodium chloride (table salt) crystals is an example, as one can verify with a magnifying glass.

This was the first of the clues which suggested the concept of periodicity. When Mendeleyev arranged the elements in order of increasing atomic weights, the isomorphic substances were found to form one above the next in a single column. Thus arsenic and phosphorus were part of Group V (15, in the modern nomenclature). The alkali metals fell under Group I; the halogens became Group VII (17 in the modern nomenclature). Not only this, but the elements of the same groups combined with one another in the same proportions. Thanks to the work of Gerhardt and Cannizzaro in establishing a uniform system of atomic weights, it had become a simple matter to determine the chemical formula for a great number of substances, once the proportion by weight of the component elements had been determined. It thus turned out that the elements of the first group (designated R) combined with the elements of the seventh group (designated X) in the proportion RX, as in NaCl. The elements of the second group combined with those of the seventh group in the proportion RX2, as in CaCl2, and so forth. If the combinations with oxygen were considered (the oxides being very prevalent), the first group produced R2O, the second group RO, the third group R2O3, and so forth. This is what Mendeleyev is describing in the periodic chart we show in Figure 2. We shall save the fascinating question of the investigation of the atomic volumes and many other properties of the elements which prove to be periodic for another time, and end this exercise for now.
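
These combining proportions are regular enough to tabulate mechanically. A small sketch, illustrative only, using the Group I-III forms just described:

```python
# The group number fixes the formula with chlorine (X) and the oxide form.
oxide_forms = {1: "R2O", 2: "RO", 3: "R2O3"}
for symbol, group in [("Na", 1), ("Ca", 2), ("Al", 3)]:
    chloride = symbol + "Cl" + ("" if group == 1 else str(group))
    oxide = oxide_forms[group].replace("R", symbol)
    print(f"{symbol} (Group {group}): chloride {chloride}, oxide {oxide}")
# Prints NaCl/Na2O, CaCl2/CaO, AlCl3/Al2O3: the proportions RX, RX2, RX3
# and the oxide series R2O, RO, R2O3 described in the text.
```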

NOTES

1. Mining and metallurgy were clearly a part of ancient science, though the thinking and discovery process is mostly lost to us. Heinrich Schliemann, the discoverer of Troy, suggests that the word “metal” came from Greek roots (met’ alla) meaning to search for things, or research. Archaeological remains indicate an ordering of discovery of the metals and the ability to work them, with copper and its alloys preceding iron for example. Ironworking is associated with the Hittite and Etruscan seafaring cultures of Anatolia and north central Italy, who spoke a common language related to Punic or Phoenician.

2. The four gaseous elements were hydrogen (Henry Cavendish, 1766); nitrogen (Daniel Rutherford, 1772); oxygen (Carl Scheele, Joseph Priestley, 1772); chlorine (Scheele, 1774). Among the metals discovered in the 18th century were:

Platinum (Antonio de Ulloa, 1735); Cobalt (Georg Brandt, 1735); Zinc (Andreas Marggraf, 1746); Nickel (Axel Cronstedt, 1751); Bismuth (Geoffroy, 1753); Molybdenum (Carl Scheele, 1778); Zirconium (Martin Klaproth, 1789); Tellurium (Muller, 1782); Tungsten (Juan and Fausto d’Elhuyar, 1783); Uranium (Klaproth, 1789); Titanium (William Gregor, 1791); Chromium (Louis Vauquelin, 1797); Beryllium (Vauquelin, 1798)

In 1803, William Hyde Wollaston and Smithson Tennant found the elements rhodium, palladium, osmium and iridium in platinum ore. In 1808, Humphry Davy isolated the alkali and alkaline-earth elements sodium, potassium, magnesium, calcium, strontium, and barium by electrolysis of their molten salts.

3. Antoine Laurent Lavoisier, {Elements of Chemistry,} translated by Robert Kerr, in {Great Books of the Western World,} (Chicago: Encyclopedia Britannica, 1952) p. 3.

4. In the history of physical chemistry, the study of crystals provided one of the first means of access to the microcosm. It continues to be of importance today. This is great fun because Kepler’s playful work {The Six-cornered Snowflake,} is actually the founding document of modern crystallography. The student must take advantage of this, for the topic, as presented in the usual textbooks, is a confusion of mathematical formalisms and systems of classification. In Kepler, we see that the question is really very simple: why is the snowflake six-sided? why is the beehive made from cutoff rhombic dodecahedra? How shall we get an answer? It can only be by attempting to shape our imagination in conformity with the mind of the creator. If we do not get the complete answer, we see, nonetheless, that it is through the playful exercise of the mind in advancing and pursuing hypothesis that we come closer to it.

Among the many discoveries presented in that small work, Kepler introduces the concept that the study of the close-packing of spheres, which copy the space-filling property of rhombic dodecahedra, can help to explain the mineral crystals, all of which exhibit the characteristic hexagonal symmetries. Kepler thus suggested the existence of an atomic or molecular structure within the abiotic domain. Kepler’s insights were carried forward in the study of mineral crystals especially by the work of the Abbe Hauy (1743-1822) in France, who was followed by a great number of other investigators.

Understanding Nuclear Power, #1:

AVOGADRO’S HYPOTHESIS AND ATOMIC WEIGHT (WHEN 2 + 1 = 2)

by Larry Hecht March 29, 2005

(This is the first in a series of pedagogicals which will address the scientific basis of nuclear power from a conceptual, historical standpoint. The figures can be accessed at www.wlym.com/~bruce/atomicweight.zip).

Our modern understanding of the atom and the microscopic domain has its origin in two parallel lines of development in experimental science which date to the period from approximately 1785-1869. This experimental work closely overlaps the developments in mathematics which we have been studying respecting the Gauss-Riemann complex domain, and a patient and not-too-literal approach to its study will lead to many beautiful realizations of the conceptual connections. One track is the Ampere-Gauss-Weber electrodynamics upon which the greater part of modern experimental physics practice rests. The other is the development of the science of chemistry, from the work of Antoine Laurent Lavoisier (1743-1794) to Dmitri Mendeleeff’s 1869 formulation of the periodicity of the elements as arranged by atomic weights. Modern physical chemistry, including nuclear chemistry and the Pasteur-Vernadsky tradition of modern biogeochemistry, owes its existence to these latter developments.

We shall focus here on the second of these two important lines of development.

Most people have heard the term “atomic weight.” What does it mean? To believe in such a notion, we must first accept the existence of a very small, invisible thing called an atom; we must further suppose the existence of common species of atoms, and that each exemplar will exhibit the same properties as any other; finally, we must imagine that we might find some means of weighing this almost non-existent entity. Not only has this proven possible, but, strange as it might seem, the concept of atomic weight lies at the foundation of nearly all the breakthroughs of modern science and technology. Dmitri Mendeleeff’s discovery of the Periodicity of the Elements rests upon this principle, as do all the developments which have allowed us to harness power from the atomic nucleus. The curious anomaly in the atomic weight of helium–that it is less than the sum of the weights of its constituent particles–was the basis for the recognition that fusion energy would be possible. (The possibility of realizing energy from this anomaly, which became known as the “mass defect,” was described in a 1914 paper by William Draper Harkins, the teacher of our friend Dr. Robert Moon). The nuclear chemistry which is the basis for heavy-element fission, the source of power for nuclear reactors, also rests on the concept of atomic weight. Thus, given its importance to the progress of all modern science, we have decided to devote this exercise to an outline of how the concept of atomic weight came about, anticipating that readers will find a way to pursue the further study and experimentation required for a deeper understanding.
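
To give the helium anomaly just mentioned a number (using modern values in atomic mass units, supplied here for orientation; they are not the 1914 paper’s notation): the weight of helium-4 falls short of the summed weights of its parts by about three-quarters of one percent,

$$ 2m_p + 2m_n + 2m_e \approx 2(1.00728 + 1.00867 + 0.00055) = 4.03300\ \mathrm{u}, \qquad m_{\mathrm{He\text{-}4}} \approx 4.00260\ \mathrm{u}, $$

a mass defect of about 0.0304 u, equivalent to roughly 28.3 MeV of binding energy per helium nucleus.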

A doctrine of atomism–that everything is made up of tiny and indivisible particles–has existed since ancient times, its most famous proponent being the philosopher Democritus. But the atom of chemistry was not conceived by the followers of Democritus, nor by his modern reviver Gassendi, but rather by thinkers who tended toward the tradition of Plato, Cusa and Leibniz. The chemical atomic theory, upon which the concept of atomic weight is based, developed in the first decade of the 19th Century out of work centered around the Ecole Polytechnique in Paris. The clear development of the concept of atomic weight took place over the course of the several decades following, led by the inspirer of Mendeleeff, Charles Frederic Gerhardt (1816-1856). Its acceptance was not achieved until a famous international congress of chemists at Karlsruhe in 1860, where an intellectual battle led by the Italian Stanislao Cannizzaro (1826-1910) finally settled the question.

The experimental development of the concept of atomic weight begins with the study of gases. By the first decade of the 19th century, chemists had produced and identified a variety of common gases, including hydrogen, oxygen, nitrogen, chlorine, ammonia, and hydrogen chloride, among others. Through techniques pioneered by Lavoisier and refined by subsequent investigators, it was possible to measure quite precisely the volume and weight of gases. When water was decomposed, it could easily be shown that two gases were produced (named by Lavoisier hydrogen and oxygen), in the proportion of two volumes to one (Figure 1–Mendeleeff, Fig. 19, p. 114). By decomposing ammonia by the action of an electric spark, hydrogen and nitrogen gases were produced, in the proportion of three volumes to one. Through a great number of experiments with different gases, Joseph-Louis Gay-Lussac (1778-1850) came to the recognition known as his First Law: that the amounts of substances entering into chemical reaction occupy, under similar physical conditions in a gaseous or vaporous state, equal or simple multiple volumes. This is also known as the law of combining volumes. It is a curious result, and hardly an obvious one, if you think about it.
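
The content of the law can be made concrete in a few lines of Python. The raw volumes below are invented for illustration only, chosen merely to show how measured quantities reduce to the small whole-number ratios just described:

    from math import gcd

    # illustrative measured volumes (not experimental data): hydrogen vs. the
    # other gas released on decomposing each compound
    measured = {"water (H : O)": (100, 50), "ammonia (H : N)": (150, 50)}

    for name, (v1, v2) in measured.items():
        g = gcd(v1, v2)
        print(name, f"{v1 // g} : {v2 // g}")   # 2 : 1 and 3 : 1 -- simple multiples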

It was also possible to weigh the gases,<fn. 1> and as hydrogen was by far the lightest, it was convenient to compare the weight of a gas to that of an equal volume of hydrogen. This ratio became known as the vapor density. For oxygen it was 16, for nitrogen 14, for ammonia 8.5, and for water vapour 9. These numbers, too, contained a paradox: why should a quantity of ammonia, which contains nitrogen, and a quantity of water, which contains oxygen, weigh less in proportion to hydrogen than an equal volume of the gases they contain? To answer the question, one must have a hypothesis about what a gas is.

Daniel Bernoulli, the son of Leibniz’s collaborator Johann, had proposed an idea in his 1738 book {Hydrodynamics,} which synthesized the research on atmospheric pressure and on the pressure and volume relationships of gases that had been carried out by predecessors including Pascal, Torricelli, Mariotte, and Boyle. Bernoulli supposed that a gas, or elastic fluid as he called it, consisted of a great number of tiny, invisible particles which became agitated upon heating, and produced pressure by striking against the walls of their container. Bernoulli’s kinetic theory of gases became enormously important for physical chemistry in the 19th century, even though it was formalized by Clausius, Maxwell, and others into a doctrine (entropy) which was the very opposite of the thinking of its originator. <fn. 2>

Another paradox about the weight and volume of gases arises when we consider the composition of water. We noted that it can easily be shown that in the decomposition of water, two volumes of hydrogen are produced for every volume of oxygen. These gases may be brought back together into a mixture known as detonating gas. When this mixture of two parts hydrogen and one part oxygen is ignited by a spark, an explosion occurs and the product is water. If that quantity of water be vaporized and brought back to the same temperature and pressure as the original gaseous ingredients, it is found that the two volumes of hydrogen plus one volume of oxygen have become two volumes of water vapor! Apparently, 2 + 1 = 2 in the world of water.

[In <fn. 3> I supply Mendeleeff’s description of the apparatus for carrying out this experiment. It is quite detailed, so continue reading, and return to it after you have finished.]

Whether this is paradoxical or not depends not only upon what we think is in the volume of gas, but how much of it there is. If we accept Bernoulli’s hypothesis that a gas consists of a great number of tiny, invisible particles, we have still not said how many they are. The first quantitative formulation on this account was proposed by the Italian chemist Count Amedeo Avogadro in a paper published in 1811. Looking at Gay-Lussac’s law and data from his own chemical researches, Avogadro hypothesized that at the same temperature and pressure, equal volumes of any two gases would contain the same number of particles, or molecules, as they were coming to be called. His idea was not received with much interest. The only figure of note to embrace Avogadro’s idea, at first, was Andre-Marie Ampere, who was not very well known at the time. <fn. 4> In the 1840s, the French chemist Charles Frederic Gerhardt adopted Avogadro’s hypothesis and labored unceasingly to dispel all doubts concerning its truth. It finally won acceptance at the Karlsruhe Congress in 1860. By 1865, Josef Loschmidt, then an Austrian high school teacher, had determined the key to finding the number of atoms or molecules contained in a cubic centimeter of any gas. <fn. 5>

Employing Avogadro’s hypothesis, we can resolve the paradox of the composition of water, and discover another strange feature of the universe. If the composition of water, as suggested by the volumes obtained upon decomposition, be H2O, then one volume of oxygen gas should unite with two volumes of hydrogen gas to produce one volume of the combined gas–H2O. Yet experiment shows the quantity of water vapor produced to be equal to two volumes. The paradox could only be resolved by assuming that the particles of both the hydrogen and the oxygen gas were twins, each consisting of two particles of hydrogen or oxygen. Thus, instead of H and O, the constituent parts of the two gases must be H2 and O2. Avogadro called them compound molecules; today they are called diatomic molecules. Avogadro found that hydrogen, oxygen, nitrogen and chlorine were of this type, and suspected there might be more. Upon detonation, the O2 molecules must break asunder, each of the two pieces combining with two hydrogen atoms. Then the description of the formation of two volumes of water from a mixture of detonating gas containing two volumes of hydrogen and one volume of oxygen becomes: 2 H2 + O2 –> 2 H2O.
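
To see the molecular bookkeeping at a glance, here is a minimal sketch in Python; the count n of molecules per volume stands in for Avogadro’s then-unknown number, and its actual value is irrelevant:

    # A minimal bookkeeping sketch of "2 + 1 = 2", assuming Avogadro's
    # hypothesis: every volume holds the same number n of molecules.
    n = 1_000_000

    H2 = 2 * n                     # two volumes of diatomic hydrogen
    O2 = 1 * n                     # one volume of diatomic oxygen

    H2O = 2 * O2                   # 2 H2 + O2 -> 2 H2O: each O2 yields two waters
    assert H2 == 2 * O2            # the mixture is exactly detonating gas

    print((H2 + O2) / n, H2O / n)  # 3.0 volumes in, 2.0 volumes of vapour out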

Avogadro’s conception of a diatomic molecule also served to unravel the paradox noted above concerning the vapor densities. The standard of reference for the vapour density was hydrogen gas, which, we have come to see, consists of diatomic molecules. Thus, a given volume of hydrogen gas will weigh twice what we would have expected, assuming the constituents to be single atoms. Oxygen gas, as we have seen from the case of water, is also made of diatomic molecules. Thus, the ratio of the weights of equal volumes of oxygen to hydrogen (the vapor density) will correspond to the true ratio of the weights of the atoms, if we assume with Avogadro that equal volumes of gases contain an equal number of constituent molecules. We can see now how important it was to establish Avogadro’s hypothesis. Once established, it allows us to infer the relative weights of tiny invisible atoms from the measured weights of large volumes of gases. If we assign the weight of 1 to an atom of hydrogen, we now know that an atom of oxygen will weigh 16. The weight of a molecule of water (H2O) is then 18. But a volume of water vapor weighs only 9 times as much as an equal volume of hydrogen gas, because the hydrogen it is compared against is diatomic. Nitrogen, it turns out, is also a diatomic gas. Its vapor density of 14 thus denotes its true atomic weight. Ammonia (NH3) has a vapor density of 8.5, and not 17, because it is being compared to diatomic hydrogen gas.
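
The whole of this reasoning condenses into a few lines of Python. The atomic weights are the rounded values used above, and the little molecular_weight helper is merely illustrative scaffolding of my own:

    # Vapour densities relative to an equal volume of hydrogen gas, taking the
    # reference gas to be diatomic H2, of molecular weight 2.
    atomic_weight = {"H": 1, "N": 14, "O": 16}

    def molecular_weight(formula):
        # formula given as (element, count) pairs, e.g. water = [("H", 2), ("O", 1)]
        return sum(atomic_weight[el] * k for el, k in formula)

    reference = molecular_weight([("H", 2)])   # diatomic hydrogen weighs 2

    for name, formula in [("oxygen O2", [("O", 2)]),
                          ("nitrogen N2", [("N", 2)]),
                          ("water H2O", [("H", 2), ("O", 1)]),
                          ("ammonia NH3", [("N", 1), ("H", 3)])]:
        print(name, molecular_weight(formula) / reference)
    # prints 16.0, 14.0, 9.0, 8.5 -- precisely the measured vapour densities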

Using Avogadro’s hypothesis, which was rigorously established by Gerhardt in the 1840s, it became possible to establish the atomic weights of a great number of substances, both naturally occurring gases and those substances which could be vaporized.<fn. 6> The unification of the practicing chemists of Europe around the Avogadro hypothesis, which was achieved by Cannizzaro at Karlsruhe in 1860, meant that all the data related to atomic weight could be systematized under one conception, and therefore under one system of measurement. One of the happy results of this achievement was the Periodic Table of the Elements devised by Mendeleeff as an investigation of the peculiar properties of the atomic weights, a topic we shall take up in a future treatment.

– Suggestions on Further Reading: –

I have found the best success in approaching these topics by beginning with a reading of Lavoisier’s {Elements of Chemistry} (available in a Dover paperback edition and as Vol. 45 of the Britannica {Great Books}). In a small weekly telephone meeting, begun about a year and a half ago with some interested youth, we completed the Lavoisier text in about 3 to 4 months; some independent experimentation was also carried out during that time. We followed that with a reading of Mendeleeff’s much longer textbook (cited below–parts being scanned in LA for greater access). This reading project is still ongoing. After an initial attempt to jump ahead to Chapter XV, which presents his discovery of the Periodic Table, we returned to page one, taking Mendeleeff’s own advice that a proper appreciation of his discovery requires a grounding in the descriptive and historical aspects of the subject.

NOTES:

(With apologies to Rachel Douglas, I employ the old-style transliteration (Mendeleeff) for consistency with the bibliographic references).

1. Mendeleeff writes in the introduction to his {Principles of Chemistry}: “Gases, like all other substances, may be weighed, but, owing to their extreme lightness and the difficulty of dealing with them in large masses, they can only be weighed on very sensitive balances; that is, on such as, with a considerable load, indicate a very small change in the weight–for example, a milligram in a load of 1,000 grams. In order to weigh a gas, a glass globe furnished with a tight-fitting stop-cock is first of all exhausted of air by an air-pump (a Sprengel pump is the best), after which the stop-cock is closed, and the exhausted globe weighed. If the gas to be weighed is then let into the globe, its weight can be determined from the increase in the weight of the globe. It is necessary, however, that the temperature and pressure of the air about the balance should remain the same for both weighings, as the weight of the globe in air varies (according to the laws of hydrostatics) with the density of the latter. The volume of the air displaced, and its weight, must therefore be determined by observing the temperature, density, and moisture of the atmosphere during the time of the experiment. This will be partly explained later, but may be studied more in detail by physics. Owing to the complexity of all these operations, the mass of a gas is usually determined from its volume and its density, i.e. the weight of unit volume.” [D. Mendeleeff, {The Principles of Chemistry,} Third English Edition, translated from the Russian (Seventh Edition) by George Kamensky (London: Longmans Green, 1905) and (New York: Kraus Reprint, 1969), p. 10, note 17.]

2. D. Mendeleeff, op. cit., pp. 346-348. Daniel Bernoulli, extract from {Hydrodynamica} in Wm. Francis Magie, {A Sourcebook in Physics,} (Harvard Univ. Press, 1963) pp. 247-251. Taken together with Avogadro’s Law (to be explained shortly), the Bernoulli theory leads to the conclusion that under similar conditions of temperature and pressure, gas particles of different mass would each contain the same {vis viva}–the living force of Leibniz, which is measured as one half the product of mass into the square of velocity. The gaseous separation of isotopes, which is used to enrich uranium, makes use of this extension of Leibniz’s original discovery. Refined uranium, which consists of isotopes of two different weights, U-238 and U-235, is combined with fluorine into the gas uranium hexafluoride (UF6). As fluorine has an atomic weight of approximately 19, the hexafluoride gas must contain particles of two different masses, approximately (238 + (6 x 19)) and (235 + (6 x 19)). As the {vis viva} of the particles will be the same at a given temperature and pressure, the U-235-hexafluoride particles must move slightly faster than those of U-238. By pumping the gas through a membrane, a slightly greater concentration of the faster U-235-hexafluoride particles will pass through, and by repeating the process numerous times, separation is achieved.
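
The arithmetic of this note can be sketched in Python. The enrichment targets (0.7 percent natural, 4 percent reactor grade) and the assumption of one full ideal separation per pass are my own illustration, not part of the cited texts:

    import math

    # the two molecular masses named in the note, in atomic weight units
    m_light = 235 + 6 * 19        # U-235 hexafluoride, 349
    m_heavy = 238 + 6 * 19        # U-238 hexafluoride, 352

    # Equal vis viva: (1/2) m v^2 the same for both species, so the lighter
    # molecule moves faster in the ratio sqrt(m_heavy / m_light).
    speed_ratio = math.sqrt(m_heavy / m_light)
    print(speed_ratio)            # ~1.0043, a 0.43 percent edge per pass

    # Idealized stage count to go from natural 0.7 percent U-235 to 4 percent,
    # if each pass multiplied the abundance ratio by the full speed_ratio:
    stages = math.log((0.04 / 0.96) / (0.007 / 0.993)) / math.log(speed_ratio)
    print(round(stages))          # ~415 ideal stages; real plants need far more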

3. Mendeleeff provides us a description of the apparatus for showing that 2 + 1 = 2 in the world of water. An understanding of the effect of atmospheric pressure on a column of liquid, as established by Pascal and Torricelli, will be necessary to fully comprehend this and all gas volume experiments. The student should be able to master these elementary concepts through self-study. Reproducing Dr. Moon’s favorite experiment with atmospheric pressure, as pictured on page 44 of the Fall 2004 “Robert Moon” issue of {21st Century Science,} will go a long way toward comprehension. Mendeleeff describes the apparatus as follows: “[T]he volume occupied by water, formed by two volumes of hydrogen and one volume of oxygen, may be determined by the aid of the apparatus shown in fig. 64 (Figure 2–from Mendeleeff p. 325). The long glass tube is closed at the top and open at the bottom, which is immersed in a cylinder containing mercury. The closed end is furnished with wires like a eudiometer. The tube is filled with mercury, and then a certain volume of detonating gas is introduced. [The gas displaces the mercury, which is held up in the tube by the atmospheric pressure–LH.] This gas is obtained from the decomposition of water, and therefore, in every three volumes, contains two volumes of hydrogen and one volume of oxygen. The tube is surrounded by a second and wider glass tube, and the vapour of a substance boiling above 100 degrees–that is, whose boiling-point is higher than that of water–is passed through the annular space between them. Amyl alcohol, whose boiling-point is 132 degrees, may be taken for this purpose. The amyl alcohol is boiled in the vessel to the right hand and its vapour passed between the walls of the two tubes. In the case of amyl alcohol the outer glass tube should be connected with a condenser to prevent the escape into the air of the unpleasant-smelling vapour. [In the apparatus pictured the outer glass tube is not connected with a condenser; thus, the puff at the top of the tube is not steam as unfortunately suggested by the caption–LH.] The detonating gas is thus heated up to a temperature of 132 degrees. When its volume becomes constant it is measured, the height of the column of mercury in the tube above the level of the mercury in the cylinder being noted. Let this volume equal {v}; it will therefore contain 1/3 {v} of oxygen and 2/3 {v} of hydrogen. The current of vapour is then stopped and the gas exploded; water is formed, which condenses into a liquid. The volume occupied by the vapour of the water formed has now to be determined. For this purpose the vapour of the amyl alcohol is again passed between the tubes, and thus the whole of the water formed is converted into vapour at the same temperature as that at which the detonating gas was measured; and the cylinder of mercury being raised until the column of mercury in the tube stands at the same height above the surface of the mercury in the cylinder as it did before the explosion [that is, the atmospheric pressure in the tube is now the same as before–LH] it is found that the volume of the water formed is equal to 2/3 {v,} that is, it is equal to the volume of the hydrogen contained in it. Consequently the volumetric composition of water is expressed in the following terms: Two volumes of hydrogen combine with one volume of oxygen to form two volumes of aqueous vapour.” [Mendeleeff, op. cit., pp. 325-326]

4. Amedeo Avogadro, “Essay on a Manner of Determining the Relative Masses of the Elementary Molecules of Bodies, and the Proportions in Which They Enter into These Compounds,” {Journal de Physique} 73, 58-76 (1811) [Alembic Club Reprint No. 4] http://web.lemoyne.edu/~GIUNTA/avogadro.html

Andre-Marie Ampere “Lettre de M. Ampere a M. le comte Berthollet, sur la determination des proportions dans lesquelles les corps se combinent d’apres le nombre et la disposition respective des molecules dont leurs particules integrantes sont composees,” {Annales de Chimie,} Tome 90, (30 April 1814) pp. 43-86; 2 planches. Ampere suggests a new series of tetrahedral-based polyhedra which, he suggests, would be the shapes taken by definite compounds.

5. Avogadro’s Number is the number of atoms or molecules of a gas contained in a volume of 22.4 liters at standard temperature and pressure; this volume is used as a reference because it is the volume of a container of hydrogen gas weighing 2 grams. The number of molecules of any gas fitting into such a container at a standard temperature and pressure was determined to be 6.02 x 10 to the 23rd power. This is 602 sextillion molecules, using the American system for naming large numbers, and quite a few by anybody’s count. In a high vacuum of one billionth of an atmosphere, achievable in a laboratory, there remain more than ten billion molecules in each cubic centimeter. Even the so-called vacuum of space is never empty–only less densely populated than other places.
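
These figures are easily checked in a few lines of Python, using the numbers just given:

    N_A = 6.02e23                  # molecules in 22.4 liters of gas, as above
    per_cm3 = N_A / 22_400         # 22.4 liters = 22,400 cubic centimeters
    print(per_cm3)                 # ~2.7e19 molecules in every cubic centimeter

    # in a "high vacuum" of one billionth of an atmosphere:
    print(per_cm3 * 1e-9)          # ~2.7e10 -- still tens of billions per cm^3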

6. Following is the description by Mendeleeff of a method of determining the vapour density of substances which are liquid or solid at ordinary temperature. The method implies knowledge of the relationship of pressure, volume and temperature of gases. Study of the description and diagrams will help the reader to conceptualize the experimental process in working with gases. A study of Lavoisier’s work will help to make the subject clear: “The method by weight is the most trustworthy and historically important. Dumas’ method is typical. An ordinary spherical glass or porcelain vessel, like those shown respectively in figs. 60 and 61 (Figure 3–Mendeleeff p. 321), is taken, and an excess of the substance to be experimented upon is introduced into it. The vessel is heated to a temperature {t} degrees, higher than the boiling-point of the liquid; this gives a vapour which displaces the air, and fills the spherical space. When the air and vapour cease escaping from the sphere, the latter is fused up or closed by some means; and when cool, the weight of the vapour remaining in the sphere is determined (either by direct weighing of the vessel with the vapour and introducing the necessary corrections for the weight of the air and of the vapour itself, or by determining the weight of the volatilised substance by chemical methods), and the volume of the vapour at {t} and at the barometric pressure {h} are then calculated.” [Mendeleeff, op. cit., p. 323 n.]


A Note: Why Modern Mathematicians Can’t Understand Archytas

by Jonathan Tennenbaum

“As for me, I cherish mathematics only because I find there the traces of the Art of Invention in general, and it seems to me I have discovered, in the end, that Descartes himself did not yet penetrate into the mystery of this great science. I remember he once stated, that the excellence of his method, which appears only probable in terms of his physics, is proven in his geometry. But I must say, that it is precisely in Descartes’ geometry that I recognized the principal imperfection of his method… I claim that there is an entirely different method of geometrical analysis than that of Vieta and Descartes (i.e. algebra), who did not go far enough, because the most important problems do not depend at all upon the equations to which Descartes reduces his geometry.” (Leibniz, Letter to Princess Elisabeth, late 1678)

For example: the catenary, which requires {physical substance} for its generation, could not exist in the world of Descartes, Lagrange and Euler!

Looking through recent, standard presentations of Archytas’ famous construction for doubling the cube demonstrates how far modern mathematics has fallen below the level of thinking that prevailed in Plato’s circles over 2300 years ago! Typical is a discussion of the doubling of the cube on a webpage authored by J.J. O’Connor and E.F. Robertson (footnote 1). Although the text includes some interesting quotes and references, when the authors get to Archytas’ actual construction, they shamelessly revert to the school-boy routine of “using coordinate geometry to check that Archytas is correct”. Imposing a Cartesian coordinate system, they write down algebraic equations in x, y, z for each of the three intersecting surfaces (cone, cylinder and torus), and combine the equations to show that the desired proportionalities “somehow come out”. Magic! Readers foolish enough to engage in this meaningless exercise will not only have learned less than nothing about Archytas’ actual discovery; worse, the cognitive processes of their minds will have been “turned off” altogether.

The stunning sophistication of Archytas’ synthetic-geometrical approach, when viewed in terms of standard accounts of ancient Greek mathematics, suffices to demonstrate that those standard accounts are grossly inadequate, and that the actual physical conceptions underlying his work have been suppressed. In fact, most of the crucial original documents of Greek science have been lost or destroyed, while the living continuity of Greek science was broken off through the “dark age” imposed under the Roman Empire. As an included result of that process, the surviving version of Euclid’s famous “Elements”–a compendium whose axiomatic-deductive mode of presentation buries the essential ideas and historical process of development of Greek science–subsequently became, or was made into, the nearly exclusive source for classical Greek geometry, as well as the model for elementary mathematics education for many, many centuries. Among other things, Euclid’s “Elements” obfuscated the natural ordering of development, even in visual geometry, by beginning with {plane geometry} and the supposedly self-evident concepts of “point” and “straight line” as irreducible entities, proceeding only in the final chapters to the constructions of so-called solid geometry. Whereas, in fact, the first and most “elementary” visual geometry is not “flat” {plane} geometry at all, but rather {spherical} geometry–the form of geometry associated with astronomy, Man’s oldest science.

These and related circumstances, explain why the greatest scientific thinkers, from the Renaissance through to Kepler and Leibniz, directed much of their efforts to reconstituting the actual method and “soul” of classical Greek mathematics, which could at best only be read “between the lines” of Euclid and certain other, mostly fragmentary surviving texts, and for which the surviving dialogs of Plato were the single most important source.

Crucial to this process, was the Renaissance revival of the isoperimetric principle, of circular and spherical geometry, and the significance of the five regular solids. Exemplary, in one respect, was the way Pacioli and Leonardo Da Vinci’s “Divina Proportione” in effect “turned Euclid on his head” by emphasizing the primacy of Euclid’s famous Thirteenth Book. Kepler carried the polemic further, developing a first approximation to a true physical geometry from the standpoint of the crucial evidence of the regular solids. This led directly into Fermat’s, Pascal’s and Leibniz’s reworking of such items as Apollonius’ Treatise on conic sections, in the context of a growing focus on the conception of higher-order, multiply-connected manifolds, which evidently lay at the center of the discussion among Plato’s scientific collaborators. Thus, there is a direct line from Archytas and Apollonius, into the work of Gauss and Riemann.

From this standpoint I propose, for those eager to dig into the matter in some detail, the following observations on Archytas’ construction for the doubling of the cube. Although my observations are somewhat technical, and do not aim at a full representation of his discovery, they should help put us on a fruitful track, repairing some of the damage caused by modern misrepresentations.

I assume, in the following, that the reader already has some familiarity with Archytas’ construction, from previous discussions by Bruce Director and others, including the relationship between doubling the cube, and the general problem of constructing two “mean proportionals” between given lengths a and b. (footnote 2)

The Geometry of Physical Events

Note, firstly, that by deriving the solution by means of an intersection of a torus, cone and cylinder, Archytas situates the problem explicitly in the domain of multiply-connected, “polyphonic” circular action. Observe the emphasis on {verbal action}, reflected in the classical account of Archytas’ construction by the geometer Eudemus:

“Let the two given lines be OA [= a] and b; it is required to construct two mean proportionals between a and b. Draw the circle OBA having OA as diameter where OA is the greater [of the two]; and inscribe OB [as a chord on the circle] of length b, and prolong it to meet at C the tangent to the circle at A. … Imagine a half-cylinder which rises perpendicularly on the semicircle OBA, and that on OA is raised a perpendicular semicircle standing on the [base] of the half-cylinder. When this semicircle is moved from A to B, the extremity O of the diameter remaining fixed, it will cut the cylindrical surface in making its movement and will trace on it a certain bold curve. [The latter motion generates a section of a torus–JT.] Then, if OA remains fixed, and if the triangle OCA pivots about OA with a movement opposite to that of the semicircle, it will produce a conical surface by means of the line OC, which, in the course of its movement, will meet the curve drawn on the cylinder at a particular point P….”

What a contrast between the indicated polyphonic conception of geometry, and today’s mind-deadening “set theory”! In Archytas’ construction, P arises not as an intersection of static “point sets”, but as the locus of a physical {event}, whose process of generation involves three (or actually, six) simultaneous degrees of action. Archytas designs the process in such a way that the event, so generated, will possess exactly the required “projective” relationships. In particular, the required “two mean proportionals” are OQ and OP, where Q is the projection of the point P, constructed as above, onto the plane of the original circle OBA.

But, before attempting to derive Archytas’ construction by ourselves, let us look at the simpler case of the relationship between the geometric mean and circular action. This gives us a suitable jump-off point for tackling the problem solved by Archytas.

Harmonic Proportions and Circular Action

Circular rotation provides the simplest, characteristic case for the generation of harmonic proportions among what are ostensibly scalar magnitudes (line segments, for example), as a “projected” result of higher-order action.

Construct a circle with a given diameter OA. A point P, moving along the circle between O and A, gives rise to an array of invariant harmonic proportions, in the following manner.

Connecting P with the endpoints of the diameter, O and A, produces a triangle OPA, whose shape changes with P’s position, but whose angle at P is always a right angle. Now project P perpendicularly to the line OA, calling the point of projection “Q”. Evidently, the triangle OPQ is also a right triangle (right angle at Q), and it shares a common angle at O with the original right triangle OPA. The two triangles are thus constantly similar, throughout P’s motion, and the corresponding ratios of the sides will be equal. In particular, OQ:OP = OP:OA. This amounts to saying, that the length OP is the {geometric mean} between OQ and OA. By inverting the order of construction, we can generate the geometric mean of any two given lengths OQ and OA, using the circle. Just project Q onto the circumference of the circle, to get the point P.
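
Readers who wish to test this invariance numerically may try the following short Python sketch; the coordinates (O at the origin, A at (2, 0)) are my own scaffolding:

    import numpy as np

    t = 0.8                                    # any position of P will do
    P = np.array([1 + np.cos(t), np.sin(t)])   # circle of diameter OA = 2, center (1, 0)
    OQ = P[0]                                  # Q is P's perpendicular projection onto OA
    OP = np.hypot(P[0], P[1])
    OA = 2.0

    print(OQ / OP, OP / OA)                    # equal: OQ:OP = OP:OA
    print(np.isclose(OP * OP, OQ * OA))        # True: OP is the geometric mean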

The geometric mean was also known in ancient Greek times as a “single mean between two extremes”. Doubling a {square}, requires constructing such a single (geometrical) mean between 1 and 2. To double a cube, however, we need {two} means between 1 and 2, or in other words a series of simultaneous proportions of the form: OB:OQ = OQ:OP = OP:OA where OB and OA have lengths 1 and 2, respectively.
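
That such a double mean actually doubles the cube can be verified in a few lines of Python, taking OB = 1 and OA = 2 as above, and writing x for OQ and y for OP:

    x = 2 ** (1 / 3)        # candidate for the first mean, OQ
    y = x * x               # the first half, 1:x = x:y, forces y = x squared
    print(y * y, 2 * x)     # equal: the second half, x:y = y:2, also holds
    print(x ** 3)           # 2.0 (up to rounding): the cube on x has double volume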

Now, since the above construction already generates “half” the required proportion, namely OQ:OP = OP:OA, the following strategy immediately suggests itself:

“Introduce a {second} degree of rotation, generating the “other half” of the double proportion, namely: OB:OQ = OQ:OP.

“Thus, all we need to do is to somehow combine the two circular actions, in order to generate an {event}, at which both conditions are realized {simultaneously}; this will give us the required double mean: OB:OQ = OQ:OP and at the same time OQ:OP = OP:OA.”

Carrying out this strategy does lead to a construction for the double mean, albeit one that is open to certain criticisms. I present it briefly, because it already points in the direction of multiply-connected action.

A Preliminary Thrust

In fact, to get the proportion OB:OQ = OQ:OP in the indicated manner, we need to construct a {second} circle, of diameter OP, such that (i) the point Q (P’s projection on the diameter of the first circle) also lies on the second circle, and (ii) Q projects to a point B on the second circle’s diameter OP, such that the distance OB has the required length 1.

A bit of geometry, shows that requirement (i) is actually satisfied for {all} positions of P on the first circle; requirement (ii), however, is fulfilled only for {one} position of P (and its symmetrical image). How might we generate that locus as a constructible {event}?

Simple, in principle! Imagine, that for each position of P, as P moves along the first circle, a corresponding circle is constructed around OP as diameter. This process produces a continuous {family} of circles, whose diameters OP are changing angle and length as P moves (footnote 3). Now mark off, on each diameter OP, a point B’, such that OB’ has length 1; and let Q’ be the corresponding point on the circumference of the corresponding circle, so that B’ is the projection of Q’ onto OP. (Of the two possible choices for Q’, choose the one lying inside the original, first circle.) The points Q’, so determined, describe a {curve} inside the first circle. Looking at various positions of P, we can easily see that the curve has points on both sides of the first circle’s diameter OA, and must therefore {cross} it at some point.

That crossing is the required event! At that moment, the points Q and Q’ coincide, and both parts of the indicated double proportion hold simultaneously. OQ is the side of the cube, whose volume is twice that of the cube with unit length.
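
A short numerical sketch in Python can locate this event. Note that this merely {checks} the result pointwise, and is no substitute for the continuous generation discussed in the objection below; the parametrization (O at the origin, A at (2, 0)) is my own:

    import numpy as np

    def q_prime(t):
        P = np.array([1 + np.cos(t), np.sin(t)])  # P on the first circle (OA = 2)
        OP = np.hypot(P[0], P[1])
        if OP <= 1:                               # B' must fall between O and P
            return None
        u = P / OP                                # unit vector along OP; B' = u
        nrm = np.array([u[1], -u[0]])             # a unit normal to OP
        h = np.sqrt(OP - 1)                       # altitude: h^2 = OB' x (OP - OB')
        for s in (1.0, -1.0):                     # the two candidates for Q'
            Q = u + s * h * nrm
            if (Q[0] - 1) ** 2 + Q[1] ** 2 < 1:   # keep the one inside the first circle
                return Q
        return None

    ts = [t for t in np.linspace(0.05, 3.1, 20000) if q_prime(t) is not None]
    t_event = min(ts, key=lambda t: abs(q_prime(t)[1]))  # Q' curve crosses OA here
    print(q_prime(t_event)[0], 2 ** (1 / 3))             # both ~1.25992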

Now one might, with some justification, object, that no method was presented above, for how to actually {draw} the curve defined as the locus of points Q’. It is obviously not enough to simply demand: “Mark, on each one of the infinite family of circles, a corresponding point Q'”. For, if we were to begin to mark circles and points one at a time, we would never have anything more than a discrete set, and would never arrive at a continuous curve. (footnote 4)

On the other hand, it is quite possible, with a bit of ingenuity, to design a relatively simple {physical mechanism} that traces the required curve as a product of the motion of P along the circumference of the original circle. The resulting method is akin to the tactic of Nicomedes, who used a mechanically-generated curve called the {conchoid} to double a cube.

Back to Archytas

From this standpoint we may now better appreciate the singular breakthrough of Archytas, who went far beyond the above “ad hoc” methods, to discover a higher-order approach to the problem which anticipates Gauss’ 1799 grounding of the complex domain by over two millennia!

By applying a new degree of rotation to the first circle, to generate a {torus}, Archytas takes us, by implication, into an {entirely new universe}. Instead of trying to build the solution “from the bottom up”, as before, we can now proceed more “from the top down”.

The torus in question, is obtained by rotating the original circle first into the vertical plane (with O fixed) and then rotating the resulting circle around the vertical axis through O. For any point P on the torus, the vertical cross section through the axis of the torus, is a circle of diameter OA (of length 2); if Q denotes P’s projection onto the horizontal diameter of that circle, the proportion OQ:OP = OP:OA will hold — now as an invariant relationship for the {entire surface} of the torus.
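
This invariance is easy to verify numerically. In the following Python sketch the torus is parametrized by its two rotation angles, with O at the origin and the vertical axis as the axis of rotation (my coordinates, as before):

    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(3):
        t, phi = rng.uniform(0, 2 * np.pi, 2)   # any point of the torus surface
        P = np.array([(1 + np.cos(t)) * np.cos(phi),
                      (1 + np.cos(t)) * np.sin(phi),
                      np.sin(t)])
        OQ = np.hypot(P[0], P[1])               # Q = P's projection on the base plane
        OP = np.linalg.norm(P)
        print(np.isclose(OQ / OP, OP / 2.0))    # True: OQ:OP = OP:OA everywhere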

Now it is easy to introduce additional degrees of circular action, generating additional harmonic relations. If, for example, we cut the torus by a vertical cylinder which contains the vertical axis (and the point O), then any point P, lying on the intersection of the two surfaces, automatically belongs to {two} circles: (i) the vertical section of the torus through P, as already described, giving rise to the relation OQ:OP = OP:OA, and (ii) the vertical section of the cylinder through P. In the vertical projection of that circular section onto the horizontal plane, P projects to the already-mentioned point Q. Lying on the projected circle, Q generates a {second} set of harmonic relations, of the form: OR:OQ = OQ:OD, where OD is the diameter of the cylinder (and the projected circle) and R is Q’s “lateral” projection onto the diameter OD. So far the length of OD (the diameter of the vertical cylinder) is variable; we can choose any value we want. Archytas chooses it to be equal to OA, in which case the projected circle coincides with the original one. But in principle many other choices would be possible.

In any case, we have room for introducing still a {third} principle. Remember, that our immediate object is to generate an event at which, in addition to the invariant relation OQ:OP = OP:OA, a second relation OB:OQ = OQ:OP, or 1:OQ = OQ:OP (since OB = 1) holds. Compare this with the relation we just generated using the cylinder: OR:OQ = OQ:OA.

A bit of reflection shows that the latter relationship is equivalent to: 1:OQ = OQ:(OA x OR). For the relation OR:OQ = OQ:OA means that OQ x OQ = OA x OR, which is precisely the statement that 1:OQ = OQ:(OA x OR).

Thus, to get the relationship we are looking for, namely 1:OQ = OQ:OP, all we need to do, is generate an event, where OA x OR = OP. Since OA has length 2, this amounts to saying, that OR should be 1/2 the length of OP.

How are OP and OR related? Very simply, as one can see: R is the direct, perpendicular projection of P onto the axis OA, i.e. the point at which the vertical plane through P, drawn perpendicular to the axis OA, intersects that axis. Taken by itself (leaving aside the other constraints on P), the requirement that OR be equal to 1/2 OP, is equivalent to saying, that P lies on a certain {cone} with apex O and axis OA. The required cone can easily be constructed; this, indeed, is the preliminary step which Eudemus describes.
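
Putting the three surfaces into coordinates (my own scaffolding, not Archytas’: O at the origin, OA along the x-axis, the torus’ axis vertical through O), a short Python computation confirms that the event delivers OQ as the side of the doubled cube:

    import numpy as np

    def lifted(theta):
        # a point of the vertical cylinder (x-1)^2 + y^2 = 1, lifted onto the
        # torus (x^2+y^2+z^2)^2 = 4(x^2+y^2), whose vertical sections through
        # the axis are circles of diameter OA = 2
        x, y = 1 + np.cos(theta), np.sin(theta)
        s = x * x + y * y                  # s = OQ^2, Q = P's projection on the base
        z = np.sqrt(max(2.0 * np.sqrt(s) - s, 0.0))
        return x, y, z, s

    def cone_gap(theta):
        # Archytas' cone demands OR = OP/2, i.e. x^2 + y^2 + z^2 = 4 x^2
        x, y, z, s = lifted(theta)
        return (x * x + y * y + z * z) - 4.0 * x * x

    lo, hi = 0.5, 3.0                      # cone_gap changes sign on this interval
    for _ in range(60):                    # bisect for the triple-intersection event P
        mid = 0.5 * (lo + hi)
        if cone_gap(lo) * cone_gap(mid) <= 0.0:
            hi = mid
        else:
            lo = mid

    x, y, z, s = lifted(0.5 * (lo + hi))
    print(np.sqrt(s), 2 ** (1 / 3))                  # OQ ~ 1.25992, side of the doubled cube
    print(np.sqrt(x*x + y*y + z*z), 2 ** (2 / 3))    # OP ~ 1.58740, the second mean

The printed values reproduce the double mean 1:OQ = OQ:OP = OP:2, as required.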

By this road–admittedly a bit bumpy in places–we arrive at Archytas’ construction. This time not to verify it, but to derive it by ourselves.

NOTES:

1. See www.history.mcs.standrews.ac.uk/history/HistTopics/Doubling_the_cube

[Try http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Doubling_the_cube.html]

2. Briefly: by “two mean proportionals” are understood two magnitudes x and y, between given magnitudes a and b (a assumed larger than b), such that b:x = x:y = y:a. Doubling the cube corresponds to the special case a = 2 and b = 1. The first of the two mean proportionals, x, corresponds to the side of a cube having double the volume of the unit cube. The second mean, y, corresponds to the area of a face of the new cube.

3. Alert readers will note here the traces of {conical action}, which becomes more explicit in Archytas, and finally emerges in full clarity in Gauss’ complex domain.

4. For related reasons, mathematics as generally conceived cannot represent true continuity, but at best describe certain {results} of continuous action. Only the direction of development of mathematics, set forth by Leibniz in his original conception of the calculus, and continued by Riemann, provides a pathway of development of mathematics in the direction of ever more adequately representing the reality of continuous action in the Universe. The case of the catenary is exemplary.