Transfinite Principle of Light, Part I: Prologue

by Jonathan Tennenbaum

Last week my esteemed colleague Bruce Director poked into a real hornets’ nest, when he asked: What makes people so susceptible to the kinds of frauds now perpetrated routinely by the mass media? Is there something {sinister} involved, a vulnerability inside the minds of our fellow-citizens, that leads them to desire a world {uncomplicated} by the primacy of {nonlinear curvature in the small}?

What, {sinister}? You surely don’t mean the ordinary, simple folk, do you? The poor innocent people who are being lied to, abused, ripped off, tormented, destroyed by the oligarchy? The ones who are “just trying to get along and raise their families?” The “noble savages” of modern times, those honest, unassuming folk who nobly desire nothing more than to eat and sleep and watch their favorite TV sports, undisturbed by the world’s problems — the which, after all, they did not create? Aren’t they so homely and nice? Don’t they have legitimate grievances? Their lives are dull, boring, oppressive, even unbearable. And yet if you try to organize them, if you try to {change} them, you find they can become {very unpleasant}, very nasty indeed! Beneath their anarchistic, individualist exteriors, they are often pathologically, fanatically attached to their identity as “simple-minded, ordinary folk.” Their minds seem to repel the effort at thinking outside the tight circles of so-called “practical life.”

“Explain it in terms I can understand.” “Give me the bottom line.” “Don’t make things complicated.” “Don’t bother me with history and all that other fancy stuff.” “I know what you are saying. But don’t you realize I have to make a living?”

And yet, after hundreds of millennia of human development, can there be any excuse to remain “simple folk”? To be ignorant of the work of past thinkers, to be indifferent to the great drama of history and the fate of entire civilizations, nations and cultures?

A beautiful thing is, that oligarchism is {doomed}. Why doomed? Because oligarchism is implicitly a type of {physics}; and as physics, oligarchism is {demonstrably false}. The demonstration is at the same time proof of the anti-entropic character of our Universe, a Universe which has no more place for inert “hard balls” of Newton’s fancy, than it could long tolerate such abominations as the “sleepy South” where “each person knows his place” and “it’s always been like this and always will be.”

The following series is designed quite literally to cast light on this problem. We shall focus on a celebrated experimental discovery by Ampere’s closest friend and collaborator, Augustin Fresnel, which overthrew once and for all the attempts by Laplace and others to impose Newtonianism on all of natural science. Fresnel demonstrated that the propagation of light, while strictly lawful, is not “simple” at all. Following Huygens and anticipating Ampere’s closely-related demonstration of the so-called “angular force” in electrodynamics, Fresnel showed conclusively that the notion of a straight-line propagation of light breaks down in the “very small” — at the level of definite, irreducible wavelengths of the order of thousandths of a millimeter. In fact, there is no smooth, “straight-line” action anywhere to be found in the propagation of light! Behind the gross appearance of (approximately) straight light-rays, is a multiply-connected, spherically-bounded rotational process which is everywhere dense in singularities. What a wealth of activity, concealed beneath a “simple” exterior!

Fresnel’s demonstrations at the same time became the basis for a revolution in machine-tool design. In anticipation of what we shall rediscover in the following couple of weeks, the reader should ponder the following question, for example: How is it possible, using instruments machined to a precision of, say, millimeters, to carry out precise measurements at scales more than a thousand times smaller? Not in a linear Universe!

By juxtaposing Fresnel’s work to the preceding optical discoveries of Leonardo, Kepler, Fermat and Huygens, we obtain a glimpse of the transfinite nature of physical action — a nature which is incomprehensible to the simple-minded, because it embodies not only already-discovered physical principles, but also those which are yet-to-be-discovered and yet in a sense already “present”. Those principles are not predicates of light as an isolated, supposed “objective” physical entity, but pertain to Man’s relationship with the Universe as a whole.

And so our study may illuminate some secrets of the human mind itself, and suggest joyful means by which “simple folk” might be uplifted from oligarchical darkness.

Transfinite Principle of Light, Part II: The Saga of the “Poisson Spot”

by Jonathan Tennenbaum

We are in Paris, at the highpoint of the oligarchical restoration in Europe, the period leading up to and following the infamous, mass-syphilitic Congress of Vienna. Under the control of Laplace, the educational curriculum of the famous Ecole Polytechnique is being turned upside-down, virtually eliminating the geometrical-experimental method cultivated by Gaspard Monge and Lazare Carnot and emphasizing mathematical formalism in its place. The political campaign to crush what remained of the republican faction at the Ecole Polytechnique reaches its highpoint with the appointment of the royalist Augustin-Louis Cauchy in 1816, but the methodological war had been raging since the early days of the Ecole.

With Napoleon’s rise to power and the ensuing militarization of the Ecole in 1799, Laplace’s power in the Ecole was greatly strengthened. At the same time, Laplace consolidated a system of patronage with which he and his friends could exercise increasing control over the scientific community. An important instrument was created with the Societe d’Arcueil, which was founded in 1803 by Laplace and his friend Berthollet and financed in significant part from the pair’s own private fortunes. Although the Societe d’Arcueil supported some useful scientific work, and its members included Chaptal, Arago, Humboldt and others in addition to Laplace and his immediate collaborators (such as Poisson and Biot), Laplace made it the center of an effort to perfect a neo-Newtonian form of mathematical physics in direct opposition to the tradition of Fermat, Huygens and Leibniz. In contrast to the British followers of Newton, whose efforts were crippled by their own stubborn rejection of Leibniz’ calculus, Laplace and his friends chose a trickier, delphic tactic: use the superior mathematics developed from Leibniz and the Bernoullis, to “make Newtonianism work.”

Poisson, whose appointment to the Ecole Polytechnique had been sponsored by Laplace and Lagrange, worked as a kind of mathematical lackey in support of this program. He was totally unfamiliar with experimental research, and had been judged incompetent as a draftsman in the Ecole Polytechnique. But he possessed considerable virtuosity in mathematics, and there is a famous quote attributed to him: “Life is good for only two things: doing mathematics and teaching it.” An 1840 eulogy of Poisson gives a relevant glimpse of his personality:

“Poisson never wished to occupy himself with two things at the same time; when, in the course of his labors, a research project crossed his mind that did not form any immediate connection with what he was doing at the time, he contented himself with writing a few words in his little wallet. The persons to whom he used to communicate his scientific ideas know that as soon as he had finished one memoir, he passed without interruption to another subject, and that he customarily selected from his wallet the questions with which he should occupy himself.”

In the context of Laplace’s program, Poisson was put to work to elaborate a comprehensive mathematical theory of electricity on the model of Newton’s Principia. Coulomb had already proposed to adapt Newton’s “inverse square law” to the interaction of hypothetical “electrical particles”, adding only the modification, that like charges repel and opposite charges attract — the scheme which is preserved in today’s physics textbooks as “the Coulomb law of electrostatics”. Poisson’s 1812 Memoire on the distribution of electricity in conducting bodies was hailed as a great triumph for Laplace’s program and a model for related efforts in optics.

Indeed, between 1805 and 1815 Laplace, Biot and (in part) Malus created an elaborate mathematical theory of light, based on the notion that light rays are streams of particles that interact with the particles of matter by short-range forces. By suitably modifying Newton’s original “emission theory” of light and applying superior mathematical methods, they were able to “explain” most of the known optical phenomena, including the effect of double refraction which had been the focus of Huygens’ work. In 1817, expecting to soon celebrate the “final triumph” of their neo-Newtonian optics, Laplace and Biot arranged for the physics prize of the French Academy of Sciences to be proposed for the best work on the theme of <diffraction> — the apparent bending of light rays around the edges of obstacles and apertures.

In the meantime, however, Augustin Fresnel, supported by his close friend Ampere, had enriched Huygens’ conception of the propagation of light by the addition of a <new physical principle>. Guided by that principle — which we shall discover in due course — Fresnel reworked Huygens’ envelope construction for the self-propagation of light, taking account of distinct <phases> within each wavelength of propagational action, and the everywhere-dense interaction (“interference”) of different phases at each locus of the propagation process.

In 1818, on the occasion of Fresnel’s defense of his thesis submitted for the Academy prize, a celebrated “show-down” occurred between Fresnel and the Laplacians. Poisson got up to raise a seemingly devastating objection to Fresnel’s construction: If that construction were valid, a <bright spot> would have to appear in the middle of the shadow cast by a spherical or disk-shaped object, when illuminated by a suitable light source. But such a result is completely absurd and unimaginable. Therefore Fresnel’s theory must be wrong!

Soon after the tumultuous meeting, however, one of the judges, Francois Arago, actually did the experiment. And there it was — the “impossible” bright spot in the middle of the shadow! Much to the dismay of Laplace, Biot and Poisson, Fresnel was awarded the prize in the competition. The subsequent work of Fresnel and Ampere sealed the fate of Laplace’s neo-Newtonian program once and for all. The phenomenon confirmed by Arago goes down in history with the name “Poisson’s spot,” like a curse.
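For readers who want to see Arago’s “impossible” result in numbers, here is a minimal modern sketch — emphatically not Fresnel’s own construction, and with all parameters (wavelength, disk radius, distances) chosen hypothetically for illustration. It evaluates the scalar Fresnel integral on the axis behind an opaque disk, with a Gaussian taper standing in for the physical obliquity factor so the integral converges:

```python
import cmath
import math

# Hypothetical parameters (not from the text): green light, a 1 mm-radius
# opaque disk, and a screen 1 m behind it. The Gaussian taper replaces the
# physical obliquity factor so the scalar Fresnel integral converges.
lam = 500e-9            # wavelength (m)
z   = 1.0               # disk-to-screen distance (m)
a   = 1e-3              # disk radius (m)
w   = 5e-3              # taper width (m)
k   = 2 * math.pi / lam

def on_axis_amplitude(r_min, r_max=15e-3, n=150_000):
    """On-axis Fresnel amplitude from the annular aperture r_min..r_max."""
    dr = (r_max - r_min) / n
    total = 0j
    for i in range(n + 1):
        rho = r_min + i * dr
        weight = 0.5 if i in (0, n) else 1.0   # trapezoid rule endpoints
        total += weight * cmath.exp(1j * k * rho**2 / (2 * z) - (rho / w) ** 2) * rho
    return total * dr

open_beam   = on_axis_amplitude(0.0)   # no obstacle at all
behind_disk = on_axis_amplitude(a)     # opaque disk of radius a in the way
ratio = abs(behind_disk) / abs(open_beam)
print(f"on-axis amplitude, disk vs. open beam: {ratio:.3f}")
```

With these (arbitrary) numbers the ratio comes out close to 1: the very center of the geometric shadow is nearly as bright as the unobstructed beam — precisely the prediction Poisson ridiculed as absurd.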

We shall work through the essentials of these matters in subsequent pedagogical discussions and demonstrations. But before proceeding further it is necessary to insist on some deeper points, which some may find uncomfortable or even shocking. Without attending to those deeper matters, most readers are bound to misunderstand everything we have said and intend to say.

It is difficult or even virtually impossible, in today’s dominant culture, to relive a scientific discovery, without first clearing away the cognitive obstacles reflected in the tendency to reject, or run away from, the essential <subjectivity> of science. Accordingly, as a “cognitive IQ test” in the spirit of Lyn’s recent provocations on economics, challenge yourself with the following interconnected questions:

1) Identify the devastating, fundamental fallacies behind the following, typical textbook account:

“There were two different opinions about the nature of light: the particle theory and wave theory. Fresnel and others carried out experiments which proved that the particle theory was wrong and the wave theory was right.”

2) Asked to explain the meaning of “hypothesis,” a student responds:

“An hypothesis is a kind of guess we make in trying to explain something whose actual cause we do not know.”

Is this your concept? Is it right?

3) What is the difference between what we think of as a property of some object, and a physical principle? Why must a physical principle, insofar as it has any claim to validity, necessarily apply to all processes in the Universe, <without exception>?

If you encounter any difficulty in answering the above, reread Lyn’s “Project A.”

Next week: Leonardo and the paradox of the “camera obscura.”

Transfinite Principle of Light, Part III: The Phantom of Linearity

By Jonathan Tennenbaum

Look at Leonardo’s drawings of rays of light reflected in a curved mirror. Leonardo draws the incoming rays as parallel straight lines. Reflected off the mirror, the rays form an envelope — a curve that Leibniz’s friend Tschirnhaus later called a {caustic}. Looking at the drawing, we might think to ourselves: “Here Leonardo has shown how the complex is generated by the simple. See how this beautiful curve, the caustic, is created from the simple, straight-line rays, which are the natural, the elementary form of light propagation.”

But, stop to think: Did Leonardo really think that way? Did he believe that straight-line action is primary, and curved forms are secondary? Was Leonardo a Newtonian?

Or have we gotten it backwards? That Leonardo saw, in the production of the caustic, a characteristic manifestation of the {fundamentally non-linear, high-order process} underlying light, and which generates the appearance of straight-line rays as a mere {effect}?

Looking more carefully at Leonardo’s manuscripts with our mind’s eye wide open, the evidence jumps out at us. Indeed, Leonardo even states it explicitly: The propagation of water waves, sound and light alike are based on a {common principle of action}; that principle is not straight-line action, but curved, (to a first approximation) circular action!

Leonardo implies, in fact — as he demonstrated for the case of water waves — that the {action} which generates the outward propagation of light from a source, is {not} basically directed in the “forward” direction, i.e., outward from the source, but essentially perpendicularly, {transverse} to the apparent direction of propagation!

Now let’s turn to the contrary, so-called “emission theory” which is commonly attributed to Newton (although much older), and which he elaborated in Book III of his famous “Opticks”. Newton writes, for example: “Are not the rays of light (streams of) very small bodies emitted from luminous substances? For such bodies will pass through uniform media in straight lines without bending into the shadow, which is the nature of rays of light.” Newton adds many other arguments, which I shall not reproduce here.

Doesn’t this picture indeed seem very agreeable to our naive imagination? Indeed, someone might plausibly argue that: 1) since light evidently moves outward from the source in straight lines and 2) since no motion is possible without some material bodies which are moving, therefore 3) the light rays must consist either of material particles (photons?) or maybe a continuous fluid emitted from the source and moving outward from it.

And how to account for the {bending} or change of direction (refraction) of light rays, when they pass from one medium to another (e.g., from air to water) or through a medium of changing density? Simple! Since the “natural” or elementary motion is straight-line motion, the bending of the trajectories of the particles forming the rays, must be due to some “forces”, which are pulling the rays (or the particles making up the rays) out of that straight motion, into curved trajectories. What could be more self-evident than that?

Newton actually provides a program for elaborating this emission theory more and more: By studying the laws of refraction of light rays, and other aspects of their behavior in passing through various materials, we should {deduce}, by mathematics, the microscopic forces which must be acting upon the light particles in interaction with the medium. And then from those “force laws”, once established, we will in turn be able to calculate the behavior of light rays under arbitrary conditions.

Newton puts his own work on gravitation and planetary motion forward as the model for this, stating, in the famous “General Scholium” from Philosophiae Naturalis Principia Mathematica:

“Hitherto we have explained the phenomena of the heavens and of our sea by the power of gravity, but we have not yet assigned the cause of this power…. I have not been able to discover the cause of those properties of gravity from phenomena, and I frame no hypotheses; for whatever is not deduced from the phenomena is to be called a hypothesis, and hypotheses, whether metaphysical or physical, whether occult qualities or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena and afterwards rendered general by induction. Thus it was that the impenetrability, the mobility and the impulsive force of bodies, and the laws of motion and of gravitation, were discovered. And to us it is enough that gravity does really exist and act according to the laws that we have explained…”

This same argument was repeated by the Marquis de Laplace, the self-proclaimed high priest of Napoleon’s “orthodox Newtonianism”, in an 1815 attack on the early work of Fresnel. Laplace said that in view of the “success” of Newton’s emission theory, he greatly regretted that anyone would presume

“to substitute for it another, purely hypothetical one, and which, so to speak, can be manipulated at will: that of Huygens’ ondulations. One must limit oneself to repeating and varying experiments and deducing laws from them, that is, coordinating facts, and avoid any undemonstrated hypothesis.”

But did you pick up the “big lie” which Newton told in the passage cited above? Don’t let him get away with it!

Newton claims, among other things, that his law of gravitation was “deduced from the phenomena”, without the use of hypothesis. That is a bald-faced lie. As even Laplace admits, Newton obtained his “force law” by inverting Kepler’s construction for the elliptical orbital motion of the planets. But Kepler’s construction was by no means deduced from the visible motion of the planets; indeed, what could anyone “deduce” from the wild, tangled mass of looping motions of the planets, as seen from the Earth? Rather, Kepler arrived at his results step-by-step through a series of {creative hypotheses} — by cognition! — as documented by Kepler himself in his works, from the Mysterium Cosmographicum through to the New Astronomy. Even Newton’s so-called force law is no deduction from Kepler’s work, but was obtained only by imposing a whole array of {arbitrary assumptions} which are neither in Kepler, nor “deduced from the phenomena”, nor otherwise demonstrated in any way. Such, for example, are the hypotheses that space has the form of a simple Cartesian manifold, and that straight-line action is elementary.

Now, step back from the specifics of this “big lie” and ask yourself: Why are so many people, even scientists, fooled so much of the time? Could it be, because the supposed elementarity of straight-line action is merely a lawfully-generated, externalized {image} or artifact of a defective form of mental processes?

Exclude {cognition} from mental processes. What is the typical form of action in the “mental vacuum” so created? The characteristic of deduction, as the “elementary” form of non-cognitive reasoning, is that no cognitive considerations are permitted to disturb the “perfect vacuum” in which the deductive chains of logical premises and conclusions are unfolded. No “leakage” of reality from outside the system, which could call its basic assumptions into question, is permitted to interfere with the growth of the theorem-lattice.

Now look, from this standpoint, at what Riemann had to say about Newton’s famous “First Law of Motion”:

“I find the distinction that Newton makes, between laws of motion, axioms and hypotheses, untenable. The law of inertia is an hypothesis: If a material point were all alone in the Universe, and if it were moving with a certain velocity, then it would keep moving with the same unchanged velocity”.

Now here comes a simple-minded fellow, and says to himself: “Well, isn’t that First Law self-evident? After all, {if there were nothing around in the Universe} to interfere with the particle’s motion, then nothing would change that motion, either in direction or in speed. Since there would be no reason for it to bend in one direction rather than another, or to slow down or speed up, the particle would keep moving at a constant velocity in a straight line.” So, in particular, straight-line motion is elementary!

What happened? With his logical premise of a Universe consisting of nothing but a single particle alone in an infinitely extended empty space, our simple-minded fellow has thrown cognition (and the real Universe!) out of the window. He has put himself into a wildly arbitrary phantasy-world; and now proposes, as Newton did, to make that phantasy-world into his yardstick for the real Universe!

If we dig a bit deeper, our fellow might come up with another logical idea: the simple precedes the complex, so to understand the complicated real Universe, we have to break it down into simple parts, into simple hypothetical situations. Then we can deduce the complex situations from the simpler ones. But what if the supposed “simple parts” don’t exist and could not exist in and of themselves? What if the only “simple” existence were the indivisible unity of the Universe as a whole, a Universe graspable only by cognition? But cognition is not simple in the way our vacuum-headed fellow imagines rational thinking to be.

From this it should be obvious, that the issue fought out by Fresnel and Ampere against Laplace, by Kepler against Galileo, by Leibniz against Newton and so forth, is not one of this or that theory or doctrine. It is emphatically not the so-called wave theory versus the particle theory. The issue, as emphasized in Plato’s Parmenides, is the human mind.

Ask yourself: what is the transverse nature of the action, upon which the physical growth of any economy is based?

Transfinite Principle of Light, Part IV: Least Time

by Jonathan Tennenbaum

In last week’s pedagogical discussion, Phil Rubenstein provoked us with a beautiful glimpse into Leibniz’s notion of physical space-time, observing that:

“[T]he totality of space is altered when an action introduces something incompatible to the previous ordering, and that is what introduces real time as changed space. Thus, all of the space-time is truly changed and the primacy of facts is altered.”

Most of us have been trained or otherwise induced to think of events in terms of an implicitly fixed ordering of the Universe. When an event occurs, we too often only ask ourselves: “Where does this event fit into the scheme of the world as I know it?” or “What category does it belong to?” Whereas Phil (following Leibniz) wanted to get us to look out for the anomalous characteristics of an event, and to ask ourselves, instead: “What is the change in ordering of the world, which this anomaly implies?” Or even better: “How does this event open up a potential flank, by which I might change the current ordering of the world into a better one?”

As Phil also pointed out, the two modes of thought are associated with two very different notions of causality. In the first, we put our noses close to the ground and follow events one at a time, in chains of “cause-and-effect.” So, A causes B, B causes C, C causes D and so on like a chain of dominoes, each falling over and pushing the next one in turn. If someone asks, “Why did event X occur?”, our answer will be: “Because W occurred, and W caused X.” And W occurred because of V, V because of U and so forth ad infinitum (or until we find the guy who pushed over the first domino, Aristotle’s “Prime Mover”!). But the platonic mind would rather ask: “Who arranged the dominoes that way, so that the trajectory of apparent cause-and-effect took that particular form?”

When we raise ourselves to the second, higher level, we look for those crucial actions and events, that define the {total geometry} (i.e. ordering) within which entire ranges of other events occur, take a certain form, and tend toward a pre-determinable array of outcomes. This latter standpoint is congruent with Kepler’s conception of a planetary orbit and brings us to Leibniz’ notion of {sufficient reason}. So, referring in his “Principles of Nature” to the higher (transfinite) ordering of the Universe as a whole, Leibniz said:

“The sufficient reason for the Universe cannot be found in the sequence of contingent events…. Since the present motion of matter comes from the preceding, and that one from an earlier still, one never comes closer to the answer, however far one goes, because the question always remains. Thus it is necessary that the sufficient reason, which does not require another reason, {lies outside this series of contingent events}, and this must be sought in a substance which is the cause, and is a necessary being … this last reason of things is God.”

A beautiful example of the two conflicting outlooks is provided by Pierre Fermat’s discovery of the Principle of Least Time on the basis of what he called “my method of maxima and minima.” [fn1] This example is all the more notable, as Leibniz himself used it repeatedly in his polemics against Descartes and the Cartesians.

To set the stage, I should report that around 1621 the Dutch astronomer Snell (who also made major contributions to geodesy) studied the bending of light rays when passing from one medium (for example, air) into another medium (say, water). In each of the two media, insofar as they are relatively homogeneous, the propagation of light appears to occur along straight-line pathways. But it had long been recognized, that light entering from air into water at a certain angle, propagates at a different, much steeper angle inside the water. Now Snell studied the functional relationship between the angle (call it X) which the ray makes to the vertical {before} entering the water, and the angle (Y) which is formed with the vertical by the direction of the ray {after} it has passed into the water. He discovered a very simple relationship, which holds quite precisely within certain limits: namely that the {sines} of the two angles are {proportional} to each other. To make these relationships clear, draw the following “classical” diagram, which Leibniz, Fermat et al. employed in their discussions of these matters.

Let a line segment AB represent the surface of the water and let point C represent the locus on AB where the ray of light enters the water. Draw a circle around C. Mark by “D” the point on the upper half of the circle (the part in the air), at which the light ray enters the circle on the way to C, and mark by “E” the locus at which the ray, now propagating in water, crosses the lower half of the circle. The line segments DC and CE represent the directions of the light ray before and after passing from air into water. Now draw the vertical line L through C. The angle between DC and L, is what we called X above, and the angle between CE and L is Y.

Finally, project D and E horizontally (i.e. perpendicularly) onto L, defining two points F and G which are the projections of D and E, respectively, onto the vertical L. (DF and EG are proportional to the {sines} of the angles X and Y.)

Now imagine we vary the angle at which the ray enters the water, while keeping the entry point C fixed. In other words, D moves along the upper part of the circle and the angle X changes correspondingly. What happens to angle Y and the position of E?

Snell found that in the course of these changes, {the ratio of DF to EG remains constant}. For the case of air and water, it turns out that DF:EG = (approximately) 1.33 : 1. From this, we can determine the angle Y corresponding to any given angle X, by a simple geometrical construction.
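In modern notation, the constancy of the ratio DF : EG amounts to sin X = 1.33 × sin Y for light passing from air into water. The “simple geometrical construction” can thus be mimicked in a few lines of code (illustrative only; the angles chosen are arbitrary examples):

```python
import math

# Snell's constant ratio DF : EG, i.e. sin(X) : sin(Y), for air into water,
# as reported above. X and Y are both measured from the vertical.
RATIO = 1.33

def refracted_angle(x_deg):
    """Angle Y (degrees from the vertical) inside the water, given X in air."""
    return math.degrees(math.asin(math.sin(math.radians(x_deg)) / RATIO))

for x in (10, 30, 60):
    print(f"X = {x:2d} deg  ->  Y = {refracted_angle(x):5.2f} deg")
```

Note that the refracted angle Y always comes out smaller than X: the ray becomes “steeper” inside the water, just as described above.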

But what is the explanation of this relationship, its “sufficient reason”? Leibniz himself was convinced that Snell did not find his law by mere empirical trial-and-error, but that he worked from an {hypothesis} derived from the work of the ancient Greek scientists who had discovered an analogous (but simpler) law for the {reflection} of light over 1500 years earlier. While Snell’s original train of thought seems to have been lost, Rene Descartes later (1637) restated the same law, which he claimed to have discovered by himself, and offered an explanation or “proof” based on his own special notion of physics and the nature of light.

Descartes’ argument, as published for example in his “Dioptrique,” is somewhat muddled and difficult to present in a few words. Essentially, Descartes likened the motion of light to that of a small ball or other object which encounters greater or lesser resistance along the path of its motion. The circumstance, that the light ray is bent toward the vertical direction on passing into the water — i.e. becomes “steeper” in its passage through the water — Descartes took as evidence that the {light moves more easily through the water} and is less retarded in its motion, than in the air. At the point of transition into the “easier” medium of the water, Descartes thought, it is as if the ball (the light) would pick up an extra “kick”, continuing at a steeper direction.

Now, disregarding the vagueness and confusing nature of Descartes’ argument, his thinking is clearly trapped in what we referred to above as the first mode: namely to follow a process from one step to the next within a fixed notion of ordering, which is (in Descartes’ case) essentially the naive housewives’ “common sense” notion of the motion of material bodies.

Now in closing, let us listen to what Fermat has to say, in his “Method for the Research of the Maximum and Minimum”:

“The learned Descartes proposed a law for refractions which is, as he says, in accordance with experience; but in order to demonstrate it he employed a postulate, absolutely indispensable to his reasoning, namely that the propagation of light takes place more easily and faster in more dense media than in more rarefied media; however, this postulate seems contrary to natural light.”

[“Natural light” was a common expression for “Reason”. Fermat is poking fun at Descartes. He continues:]

“While seeking to establish the true law of refraction on the basis of the contrary principle — namely that the movement of light is easier and faster in the less dense medium than in the more dense one — we arrived at exactly the law that Descartes had announced. Whether it is possible to arrive at the same truth by two absolutely opposing methods, that is a question we will leave to those geometers to consider, who are subtle enough to resolve it rigorously; for, without entering into vain discussions, it is enough for us to have certain possession of the truth, and we consider that preferable to a further continuation of useless and illusory quarrels.

“Our demonstration is based on the single postulate, that Nature operates by the most easy and convenient methods and pathways — as it is in this way that we think the postulate should be stated, and not, as usually is done, by saying that Nature always operates by the shortest lines … We do not look for the shortest spaces or lines, but rather those that can be traversed in the easiest way, most conveniently and in the shortest time.”
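Fermat’s postulate lends itself to a simple numerical illustration (coordinates and speeds here are hypothetical, chosen only for the sketch): take a source A in the air and a destination B in the water, let light travel more slowly in the water, and search the surface for the entry point giving the least total travel time. The minimizing path reproduces exactly Snell’s sine ratio:

```python
import math

# Illustrative setup: light from A (in air) to B (in water), surface at y = 0.
# Speeds are relative; light is slower in water by the factor 1.33.
v_air, v_water = 1.0, 1.0 / 1.33
A = (0.0, 1.0)    # source, 1 unit above the surface
B = (1.0, -1.0)   # destination, 1 unit below the surface

def travel_time(x):
    """Total travel time via the entry point (x, 0) on the surface."""
    t_air   = math.hypot(x - A[0], A[1]) / v_air
    t_water = math.hypot(B[0] - x, B[1]) / v_water
    return t_air + t_water

# crude golden-section search for the least-time entry point
lo, hi = 0.0, 1.0
for _ in range(200):
    m1 = hi - (hi - lo) * 0.618
    m2 = lo + (hi - lo) * 0.618
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x_best = (lo + hi) / 2

sin_X = (x_best - A[0]) / math.hypot(x_best - A[0], A[1])   # angle in air
sin_Y = (B[0] - x_best) / math.hypot(B[0] - x_best, B[1])   # angle in water
print(f"sin X / sin Y = {sin_X / sin_Y:.3f}  (speed ratio = {v_air / v_water:.3f})")
```

The ratio of sines at the least-time point equals the ratio of speeds — Snell’s law emerges as a consequence of Fermat’s single postulate, with light moving slower, not faster, in the denser medium.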

Next week we shall look more closely, through the eyes of Leibniz, at Fermat’s discovery and the error of Descartes.

————————————————————

1. Here is a deliberately challenging quote from a 1636 letter by Fermat to Roberval, in which he boasts about the scope of his method:

“On the subject of the method of maxima and minima … you have not seen the most beautiful applications; because I make it work by diversifying it a bit. Firstly, in order to invent propositions similar to that of the (parabolic) conoid which I told you about last; 2) In order to find the tangents of curved lines…; 3) To find the centers of gravity for all sorts of figures…; 4) To solve number theoretic problems … it is in this… that I found an infinity of numbers which do the same thing as 220 and 284, namely that the sum of the divisors of the first equals the second and the sum of the divisors of the second equals the first; and if you want another example to give you a taste of the question, take 17296 and 18416. I am sure you will admit that this question and those of the same sort are very difficult…. And so you see four kinds of questions which my method embraces, which you probably didn’t know about.”
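Fermat’s claim about these pairs can be checked directly: two numbers are “amicable,” in his sense, when the sum of the proper divisors of each equals the other. A minimal sketch in Python (the function name is my own, not Fermat’s):

```python
def aliquot_sum(n):
    """Sum of the proper divisors of n (all divisors less than n)."""
    total = 1  # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:      # avoid counting a square root twice
                total += n // d
        d += 1
    return total

# The classical pair 220/284, and the pair Fermat offers Roberval:
for a, b in ((220, 284), (17296, 18416)):
    print(a, b, aliquot_sum(a) == b and aliquot_sum(b) == a)
```

Both pairs pass the test: the divisor sums cross exactly as Fermat describes.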

Transfinite Principle of Light, Part V: Time To See the Light

by Bruce Director

Last week, you were introduced to a paradigmatic case of the discovery of a universal principle: Fermat’s principle of “Least Time.” Contrary to textbook-educated commentators, Fermat’s Least Time principle is not a property of light. Rather, it is a characteristic of the Universe, from which light’s properties unfold. The irony is that this universal characteristic of Least Time is discovered in its unfolded form, but only KNOWN as a universal principle. For that reason, it epitomizes the discovery of a principle that corresponds to a change in hypothesis from an n- to an n+1-fold manifold, connected with a corresponding change from an m- to an m+1-fold manifold. Consequently, it deserves your careful attention and study.

To summarize: the Classical Greeks had already discovered a special case of this principle, through the investigation of reflected light (catoptrics)/1. The Greeks found that the angle at which light is reflected from a shiny surface is equal to the angle at which the light strikes that surface. Simply stated, the angle of incidence equals the angle of reflection. The equality of these angles minimizes the length of the path from the source of the light, to the reflecting surface, to the eye. However, this principle is NOT a property of light. It is a manifestation of a universal characteristic: that nature always acts along the shortest path.

The phenomenon of refraction (the change in direction of light when it travels from one medium to another, such as from air to water), appears, at first, to contradict this universal characteristic, as the change in direction at the boundary between the two media, makes the path of the light longer, than were it to continue in the same direction across the boundary.

More than one and one-half millennia after the Classical Greek period, Willebrord Snell showed that when light is refracted, the change in direction is such that the sine of the angle of incidence and the sine of the angle of refraction are always in constant proportion. (See last week’s pedagogical.) The Greek principle of reflection (in which this proportion is one, as equal angles will have equal sines) can thus be seen as a special case, or boundary, of Snell’s more universal principle. Yet the length of the path of the light under refraction is still not the shortest path, as in the case of reflection.

While the details of Snell’s reasoning are not entirely known to us, it has been conjectured that the observed refraction resulted from a change in velocity when light travels through different media./2 Under this idea, it can be shown that the different velocities are in the same proportion as the sines of the angles of incidence and refraction. Or, in other words, Snell’s law of refraction is itself a reflection of a physical principle: that the velocity of light changes when traveling in different media. (In his “Treatise on Light,” Huygens gives a simple and direct geometrical demonstration of this concept, to which the reader is referred.)
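The constancy of Snell’s sine ratio, and its equality with the velocity ratio under the wave picture Huygens describes, can be verified numerically. A sketch, with illustrative speeds (light travels roughly 1.33 times slower in water than in air; the function and variable names are mine):

```python
import math

# Snell's law in the wave picture: sin(i)/sin(r) = v1/v2,
# where v1, v2 are the speeds of light in the two media.
v_air, v_water = 1.0, 1.0 / 1.33   # speeds in arbitrary units

def refraction_angle(incidence_deg, v1, v2):
    """Angle of refraction (degrees) from the angle of incidence."""
    sin_r = math.sin(math.radians(incidence_deg)) * v2 / v1
    return math.degrees(math.asin(sin_r))

for i in (10, 30, 50):
    r = refraction_angle(i, v_air, v_water)
    ratio = math.sin(math.radians(i)) / math.sin(math.radians(r))
    print(f"incidence {i:2d}°  refraction {r:5.2f}°  sine ratio {ratio:.3f}")
```

Whatever the angle of incidence, the printed sine ratio stays fixed at 1.330, the assumed velocity ratio.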

Descartes, believing that light was a stream of particles, adopted the conjecture that such particles would travel faster in denser media. From this, he reformulated Snell’s law and claimed it as his own, a fraud so blatant that even Descartes’ apologists no longer can defend it.

Pierre de Fermat adopted the opposite view, that light traveled slower in denser media. But, much more importantly, Fermat came to this idea, not by conjecturing on the properties of light, as Descartes did, but from the standpoint of a new universal principle that he hypothesized: to wit, that nature always acts according to the least time. That is, that the longer path the light travels when refracted, is actually the path that takes the shortest time. From the standpoint of the earlier Greek discovery of reflection, the universal principle that nature seeks the shortest path in space, has been transformed into the principle of shortest path in space-time. A transformation from a universal hypothesis of n dimensions, to a universal hypothesis of n+1 dimensions. (Hypothesis is used here in the rigorous Socratic terms defined by LaRouche, not the banalized general usage concept more closely equated with the verb “to guess.”)
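Fermat’s hypothesis can be relived numerically: take all possible crossing points on the boundary between the two media, compute the travel time through each, and find the one of least time. The least-time path turns out to satisfy exactly Snell’s sine law. A sketch, with illustrative geometry and speeds of my own choosing:

```python
import math

# Least-time path from A (in the faster medium) to B (in the slower one),
# crossing a flat boundary at y = 0. Speeds and positions are illustrative.
v1, v2 = 1.0, 0.75
A = (0.0, 1.0)    # source, one unit above the boundary
B = (2.0, -1.0)   # destination, one unit below

def travel_time(x):
    """Total travel time via the boundary point (x, 0)."""
    d1 = math.hypot(x - A[0], A[1])
    d2 = math.hypot(B[0] - x, B[1])
    return d1 / v1 + d2 / v2

# travel_time is convex in x, so a simple ternary search finds the minimum.
lo, hi = A[0], B[0]
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

# At the least-time crossing point, Snell's law emerges:
sin_i = (x - A[0]) / math.hypot(x - A[0], A[1])
sin_r = (B[0] - x) / math.hypot(B[0] - x, B[1])
print(f"sin(i)/sin(r) = {sin_i / sin_r:.4f}   v1/v2 = {v1 / v2:.4f}")
```

No sine law is assumed anywhere in the code; only the least-time postulate is applied, and the constant proportion of the sines falls out of it, just as Fermat found.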

Or, in the words of Fermat, quoted in last week’s pedagogical discussion:

“Our demonstration is based on the single postulate, that Nature operates by the most easy and convenient methods and pathways — as it is in this way that we think the postulate should be stated, and not, as usually is done, by saying that Nature always operates by the shortest lines…. We do not look for the shortest spaces or lines, but rather those that can be traversed in the easiest way, most conveniently and in the shortest time.”

Leibniz in his “Discourse on Metaphysics,” addresses this question this way:

“The method of final causes is more easy and can often be used to divine important and useful truths, which one would have to seek for a long time by a more physical approach, for which anatomy provides major examples. Thus I believe that Snell, who is the first discoverer of the laws of refraction, would have had to spend a long time finding them, if he had started by first trying to find out how light is formed. Rather, he apparently followed the method which the ancients used in catoptrics, which is based on final causes. By looking for the easiest pathway to pass light from a given point to another given point by reflection on a given plane (supposing this is the intention of Nature), the ancients found the equality of the angles of incidence and reflection, as one can see from a little treatise of Heliodorus of Larissa, and elsewhere. Which is what Snell, as I believe, and after him (although without knowing anything of him) Fermat applied very ingeniously to refraction…. And the proof which Descartes wanted to give for the same theorem, by the method of efficient causes, would need much improvement to be as good. At least, there is reason to suppose that Descartes would never have discovered the law in that way, unless he had learned something in Holland about Snell’s discovery.”

“Descartes thought the opposite of what we think concerning the resistance of various media (to the propagation of light). That is why the very illustrious Spleissius, a man well versed in these matters, has no doubt that Descartes, when he was in Holland, saw Snell’s theorem; and in fact he remarks that Descartes had the habit of omitting mention of authors, and takes as an example the vortices in the Universe which Giordano Bruno and Johannes Kepler pointed to, in such a way that only the word itself was missing in their work. It happens that Descartes, in order to prove his theorem by his own efforts… From which Fermat correctly concluded that Descartes had not given the real reason for his theorem.”

The Cartesians, Galileans, and the whole plethora of Aristotelian-Manichean sects squealed with rage at Fermat’s principle of Least Time. How could Fermat say that light sought the shortest time? Why, that would mean that either light would have to have some “intelligence” by which to “decide” whether its choice of path was using up the shortest time, or there would have to be some pre-arranged “track,” like Ptolemy’s solid orbs, that guided the light along the shortest path.

These objections are identical to those raised against Kepler, who demonstrated that the elliptical planetary orbits, rather than uniform circular ones, are the pathways that correspond to the universal space-time characteristic of the solar system. Kepler dethroned Ptolemy’s demi-gods and solid orbs, along with the poly-copulating Olympians, from whom Ptolemy and his fellow Bogomils, drew their authority.

Taking up the defense of Fermat’s principle, Leibniz dealt the decisive blow to the Cartesians:

“…Thus we have reduced to pure Geometry all of the laws which confirm experimentally the behavior of light rays, and have established their calculus on the basis of a unique principle, that you can grasp following a specific causality, but providing you consider appropriately the case in point: indeed, neither can the ray coming from C make a decision [1] about how to arrive, by the easiest way possible, at points E, D, or G, nor is this ray self-moving towards them [2]; on the contrary, the Architect of all things created light in such a way that this most beautiful result is born from its very nature. That is the reason why those who, like Descartes, reject the existence of Final Causes in Physics, commit a very big mistake, to say the least; because aside from revealing the wonders of divine wisdom, such final causes make us discover a very beautiful principle, along with the properties of such things whose intimate nature is not yet that clearly perceived by us, that we can have the power to explain them, and make use of their efficient causes, along with their artifacts, such as the Creator employed them in order to produce their results, and to determine their ends. It must be further understood from this that the meditations of the ancients on such matters are not to be taken lightly, as certain people think nowadays.”

Reflect on that, until next week.

1/ The history of these Greek investigations deserves careful study by us, as its treatment in textbooks is vague and confusing. For pedagogical purposes, and for posterity’s sake, it needs to be pulled together by someone wanting to do a service to humanity.

2/ This is also an area of historical research which is necessary for us to fill out.

Transfinite Principle of Light, Part VI: Passion and Hypothesis

by Jonathan Tennenbaum

There is a tendency for people to misconstrue and banalize ad absurdum the polemic Lyn has developed about the need to change fundamental assumptions. Some think to themselves: “Lyn says that assumptions are bad. So I’ll play it safe. I won’t make any assumptions at all.”

This wimpy attitude, already strong among baby-boomers, is even more pronounced among Generations X and Y. These people have resolved never to commit themselves fully to anything, never to make a strong emotional investment, never to make a decision which might irreversibly change their lives: “No, no I don’t go there” is the motto. Their policy is to “keep all the doors open,” particularly the hind doors through which to escape when the going gets too tough.

Ironically, no behavior demonstrates the influence of hidden ontological assumptions more clearly, than the obsessive, schlemiel-like behavior of people trying to “play it safe,” hiding behind an illusion of “objectivity,” “sticking to the facts,” and “playing according to the rules.” Whereas today the very survival of the world depends on {strong hypotheses} — hypotheses discovered, transmitted, and executed with the most impassioned quality of moral commitment.

So, Schiller said, he who would not give up his life, will not gain it. It is impossible to make or relive a scientific or equivalent quality of creative discovery without risk, without sacrificing some cherished thing inside oneself and even confronting something akin to the fear of death.

As an example, let us listen to Brahms’ student Gustav Jenner, as he describes how Brahms forced him through the agonizing process of knowing, as opposed to superficial learning. Jenner recounts his first encounter with Brahms. Personally, Brahms was very kind and friendly to the budding young composer. But when it came to criticizing the compositions Jenner had put in front of him — naturally the ones Jenner was most proud of — Brahms’ remarks were devastating:

“After it was all over, I felt like someone who, after wandering long on a false path, thinks his goal is near, but suddenly realizes his error and now sees his goal vanish into the distance…. Despite the mercilessly strict judgement which my labors elicited from him, not a single ironical or even an angry word fell from his lips…. He simply demonstrated to me, relentlessly and without brooking any contradiction, that I didn’t know how to do anything … After a stringent examination concerning what I had been doing with my life up to then, Brahms said: `You see, in music you have not yet learned anything in an orderly fashion; for, everything you’ve been telling me about the theory of harmony, your attempts to compose, instrumentation, and so forth, I count as nothing.'”

That was only the beginning. After Jenner had moved to Vienna to study under Brahms, the old master became still more strict and rigorous with him than before.

“I never again heard from Brahms an encouraging word — let alone praise — about my works…. It took a long time before I truly learned how to work … Only a full year later did Brahms say to me on one occasion, `You will never hear a word of praise from me; if you cannot tolerate that, then everything within you is only of value by virtue of the fact that it will fail.'”

But what did Brahms teach Jenner? For that I advise everyone to read all of Jenner’s short book. Here I just want to quote from one passage, especially relevant to the point at hand:

“I learned the most not by him pointing out my mistakes per se, but by his revealing to me how they had come about in the first place…. From his experience he told me: `Whenever ideas come to you, go take a walk; then you’ll find that what you had thought was a finished idea, was only the beginnings of one.’ He would repeatedly seek to sharpen my distrust in my own ideas. I have often had the experience that precisely such thoughts which become lodged (in the mind) like an idée fixe, pose a natural barrier to creativity, because one has fallen in love with them and, instead of mastering them, has become their slave. `Pens exist not only to write, but also to cross things out,’ said Brahms, `but be careful, because once something has been set down, it is hard to take it away again. But once you realize that, good though it (a passage) may be in itself, it is not appropriate here (at a given place), don’t mull it over any longer, but simply cross it out.’ And how often do we not try to save a passage, only to ruin the whole!… When Brahms, with his impartial criticism, reproached me for precisely those passages, I felt surprised and hurt at the beginning, because these had been my favorite passages — until I saw that I hadn’t found the disrupting element because I had unconsciously proceeded from the idea that this passage must stay in, no matter what. I have had to feel the bite of those pronouncements by Brahms in my own flesh; they are the result of his long experience and unbending self-criticism.”

Helped by Brahms to become aware of and correct his own weaknesses of thinking, Jenner wages a war against his own tendency toward superficiality, his frequent infatuation with his own “pet” ideas at the expense of truth, his tendency to be distracted by unimportant particularities instead of concentrating on what is really essential. Does that sound familiar to anyone?

But is the conclusion from this teaching, to avoid having ideas, to not risk putting forward hypotheses, for fear they might turn out to be wrong? Hardly! Nothing could be more boring, more totally useless, than a composer who writes “according to the rules,” and who is unwilling to “live dangerously” by making bold and daring (but true!) hypotheses.

The difficulty Jenner describes — to overcome one’s attachment to strongly-held ideas and habits of thought in a rigorous search for truth — arises in essentially identical fashion in science and every other field of creative endeavor.

But in this regard, unfortunately, people in our organization sometimes fall into a trap: Our ideas are (generally speaking) far superior to those predominating in society nowadays; and thus it appears very easy (or should be) to attack and ridicule the “obviously” silly ideas of ordinary people, without feeling the need to go through {in ourselves} the agony Jenner experienced. Yet, Brahms’ authority as a teacher came from {exactly that}: from Brahms’ own agonizing struggle for rigor and truth vis-a-vis his own mind, and not merely from his superior ideas, knowledge and experience as a composer.

Thus, the main points of reference for ridiculing and refuting wrong or “silly” ideas and habits in others, are the successes one has had in confronting and overcoming one’s own imperfections. That includes insight into the {lawful nature} of human imperfections and the powerful attachments people often form to them. Thereby, one can put one’s own past errors and imperfections to good use, demonstrating once more Leibniz’s profound principle of “the best of all possible worlds.”

Turning now to physical science proper, it is too cheap, and we cheat ourselves if we would do this, to merely ridicule as “obviously wrong” the theories and hypotheses which a given discovery refutes, overthrows, or supersedes. True, in history to date, science has hardly existed except in a constant state of war against oligarchism; and as we have repeatedly documented (as in the case of Fresnel and Ampere), the oligarchical faction (embodying a “{negative} higher hypothesis”) is commonly the active promoter of the inferior hypotheses against which significant discoveries were explicitly or implicitly directed, as means to overcome what had been transformed into the “prevailing public opinion” among scientists and others.

However, to the extent we might tend, too quickly and cheaply, to divide ideas and hypotheses into {self-evidently} good and true on the one hand, and {self-evidently} false and bad on the other, we trivialize the struggle inside the mind of the creative scientist and cheat ourselves out of the possibility of really reliving a discovery. For, the oligarchical element lies not in the inferior idea per se, but in the deliberate clinging to it, in the satanic {assertion} of backwardness and regression as a {principle} opposed to the principle of perfection. An animal is not an evil thing; but a man who behaves like an animal, is.

The immediate point I wish to stress, is this: the strength of belief in certain assumptions and hypotheses, which the creative scientist must confront in the process of discovery, is (in many if not most cases) not {simply} a product of oligarchical tampering. To a greater or lesser extent those assumptions and hypotheses arose as the product of earlier discoveries, and their relative adequacy was supported by vast arrays of corroborating evidence and by the positive economic impact (increase in Man’s per-capita power over Nature) of technological developments based upon them. In the light of such impressive, even overwhelming grounds to believe in the validity of the relevant assumptions and theories, the psychological difficulty facing the discoverer is qualitatively greater than that of merely refuting an “obviously wrong” idea.

Think of a classical tragedy where the final curtain falls on a stage littered with dead bodies. If the audience had developed no strong and justified engagement with, admiration for, or sympathy with the tragic hero or others among the characters whose lives thus ended, what would happen to the tragic effect of the play? So, in the course of scientific discovery, as in the composition of music and drama, some ideas must “die” in order that higher ideas might be expressed. The greater the apparent attractiveness, validity and comprehensiveness of the ideas successfully superseded, the greater the power embodied by the creative discovery.

– An Inferior, but Fruitful Hypothesis –

For these reasons, before proceeding further with the discoveries of Fermat, Bernoulli, Leibniz, Huygens, and Fresnel, we should look a bit closer at the notion which these discoveries, culminating with Fresnel, finally refuted: The notion that light propagates in the form of “rays” projected outward from the luminous or illuminated object; and that to a very high degree of precision these rays take, in a uniform medium, the form of straight lines.

Before rushing to reject this notion out-of-hand (i.e. simply because of the occurrence of straight lines), let us for a moment reflect on the theorems which flow from it. We shall find, in fact, that this descriptive notion of light rays is {extremely useful and fruitful}, as Leonardo himself and many others demonstrated in countless ways. Its eventual rejection by Huygens and Fresnel is by no means so easy and self-evident, as might appear after-the-fact.

Among other things, the principles of so-called “ray optics” were the basis of perspective, and (supplemented by Fermat’s principle) of the analysis and development of lenses. They are still employed on a large scale today in the design of optical instruments, even though the notion of “ray” itself — as something supposedly self-evident and elementary — was decisively refuted by Fresnel and superseded by an entirely different principle.

– Ray Optics and the Camera Obscura –

The idea of resolving light propagation into “rays” is not a self-evident idea simply drawn from sense-perception, but an {hypothesis}. True, Nature sometimes provides rare circumstances, such as sunlight shining through a break in clouds, where we seem to “see” straight-line rays. However, it is a big step to go from that mere spectacle to a general conception, and indeed the gateway to that conception is guarded by many paradoxes. For example: if every point of every illuminated object emits rays of light in all directions, so that the entire space is filled with an infinity of crisscrossing rays, then how can we ever see anything clearly? And won’t the rays constantly be colliding with each other?

Leonardo said every illuminated object “fills space with pictures of itself.” But if we stand in the middle of a room and hold up a piece of blank paper, we certainly don’t see any pictures projected on it! The reason is not hard to imagine: the light arriving at any given location on the paper arrives from all objects and comes from all directions at the same time; it is consequently mixed up and jumbled together, and no image can result.

How, then, are we able to see anything at all? How do our eyes manage to organize and untangle the light? Renaissance experiments with the so-called “camera obscura” provide a preliminary hypothesis. Build a closed chamber without windows (a closed box) whose walls and ceiling are completely opaque to light. Install a screen on one of the inside vertical walls of the chamber, and make a small hole in the middle of the opposite wall. An observer sitting inside will see, projected onto the screen, an image of the world outside the chamber! In fact, the image on the screen corresponds to what the observer would see, if he were to look outside directly through the hole — except that the image on the screen is upside-down!

Do the experiment, or an equivalent one. What is the difference between the two situations: A) holding up a piece of paper in the middle of a room, and finding no image at all; and B) putting up the same piece of paper on the wall of the “camera obscura” (or equivalently, imposing an opaque barrier with a small hole between illuminated objects and a screen)?

Evidently, the hole in the wall fulfills the function of a {lens}, organizing the propagation of light in such a way that the image appears on the screen. But note that if we move the screen directly up to the hole, the images disappear, and we get nothing but an undifferentiated spot of light. Not the hole itself, but the total arrangement of the hole and the screen held at a significant distance away, provides the relevant organizing function.

Now, account for the function of the “camera obscura” as a {theorem} based on the hypothesis that light propagates in (approximately) straight-line rays. Account also for the circumstance that the images on the screen are slightly blurred, depending on the size of the hole.
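The theorem can be sketched as follows: of all the rays leaving an object point, only the one passing through the hole reaches the screen, so each object point maps to a single screen point by similar triangles, and the image comes out inverted. With a hole of finite size, each point projects a small spot rather than a point, which accounts for the blurring. A minimal sketch, with illustrative distances and names of my own:

```python
# Pinhole ("camera obscura") projection, as a theorem of the
# straight-line-ray hypothesis. The hole sits at the origin; the object
# stands d_obj in front of it, the screen hangs d_screen behind it.
d_obj, d_screen = 10.0, 1.0   # illustrative distances

def image_height(y_obj):
    """A ray from height y_obj through the hole crosses to the other
    side and hits the screen at -y_obj * d_screen / d_obj: similar
    triangles give an inverted image, scaled by d_screen/d_obj."""
    return -y_obj * d_screen / d_obj

def blur_diameter(hole_diameter):
    """Each object point casts the hole's silhouette onto the screen,
    so every image point smears into a spot proportional to the hole."""
    return hole_diameter * (d_obj + d_screen) / d_obj

# A vertical arrow reaching from height 0 up to height 2 ...
for y in (0.0, 1.0, 2.0):
    print(f"object point at {y:+.1f} -> image at {image_height(y):+.2f}")
# ... projects as an arrow pointing downward, ten times smaller.
print(f"blur spot for a hole 0.05 wide: {blur_diameter(0.05):.3f}")
```

Shrinking the hole sharpens the image (smaller blur spots) but dims it, which is exactly the trade-off observed in the experiment.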

Related to this, derive as a theorem another, apparently anomalous phenomenon known to the Greeks and discussed at length by Leonardo: The shadow of any object placed in the rays of the Sun, and projected onto a screen at a suitable distance, is not simple and sharp, but consists of a dark interior region (the “core shadow”) outside of which the light gradually increases. Determine the geometrical law by which the relative sizes of the core shadow and the “blurred” partial shadow change, as the distance between object and screen is varied. The analysis is brilliantly confirmed by such phenomena as eclipses of the Sun.
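The geometrical law asked for above can be sketched under the same straight-line hypothesis: draw the tangent lines from the edges of an extended circular source past the edges of the opaque object. The inner tangents bound the core shadow (umbra), which shrinks with distance; the outer tangents bound the partial shadow (penumbra), which widens. A sketch with illustrative dimensions of my own choosing:

```python
# Umbra and penumbra of an opaque disk lit by an extended circular
# source, derived from straight-line rays tangent to both edges.
R = 5.0    # radius of the luminous source (illustrative)
a = 1.0    # radius of the opaque disk
D = 20.0   # distance from source to disk

def umbra_radius(d):
    """Radius of the totally dark core on a screen d beyond the disk.
    The inner tangent (source edge +R past disk edge +a) gives
    a - d*(R - a)/D, which shrinks to zero at d = a*D/(R - a)."""
    return max(a - d * (R - a) / D, 0.0)

def penumbra_radius(d):
    """Outer radius of the partially shaded region on the same screen,
    bounded by the outer tangent: a + d*(R + a)/D, growing with d."""
    return a + d * (R + a) / D

for d in (1.0, 3.0, 5.0):
    print(f"d={d:3.1f}: umbra {umbra_radius(d):.2f}, "
          f"penumbra {penumbra_radius(d):.2f}")
```

With these numbers the umbra vanishes at d = 5.0: beyond that distance the disk nowhere blocks the source completely, just as the Moon’s core shadow can fail to reach the Earth, producing an annular rather than a total eclipse of the Sun.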

Examine, thus, these and other fruitful consequences of “ray optics” without the oligarchical admixture of Newton, et al.

Now begin to appreciate the shocking, jarring impact of Fresnel’s hypothesis, that shadows are produced “holographically”, i.e., by {interference} of active wave processes inside and around the shadow area itself, and not merely through the blocking-out of linear rays of light by the object.

Transfinite Principle of Light, Part VII: From Appearance to Knowledge

by Jonathan Tennenbaum

In the latter section of last week’s discussion, I gave arguments in support of the notion, that light propagates in straight-line rays. Indeed, by imagining to ourselves that light is a “something” which propagates outward from each point of a luminous object, in all directions along straight-line trajectories, we can account very well for the functioning of the “camera obscura,” for the main features of the shadows cast by objects, for the changes in apparent size of objects according to their distance from us (and other laws of perspective), and many other things. Furthermore, this idea seems to conform well to our sense experience. Cover a sunlit window by a black shade, and put some holes in the shade. In the darkened chamber we can “see” the straight-line rays of light coming through the holes, just as we can directly “see” the rays coming out of a movie projector, especially in a smoke- or dust-filled room. Let yourself become so accustomed to this way of conceiving the propagation of light, that it seems perfectly self-evident.

Now take this notion as a model for {any} sort of {apparently successful} opinion or belief. What attitude should we take to it? A critical attitude, of course. But shall we simply reject the notion of straight-line rays of light out of hand, because it doesn’t fit with some ideological doctrine or metaphysical prejudice of ours? Shall we deny that Leonardo da Vinci, Brunelleschi, Kepler, and other great men drew rays of light as straight lines, or that thousands of practical activities, such as in surveying, in technical drawing, etc. seem based on this notion? Shall we simply deny or ignore the evidence just cited?

Or should we not rather admit that there {does} exist a very wide-spread phenomenon, an {effect}, which corresponds at least approximately to what we have described as “straight-line propagation of light?” If so, then so what? An effect or phenomenon is one thing; the axiomatic assumptions, in terms of which we interpret and judge the {significance} of a given array of phenomena, is something completely different.

We fall into a trap, when we jump from a mere description of appearances — or a limited, simple hypothesis — to imputing or superimposing upon the phenomena certain fundamental, axiomatic qualities of assumption, which are by no means called for by the phenomena themselves. Watch out when anybody points with his finger and says: “See this? It proves X,Y,Z.” The expression “evidence of the senses” is defective, because in reality a process of {judgment} based on certain assumptions is always implicit, albeit preconsciously, in any report of such “evidence”.

Indeed, it is common experience (we confront it daily!) that different people, put in front of one and the same array of phenomena, draw radically different, even completely opposite, conclusions. Sometimes we can even witness two or more individuals in such a debate, pointing to one and the same phenomenon as “definitive proof” for their mutually contradictory opinions!

These observations suggest a very big question. Somebody comes along and challenges us: “If you say your interpretation of evidence is determined by your axiomatic assumptions, then how could you ever {know} whether those basic assumptions are true? Aren’t you caught in a vicious circle? How can you reject self-evident assumptions on the one hand, and at the same time claim there is no purely `objective’ evidence which does not involve assumptions of some kind? You can’t have your cake and eat it, too. If you want to be consistent, you have to finally make up your mind: either 1) to reject all fundamental axioms and assumptions, and accept only empirical experience (sense perceptions) as real, `objective’ knowledge of fact; or 2) admit that your fundamental axioms and assumptions can never be scientifically tested or proved in terms of evidence — that they must therefore either be self-evident, or based on some sort of faith or belief, as in revealed religion. Or would you agree with my opinion, that fundamental assumptions are ultimately a matter of arbitrary choice, so that conflicts of opinion can ultimately only be resolved by people killing each other?”

Leaving the reader to ponder his or her answer to this paradox, let’s go back to our concrete case, the supposed straight-line propagation of light rays.

One person (Newton, for example) draws a light ray, and thinks of it as a self-evident, axiomatically linear entity, an entity obeying the formal axioms of “Euclidean geometry.” A second person (Leonardo da Vinci, for example) sees the same ray as the trace of an intrinsically {nonlinear} process. The objective appearance of the phenomenon is the same. How can we decide between the two interpretations, the two ways of thinking? Here we get to the issue that Fresnel and Ampere were addressing, as Fermat and Huygens before them. A unique experiment signifies more than simply evoking a new “objective phenomenon” from the Universe. The problem is to evoke and communicate a true, validated change in how human beings {think} about the Universe.

Let us go back to the time of Fermat. We do not yet have the demonstrations of interference and diffraction, which Fresnel used to finally demolish Newton’s linear theory of light. But we do have an anomaly called {refraction} that was the focus of Fermat’s elaboration of the {principle of least time}.

Note, for example, that the size and appearance of the Sun and Moon, and the apparent angular motions of the stars, are changed when they get near the horizon — a phenomenon which is commonly explained by the notion, that the rays of light coming from these objects, are {bent} as they pass obliquely through atmospheric layers of changing density. Compare this with the bending of light rays in passing from air to water, or vice-versa, which we can demonstrate in any classroom. With the aid of a simple apparatus we can make the sharp change of angle of the rays at the surface of the water clearly visible. With a bit more effort, we can produce media of varying density and show clearly how the rays follow {curved} trajectories. Let’s try to take on a Newtonian with this:

“So you see, light does {not} travel in straight lines!”

“Yes it does, if you do not disturb it. But by interposing matter, an inhomogeneous medium, you deflected the rays from their natural, straight-line paths.”

“How do you know that straight-line paths are `natural’?”

“If a light ray were allowed to propagate unhindered, in a pure vacuum or perfectly homogeneous medium, then it would propagate precisely along a straight line. It is just like the motion of material bodies in space according to Newton’s first law: `a material body remains in its state of rest or uniform motion along a straight line, unless compelled by forces acting upon it to change its state.’ No one could deny that.”

“Does a `pure vacuum’ exist anywhere in nature? Does a `perfectly homogeneous medium’ exist in nature?”

“Well no, of course. There is always a bit of dirt around, or inhomogeneities that disturb the perfectly straight pathways.”

“So the presence of what you call `dirt’ is natural, right?”


“So then it is natural that light never travels in straight-line paths.”

“Wait a minute. You are mixing everything up. I am talking about the natural propagation of light, quite apart from matter.”

“What do you mean, `quite apart from matter’? Do you assume that the existence of light is something that can be separated from the existence of matter?”

“Yes, certainly. The natural state of light is that of light propagating in a Universe that is completely empty of matter.”

“And a completely empty Universe is a natural thing? Do you claim such a thing could ever exist?”

“I could imagine one. Sometimes I get that feeling inside my head.”

“Maybe that is because you are not thinking in the real world.”

“Don’t blame me for that. I am a professional physicist.”

“Well then, fill the vacuum in your mind with the following thought: Light and matter do not exist as separate entities, nor does matter act to bend rays of light from what you imagine in your fantasy-universe to be perfectly straight-line rays. Rather, the existence of what we call matter, the existence of light and the fact that light never propagates in straight lines — except in mere appearance — are both interrelated manifestations of the fundamental curvature of physical space-time, which Fermat began to address with his principle of least time.”

Transfinite Principle of Light, Part VIII: When Long Is Short

by Bruce Director

It is a continuous source of happiness, for men and women who have cultivated a capacity for scientific thinking, that Nature acts along the shortest pathways, and those are always curved. Not so, however, for the petty and small-minded. For them, such principles are a constant vexation. There is no better example of this, than Pierre de Fermat’s fight with Descartes.

In 1637, Fermat received a copy of Descartes’ Dioptrics. In that work, Descartes considered light to be an impulse of particles travelling instantaneously. From this conception, Descartes presented a mathematical construct of reflection and refraction, by treating these particles, as if they were hard bodies moving in empty space. This was an obvious absurdity, since refraction is the phenomenon that occurs when light travels through two different media, not empty space. Into Galileo’s mathematics of moving bodies, Descartes fitted the observed phenomena of the refraction and reflection of light.

Fermat found the work deeply flawed, and said so to Descartes’ epigone Marin Mersenne. First, Fermat said, Descartes erred by relying solely on mathematical reasoning, which, according to Fermat, could not lead to the discovery of physical truths. Furthermore, Fermat attacked Descartes’ mathematics: “of all the infinite ways of dividing the determination to motion, the author (Descartes) has taken only that one which serves him for his conclusion; he has thereby accommodated his means to his end, and we know as little about the subject as we did before.”

Such insolence from an unknown upstart in Toulouse offended Descartes no end. He wrote to Mersenne, “… I would be happy to know what he will say, both about the letter attached to this one, where I respond to his paper on maxima and minima, and about the one preceding, where I replied to his demonstration against my Dioptrics. For I have written the one and the other for him to see, if you please; I did not even want to name him, so that he will feel less shame at the errors that I have found there and because my intention is not to insult anyone but merely to defend myself. And, because I feel that he will not have failed to vaunt himself to my prejudice in many of his writings, I think it is appropriate that many people also see my defense. That is why I ask you not to send them to him without retaining copies of them. And if, even after this he speaks of wanting to send you still more papers, I beg of you to ask him to think them out more carefully than those preceding, otherwise ask you not to accept the commission of forwarding them to me. For, between you and me, if when he wants to do me the honor of proposing objections, he does not want to take more trouble than he did the first time, I should be ashamed if it were necessary for me to take the trouble to reply to such a small thing, though I could not honestly avoid it if he knew that you had sent them to me.”

There the matter rested for 20 years, until, in 1658, one of Descartes’ zealots, Claude Clerselier, asked Fermat for copies of his earlier correspondence to include in a volume of Descartes’ letters. In the intervening period, Fermat had done his own original work on light, taking off from the work written by Marin Cureau de la Chambre. In August 1657, Fermat wrote Cureau, “you and I are largely of the same mind, and I venture to assure you in advance that if you will permit me to link a little of my mathematics to your physics, we will achieve by our common effort a work that will immediately put Mr. Descartes and all his friends on the defensive.”

Instead of Descartes’ resort to the mythical hard bodies travelling in empty space, Fermat conceived of light as travelling at a finite velocity that changed depending on the density of the medium through which it travelled. (This was nearly two decades before Ole Roemer conclusively determined the finite velocity of light, in his observations of the moons of Jupiter.) But, more importantly, Fermat proceeded from the standpoint of a universal physical principle, that nature always acts along the shortest paths. The path, in the case of refraction, was not the simple geometrical length of the path, but the path that covered the distance in the least time. “We must still find the point which accomplishes the process in less time than any other …” Fermat wrote to Cureau in January 1662.
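
Fermat’s least-time reasoning can be checked by a simple modern calculation. What follows is a minimal sketch, not Fermat’s own construction: the geometry (source one unit above the surface, target one unit below, one unit of horizontal offset) and the speed ratio 1 : 1/1.33 are illustrative assumptions. Scanning all candidate crossing points on the surface, the point of least time turns out to be exactly the one obeying the sine law of refraction.

```python
import math

# Illustrative geometry: source O at height h1 above the water surface,
# target X at depth h2 below it, horizontal separation d.
h1, h2, d = 1.0, 1.0, 1.0
v1, v2 = 1.0, 1.0 / 1.33  # speeds in air and water (index 1.33 assumed)

def travel_time(x):
    """Time along the broken path O -> (x, on the surface) -> X."""
    return math.hypot(x, h1) / v1 + math.hypot(d - x, h2) / v2

# Crude scan over candidate crossing points for the least-time one.
best_x = min((i * d / 100000 for i in range(100001)), key=travel_time)

# At the least-time point the ratio sin(angle)/speed agrees on both
# sides of the surface, which is exactly the observed law of refraction.
sin1 = best_x / math.hypot(best_x, h1)
sin2 = (d - best_x) / math.hypot(d - best_x, h2)
print(sin1 / v1, sin2 / v2)
```

Nothing in the scan “tells” the crossing point about refraction; the sine law emerges solely from demanding the least time over the process as a whole.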

Upon receiving Fermat’s letter, Clerselier replied in defense of his master. In a letter dated May 1662 (translated here by Irene Beaudry), he wrote:

“Do not think that I am answering you today because you think you have obtained the objective of troubling the peace of the Cartesians…Permit me just to tell you here the reasons that a zealous Cartesian could allege to preserve the honor and the right of his master, but not to give up his own advantage or to give you the initiative.

“1. The principle that you consider as the foundation of your demonstration, that is, that nature always acts along the shortest and simplest pathways, is nothing but a moral principle and not at all physical; that is, it is not and could not be the cause of any effect of nature.

“It is not, because it is not this principle that makes nature act, but rather, the secret force and the virtue that is in every thing, that is never determined by such or such an effect of this principle, but by the force that is in all causes that come together into one single action, and by the disposition that is actually found in all bodies upon which this force acts.

“And it could not be otherwise, or else, we would presume nature to have knowledge: and here, by nature, we mean only this order and this law established in the world as it is, which acts without foreknowledge, without choice and by a necessary determination…”

Clerselier objects not to Fermat’s discovery that light travels the path of shortest time, but to the idea that such a universal principle exists at all. Without a universal principle, there is no shortest path, only the arbitrariness of empty space.

This is a matter that confronts all of us directly each day. If civilization’s survival depends on increasing the quality of human cognition, then the shortest path to that survival is the seemingly long and curved route of curing the population of their insanity through mass outreach. Let the petty Clerseliers take the short-cuts on that long road of destruction.

Transfinite Principle of Light, Epilogue


by Jonathan Tennenbaum

What was it about Fermat’s “principle of least time” and Leibniz’s generalized “principle of least action” that so upset the Cartesians and Newtonians, and continues to upset people up to this very day? In reaction to the beating Fermat and Leibniz administered to Descartes, in the 18th century a heated and very confused debate was whipped up concerning so-called “teleological principles in Nature” — a debate which reached its pinnacle of absurdity when Maupertuis claimed priority over the long-dead Leibniz in concocting his own, incompetent version of the least action principle! Behind the diversionary antics of the buffoon Maupertuis, Euler and Lagrange launched their more sophisticated attack on Leibniz. Euler and Lagrange worked to eliminate the self-conscious {principle of discovery} which Leibniz placed at the center of his conception of the physical universe, and thereby to drive a wedge between “Naturwissenschaft” and “Geisteswissenschaft.” We can find the trace of these events in our minds, in our own struggles to grasp the central conception of Leibniz’ Monadology, or even the seemingly simple “principle of least time” put forward by Fermat in the 1660s.

Build a simple apparatus to demonstrate how a beam of light changes its direction when passing from air into water. Note how the rate of change of direction itself changes as you change the angle at which the light beam strikes the surface of the water. When the beam enters the water perpendicularly to the surface, no change is apparent: the beam continues onward in the same, perpendicular direction. But as we gradually tilt the beam away from the perpendicular direction, we find that the beam is “bent” more and more at the water surface; the direction of the beam inside the water is steeper, i.e., its angle to the vertical is smaller than that of the original beam in the air. (Readers must perform the experiment!). How can we account for the shape of the pathway, and in particular for the lawful relationship of the angles which describe the deflection of the beam at the surface of the water?
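
The lawful relationship of the angles that this experiment makes visible is, in modern terms, the sine law of refraction. The following small sketch tabulates it; the refractive index 1.33 for water and the helper name refraction_angle are assumptions made for illustration, not part of the experiment itself.

```python
import math

def refraction_angle(incidence_deg, n1=1.0, n2=1.33):
    """Angle from the vertical of the refracted beam, from the sine law
    n1*sin(theta1) = n2*sin(theta2); n1, n2 are the refractive indices
    (air ~1.0 and water ~1.33 are assumed values)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection (only possible when n1 > n2)
    return math.degrees(math.asin(s))

# Tilt the incoming beam further from the vertical and watch the angle
# inside the water stay smaller, i.e. the beam is "bent" toward vertical.
for theta in (0, 15, 30, 45, 60):
    print(theta, round(refraction_angle(theta), 2))
```

At perpendicular incidence the deflection vanishes, and it grows as the beam is tilted, just as the classroom apparatus shows.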

Now, the Newtonian-Cartesian way of thinking about this problem will appear natural and even self-evident to most people, compared with Fermat’s and Leibniz’s, because the former corresponds to axioms which have become deeply embedded in our culture. Let’s look at it for a moment. What, indeed, could be more self-evident, than the idea that the pathway of the light beam is created by the light itself in propagating out from the source?

Just so, in Newton’s mechanics, the orbit of a planet exists only as an imaginary trace of its successive positions; those positions being created by the planet’s motion. To Newton, the orbit itself doesn’t exist as an efficient physical entity; what {exists}, at any given time, is only the planet, its momentary position, its state of motion and the momentary gravitational force acting upon it from the Sun. So according to Newton, the fact that a planet traces an elliptical pathway in the course of its motion is just a mathematical accident, a derived theorem of the Newtonian theorem-lattice. So, today, the student is taught to say: “When you solve the equations of motion of the planet under the force of gravity, it just happens to come out to an ellipse.”
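
The student’s procedure can be replayed on a computer. Here is a minimal sketch of the blind, step-by-step integration of the equations of motion under an inverse-square force; the units (GM = 1) and initial state are arbitrary illustrative choices. That the trajectory stays pinned to one fixed ellipse shows up in the near-constancy of the orbital energy, which determines the ellipse’s semi-major axis.

```python
import math

GM = 1.0                 # illustrative units, not physical data
x, y = 1.0, 0.0          # initial position
vx, vy = 0.0, 0.8        # below circular speed, so the orbit is elliptic
dt, steps = 1e-4, 200000

for _ in range(steps):
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3   # inverse-square acceleration
    vx += ax * dt; vy += ay * dt          # "kick": update the velocity,
    x += vx * dt; y += vy * dt            # "drift": then the position

# The orbital energy fixes the semi-major axis a = -GM/(2E); it started
# at 0.5*0.8**2 - 1 = -0.68 and remains close to that after many orbits.
energy = 0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y)
print(energy)
```

Note that at no step does the program “know” anything about an ellipse; in this way of thinking, the closed curve is precisely the mathematical accident the student describes.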

Imagine the precocious child who, caught with his hand in the cookie jar, explains: “I couldn’t help it. My body was just obeying the laws of motion.”

Similarly, according to this way of thinking, the pathway of the light beam is just the trace of a “something” or large number of tiny “somethings”, which travel through space from one moment to the next and one point to another. They would “naturally” travel in straight lines, except insofar as some “external forces” deflect them from a straight-line path. Analyzing the bending of a beam of light going from air to water in this manner, we divide the process into three phases: A) the light propagates undisturbed in a straight line through the air, until B) the beam suddenly “collides” with the water surface, where the light particles are acted upon by some unknown force causing them to change their direction of motion, and from that point on C) they continue travelling in the water in a straight line in the new direction. This is exactly the thinking of Descartes, Newton, Laplace, Biot et al.

Not so Fermat! To follow in his footsteps, let us start from the well-grounded assumption, that Fermat followed Kepler in these matters. Kepler, as we know, regarded the system of planetary orbits and the orbits themselves as real and their determination as {primary} relative to the motions of the planets. An orbit is determined by a characteristic “curvature in the infinitesimally small”, such that any however-small interval of planetary motion already expresses the efficient principle which predetermines the future course of the planet in that orbit.

Could we say, then, that the light follows a predetermined {orbit}? Or should we be more cautious and merely propose, that the pathway of the light beam is merely a visible expression or characteristic of an {underlying physical process}, whose course is {predetermined} in the same sense that a planet’s motion is predetermined by its Keplerian orbit? Either way, we cannot avoid the implication, that all {three} phases A, B, C defined above, and the sequence of all three taken together, embody {one and the same} characteristic infinitesimal curvature!

At this point the formalist-minded will freak out:

“A and C are straight lines, not curved at all; whereas B is where the beam is `bent’! So how can you talk about the same curvature?”

Well, maybe you ought to conclude that the straight-line propagation in A and C is only an {apparently} linear envelope of a nonlinear process.

“Don’t make things so complicated. After all, so long as the light is travelling in phase A through the air, before it comes to the surface of the water, there is no force to divert it; the light doesn’t yet “know” it is going to hit the water, so it will travel in a perfect straight line. Or do you suggest, that the light can look ahead to see the approaching surface of the water?”

Our interlocutor here is trapped in the Newtonian-Cartesian assumption, that time is a self-evident, linearly ordered succession of “moments,” where only the preceding moment can influence the “next” one; just as if space were a triply-linear ordering of “places.” This insistence on a trivial, linear ordering of a supposedly empty space-time, rejecting the idea of “nonlinearity in the small”, is key to the freak-out which Fermat caused by his principle of least time.

To shed more light on this question, let us modify our experiment slightly: Install a small light source shining in all directions (e.g., a light bulb) at some position O in the air above the surface of the water. Now take an arbitrary position X in the water, which is illuminated by the light. {What is the pathway by which that result was accomplished?}

We might investigate as follows: Find the positions Y, both in air and water, at which an opaque object, placed at Y, causes the illumination of X to be interrupted. (Do the experiment!) We find, in fact, that those positions lie along a clearly-defined pathway going from O to X. That pathway in fact runs in an apparent straight line from O to a certain location, L, at the surface of the water; and there, abruptly changing its direction, it continues on in an apparently straight trajectory to X. We can also verify, that if we now replace the light bulb at O by a device which produces a directed beam, and point the beam in the direction toward L, then it will continue along the entire pathway we just determined, and illuminate X. If we point the beam in a different direction, then (leaving aside extraneous reflections and so forth), it does {not} arrive at X. Our conclusion: this is the {unique} trajectory, by which light, emitted at O, can and does arrive at X.

Now what do we do, striving to follow Kepler in this matter? Instead of trying to concoct some Newtonian-like “law of motion” by which the light supposedly proceeds blindly, step-by-step from one moment to the next, consider instead the {space-time process as a whole}. How is it that a unique trajectory (or “tube of trajectories” appearing to our senses as a single one) is determined, among all other conceivable paths running from O to X, as the one which is actually {realized} by light? What is the sufficient and necessary reason? Evidently, not some property of light in and of itself. Ah! Don’t forget the rest of the Universe! Don’t forget that our experiment is part of the ongoing {history of the Universe}, and what we call “light” is just a localized manifestation of the {entire Universe} acting upon itself in that specific historical interval. If so, then shall we not regard the observed pathway of light as a {projection} of the Universe’s ongoing historical orbit, its “world line”?

Now, perhaps, we can begin to appreciate the significance of the Fermat-Leibniz principle and the freak-out it evoked among the followers of Aristotle.