Wednesday, March 31, 2010

The "Van Allen Hypothesis": Abandoned Science Finds New Life in Pseudoscience?

I have occasionally seen references in e-mails from Electric Universe (EU) supporters to something called the “Van Allen Hypothesis”.  I was able to find a reference to the specific paper in this old thread on the ThunderBolts forum, titled “Question about the Current Powering the Sun”, where they are discussing my document “The Electric Sky: Short-Circuited” (DwCiA: Electric Cosmos).  The reference for the Van Allen hypothesis is
In the paper, Dr. Alexeff describes a presentation by James Van Allen, apparently from the early 1990s, where Van Allen described a linear relationship between the rotational angular momentum and the magnetic moment of celestial bodies, i.e. that the magnetic moment of a spinning object is directly proportional to its angular momentum.  That this relationship appeared to hold across many orders of magnitude for a number of celestial bodies was something of a mystery.  It should be noted, however, that the error bars on this 'agreement' also stretched across two orders of magnitude (a factor of 100) in both the magnetic moment and angular momentum measurements.  This means that if U is the value of the angular momentum, then both U and 100*U would be considered in 'agreement' with the model.  Quite a large range.

Faint bells went ringing at the edge of my consciousness.  I had heard this before.  But where?  I recently discovered the missing piece and decided to assemble a more complete chronology.

First, there is only one publication I have found in the years prior to the 1990s presentation described by Alexeff where Van Allen mentions the idea of rotation connected to magnetic moment.
The reference is somewhat misleading, as Van Allen does not appear to be an original author of the paper; his contribution is a short, two-paragraph comment appended after the references:
"Although Brown properly cites prior evidence against the validity of the Blackett hypothesis, I continue to feel that our determination of an upper limit to the magnetic moment of Mars has a certain cogency in an astronomical context.  The test on Mars is one which I have aspired to make since I first heard Professor Blackett lecture on this subject some 20 years ago.
In the face of all the negative evidence concerning the validity of the Blackett hypothesis, Brown's suggestion that it may apply to Earth and Jupiter seems untenable."
The bells at the edge of my consciousness are ringing loudly now.  The Blackett Hypothesis!

So while Alexeff attributes the idea to Van Allen, Van Allen himself attributes it to P.M.S. Blackett.  In addition, Van Allen dismisses the Blackett Hypothesis as not having sufficient experimental and observational support.  Of course, this does not mean that the 'agreement' described above is not interesting, just that it probably does not indicate new fundamental physics as some would like to believe.

But what about Blackett's formula?  Blackett published the idea in 1947.
Blackett's equation can be written

P = beta * sqrt(G) * U / (2*c)

where P is the magnetic moment, U is the angular momentum, G is Newton's gravitational constant, c is the speed of light, and beta is a dimensionless constant with a value that seemed to range between perhaps 0.3 and 1.2.  The relation generated strong interest because it seemed to connect gravitation, by way of Newton's G, with electromagnetism - a possible key to a Unified Field Theory!
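As a quick illustration, we can evaluate the Blackett relation for the Earth.  Here is a minimal Python sketch, using rough present-day values for Earth's angular momentum and magnetic moment (my approximate numbers, for illustration only):

```python
import math

# Blackett's relation, P = beta * sqrt(G) * U / (2*c), evaluated in
# cgs-Gaussian units (the system in which Blackett wrote it).
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10        # speed of light, cm/s

# Rough present-day values for Earth:
U_earth = 5.9e40    # rotational angular momentum, g cm^2 / s
P_earth = 8.0e25    # magnetic dipole moment, G cm^3 (emu)

P_predicted = math.sqrt(G) * U_earth / (2 * c)   # Blackett with beta = 1
print(f"Predicted moment (beta = 1): {P_predicted:.2e} G cm^3")
print(f"Observed moment:             {P_earth:.2e} G cm^3")
print(f"Implied beta:                {P_earth / P_predicted:.2f}")
```

For the Earth this gives a beta of roughly 0.3, within the quoted range - but keep in mind that the celestial 'agreement' above tolerated errors of a factor of 100.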

Much experimental & theoretical work followed, but the final result was that the relationship did not hold up under detailed examination, and interest waned after a few years.  The most troublesome aspect for the idea was that the Earth and solar magnetic fields were known to reverse their polarity over time, a clear problem unless the rotations reversed as well (something we do not observe with the solar magnetic cycle, which reverses approximately every eleven years).  In addition, since the magnetic moment and angular momentum are vector quantities (they have a direction as well as a magnitude), there is the question of how this relationship can hold when most of the celestial bodies where we've measured magnetic fields do not have their rotation axis aligned with their magnetic axis.  We know this is true for the Earth (where the two axes differ by almost ten degrees) and for other planets in our Solar System.

Why was the Blackett formula familiar to me?  The Blackett formula was a key idea behind the operation of the spindizzy (Wikipedia), a fictional interstellar drive used in James Blish's “Cities in Flight”(Wikipedia). I had read this book many years ago.

But the story isn't over.  Was Blackett the first to think of this?  The idea that rotational motion might be fundamentally linked to magnetic fields was actually suggested in papers going back to around 1900!
Many of these papers explored ideas such as charge separation created by gravitation in a massive body.  The body rotation would then generate a current and subsequently a magnetic field.  The idea did not meet with any success, theoretically or experimentally.

So what about the amazing 'agreement' of angular momentum and magnetic moment? 

The Universe is apparently full of interesting numerical 'coincidences', one of the most famous of the 20th Century being Dirac's Large Numbers Hypothesis (Wikipedia).  This idea inspired a number of avenues of inquiry between cosmology and fundamental physics, but none had particular experimental success.

But most likely, the relationship is a consequence of similarity in the underlying mechanisms in the generation of celestial magnetic fields.  Consider Kepler's 3rd Law (Wikipedia),
T^2/R^3 = constant
where T is the orbital period in years and R is the radius of the orbit in Astronomical Units, for all the planets of our Solar System.  Kepler's 3rd Law holds to far higher precision than the Blackett relationship for celestial bodies.  Kepler's 3rd Law would eventually be understood as a consequence of Newton's laws of gravitation and motion (indeed, Newton derived the law of gravity based in part on the Kepler relationship).  Similarly, I suspect the Blackett relationship ties to an underlying aspect of the dynamo mechanism of magnetic field generation, but due to the large range of the 'agreement', it is clearly only an approximate characteristic.
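A quick Python check shows just how tight the Kepler relation is (the orbital periods and semi-major axes below are rounded textbook values):

```python
# Kepler's 3rd Law: T^2 / R^3 should be constant across the planets,
# with T in years and R in Astronomical Units.
planets = {
    "Mercury": (0.2408, 0.3871),
    "Venus":   (0.6152, 0.7233),
    "Earth":   (1.0000, 1.0000),
    "Mars":    (1.8808, 1.5237),
    "Jupiter": (11.862, 5.2034),
    "Saturn":  (29.457, 9.5371),
}

for name, (T, R) in planets.items():
    print(f"{name:8s} T^2/R^3 = {T**2 / R**3:.5f}")
```

The ratio comes out constant to better than a tenth of a percent, compared to the factor-of-100 scatter tolerated by the Blackett 'agreement'.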

Because there are physical constants that relate to how physical properties are coupled together, a surprising number of properties can be approximated to within orders-of-magnitude without regard to the details of the interaction.  For example, the fundamental timescale of a gravitating system can be approximated by sqrt(R^3/(G*M)), where M is the mass of the system and R is its radius.  This kind of dimensional analysis trick is why creationist Russ Humphreys' magnetic field model (1984) generated reasonable values for some observables regardless of the huge errors in the details.  See Tim Thompson's analysis of this, available on Talk.Origins.
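Here is a simple sketch of that dimensional-analysis trick in action (the masses and radii are rounded values):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def dynamical_time(M, R):
    """Characteristic gravitational timescale, t ~ sqrt(R^3 / (G*M))."""
    return math.sqrt(R**3 / (G * M))

# The Sun: a dynamical (free-fall) timescale of roughly half an hour.
t_sun = dynamical_time(1.99e30, 6.96e8)
print(f"Sun: {t_sun / 60:.0f} minutes")

# Earth's orbit: multiplying by 2*pi recovers the orbital period, one year.
t_orbit = dynamical_time(1.99e30, 1.496e11)
print(f"Earth orbit: {2 * math.pi * t_orbit / 86400 / 365.25:.2f} years")
```

The same combination of G, M, and R yields a sensible timescale for both a star's interior and a planetary orbit, with no detailed physics at all - which is why order-of-magnitude 'agreement' alone is weak evidence for a specific mechanism.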

One of the problems in science is that successful researchers often explore many ideas and 'hunches' that turn out to be dead ends.  Very often, the researcher does not publish the results of these dead-end inquiries.  This makes it easy for someone to 'discover' or re-discover a previously dismissed idea, many years later, thinking it is new.

[Disclaimer 1: This article is not meant to suggest that Dr. Alexeff is a supporter of Electric Universe claims, merely that this particular paper of his has been used by some EU supporters as evidence of more bizarre EU claims.  In spite of some rather odd assumptions (cylindrical planets, dependency of field on the external plasma), Dr. Alexeff presents the mathematical details of his analysis for examination by the scientific community.  This is the professional behavior expected in the scientific community, but it is generally NOT exhibited by EU supporters.]

[Disclaimer 2: It has been brought to my attention that ideas which appear in the Thunderbolts forums are not necessarily ideas supported by the Electric Universe 'experts'.  Since EU provides no objective standards of testing which anyone can apply, it appears that only those anointed as the official voices of EU may decide what is and is not part of the theory.  I do not know if Dr. Alexeff's work is regarded as an official part of EU.]

Thursday, March 25, 2010

Time to Teach the Controversy...

In going through my backlog of magazines on the nightstand, I found this gem in a recent Skeptic (Volume 16, #2).

It's time to teach the controversy: (external link) Since creationism isn't going away, let's use it in the classroom to teach the difference between Science and Pseudoscience by Christopher Baum

Mr. Baum basically advocates what I have been pushing for over ten years now on my main site (Dealing with Creationism in Astronomy) and here - use the pseudosciences in the classroom as demonstrations of how science determines what hypotheses fail and why.

Saturday, March 20, 2010

More Astrophysics & Quantum Mechanics Connections

While researching my initial response, “Scott Rebuttal III. The Importance of Quantum Mechanics”, I discovered yet another fascinating connection which illustrates how researchers in quantum mechanics developed techniques important in astrophysics and semiconductor physics.

As I mentioned in  “Scott Rebuttal III”, Alan H. Wilson was a physicist who initially explored applying quantum mechanics in nuclear astrophysics before writing the two foundational papers of semiconductor electronics using the same quantum mechanical principles.  Dr. Wilson was a student of Ralph H. Fowler.  

Those familiar with electronics might recognize Ralph Fowler as one of the authors of the Fowler-Nordheim equation.
Their 1928 paper was a landmark work which solved the problem of understanding cold-cathode emission, sometimes called field emission (Wikipedia).  Cold cathodes can emit electrons at room temperature through the application of a strong electric field.  In contrast, hot cathodes emit electrons by heating the metal with a filament.  The cold-cathode mechanism defied explanation by classical electromagnetism for many years.
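For the curious, here is a minimal sketch of the elementary Fowler-Nordheim expression.  The constants are the standard first-order values and the work function is a rough tungsten-like number (my choices for illustration; the full treatment adds image-charge correction factors):

```python
import math

def fowler_nordheim_J(F, phi):
    """Elementary Fowler-Nordheim current density (first-order form).

    F   : applied electric field, V/cm
    phi : work function, eV
    Returns J in A/cm^2.
    """
    a = 1.54e-6   # A eV V^-2
    b = 6.83e7    # V cm^-1 eV^-1.5
    return (a / phi) * F**2 * math.exp(-b * phi**1.5 / F)

# Tungsten-like work function, fields of a few 10^7 V/cm:
for F in (2e7, 3e7, 5e7):
    print(f"F = {F:.1e} V/cm -> J = {fowler_nordheim_J(F, 4.5):.2e} A/cm^2")
```

The current density climbs by many orders of magnitude over a modest range of field strength - the signature of quantum tunneling, and exactly the behavior classical electromagnetism could not explain.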

Prior to this, Dr. Fowler had worked on issues surrounding the effect of the Pauli Principle (Wikipedia) on the structure of stellar interiors.  The Pauli Principle states that no two fermions (Wikipedia) can occupy the same quantum state at the same time.  This principle is responsible for the energy-level structure in atoms.  By applying the fundamental principles of hydrostatic pressure, the gas laws (Wikipedia), and the Pauli Principle, Fowler demonstrated that the density in stellar interiors could far exceed that available in Earth laboratories due to the weight (pressure) of the overlying mass of the star.  Eventually, the electrons would fill all the available energy states in the stellar core and would become degenerate (Wikipedia).
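A rough sketch of the degeneracy argument: compare the electron Fermi energy at a white-dwarf-like density to the typical thermal energy.  The density and temperature below are my illustrative choices, and at this density the non-relativistic formula is already straining (the Fermi energy becomes a sizable fraction of the electron rest energy):

```python
import math

hbar = 1.055e-34   # J s
m_e  = 9.109e-31   # electron mass, kg
k_B  = 1.381e-23   # J/K
m_u  = 1.661e-27   # atomic mass unit, kg
eV   = 1.602e-19   # J

def fermi_energy_eV(rho, mu_e=2.0):
    """Non-relativistic electron Fermi energy (eV) at mass density rho
    (kg/m^3), with mu_e baryons per electron."""
    n_e = rho / (mu_e * m_u)   # electron number density, m^-3
    return (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n_e)**(2 / 3) / eV

# White-dwarf-like interior: ~1e6 g/cm^3 (1e9 kg/m^3) at ~1e7 K.
E_F = fermi_energy_eV(1e9)
E_thermal = k_B * 1e7 / eV
print(f"Fermi energy:   ~{E_F / 1e3:.0f} keV")
print(f"Thermal energy: ~{E_thermal / 1e3:.2f} keV")
```

With the Fermi energy roughly two hundred times larger than the thermal energy, the electrons are strongly degenerate: the Pauli Principle, not temperature, controls the pressure.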

Here are some of the papers by Fowler on this topic, available through the Astrophysics Data System.  Some of the full papers are available for free through these links:
Fowler solved these two very different problems, one involving cold-cathode emission in Earth laboratories and the other the conditions deep in stellar interiors, using the exact same quantum mechanical mathematics!

Electromagnetism by itself was unable to explain the process of cold-cathode emission beyond defining a few simple mathematical relationships from simple experiments.  It took the development of quantum mechanics to connect those simple experiments to more fundamental processes.  Electromagnetism had the same difficulty explaining the photoelectric effect (Wikipedia).

The development of quantum mechanics turned cold-cathode emission from a mysterious behavior into a well-understood process which enabled others to use the idea in developing more sophisticated technologies such as modern flat-panel displays, etc.  It is the same quantum mechanics that predicts 'exotic' states of matter under extreme conditions such as the center of stars.

Saturday, March 13, 2010

Setterfield & c-Decay: "Data and Creation: The ZPE-Plasma Model" III

Barring any new information, this should be my final post critiquing Barry Setterfield's “Data and Creation: The ZPE-Plasma Model".

One significant difference between Setterfield's earlier merging of c-decay with plasma cosmology and the Electric Universe  is in Setterfield's Section III: “The Origin of the Elements”.  In addition to some apparently distorted Electric Universe claims, Setterfield includes a claim from Ed Boudreaux (CreationWiki), a chemistry professor at the University of New Orleans, to the effect that all the elements at their present day abundances could have been created in 30 minutes at a temperature of 10-20 billion K.  The only proviso was that the plasma composition was such that the ratio of protons, neutrons, electrons and ions was the same as that found in water.

Since Setterfield published few details, I tried searching for additional information but found little that was helpful.  However, even from this short description, the process sounded very similar to a process known as Nuclear Statistical Equilibrium, or NSE.  I'll continue this analysis under that assumption.

What is Nuclear Statistical Equilibrium (NSE)?

Consider a very hot (temperature measured in billions of degrees) plasma consisting of free electrons, protons and neutrons.  At any given temperature and density (number of particles per unit volume), many types of reactions can take place. 
  1. Forward reaction: Neutrons can decay into protons and electrons.
  2. Reverse reaction: Electrons and protons can combine to form neutrons (we'll ignore the neutrino to keep the analysis simple).
  3. Forward reactions: The free neutrons and protons can combine to form nuclei.
  4. Reverse reactions: Those nuclei formed in step 3 can also break back down into free neutrons and protons.
In equilibrium, the forward reactions will take place at the exact same rate as the reverse reactions.  Since these reaction rates will vary depending on the number of protons, neutrons, electrons, and the various nuclei which exist at any instant, an equilibrium distribution of the different nuclei will form.  This distribution for a specific temperature, density, and electron-to-baryon ratio, is called nuclear statistical equilibrium, or NSE.
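To make the bookkeeping concrete, here is a toy sketch of the Saha-type equilibrium for a drastically simplified network containing only free neutrons, free protons, and helium-4, with equal numbers of neutrons and protons (electron-to-baryon ratio of 0.5).  This is my own simplification for illustration; the real calculation tracks hundreds of nuclides (which is why the online tools described below exist) and includes corrections ignored here:

```python
import math

# Toy NSE: free neutrons, protons, and alpha particles only, with
# n_p = n_n (Ye = 0.5).  Maxwell-Boltzmann statistics, no Coulomb or
# degeneracy corrections - a gross simplification of a real network.
hbar = 1.055e-34       # J s
k_B  = 1.381e-23       # J/K
m_u  = 1.661e-27       # atomic mass unit, kg
MeV  = 1.602e-13       # J
B_alpha = 28.3 * MeV   # binding energy of He-4

def alpha_mass_fraction(T, rho):
    """Equilibrium He-4 mass fraction at temperature T (K), density rho (kg/m^3)."""
    n_B = rho / m_u                                 # baryon number density
    lam2 = 2 * math.pi * hbar**2 / (m_u * k_B * T)  # (thermal wavelength)^2
    # Saha-type relation: n_alpha = C * n_p^2 * n_n^2
    C = 0.5 * lam2**4.5 * math.exp(B_alpha / (k_B * T))
    # With n_p = n_n = x, baryon conservation reads 2*x + 4*C*x**4 = n_B.
    # The left side grows monotonically with x, so bisection is safe.
    lo, hi = 0.0, n_B / 2
    for _ in range(200):
        x = 0.5 * (lo + hi)
        if 2 * x + 4 * C * x**4 < n_B:
            lo = x
        else:
            hi = x
    return 4 * C * x**4 / n_B

rho = 2.76e12   # kg/m^3 (2.76e9 g/cm^3, as in the sample run below)
for T in (1.0e10, 1.5e10, 2.0e10):
    print(f"T = {T:.1e} K: X(He-4) = {alpha_mass_fraction(T, rho):.3f}")
```

Even this toy model shows the characteristic NSE behavior: near 10 billion K almost all the baryons are bound into nuclei, while by 20 billion K photodisintegration has driven the composition back toward free nucleons.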

The web site Cococubed has movies which illustrate how the composition can vary for different values of temperature, density, and electron-to-baryon ratio in NSE.  The WebNucleo site operated by Clemson University has some online tools for doing this calculation, Nuclear Statistical Equilibrium.   I'll use this tool in this analysis.

As a quick test, one can run the NSE calculator with the default values of the nuclear partition functions (these use the binding energies of the nuclei to determine a statistical 'weight' for the analysis), a temperature of 11,000,000,000 K, a density of 2,760,000,000 grams/cubic centimeter, and an electron-to-baryon ratio of 0.462 (which slightly favors the formation of nuclei with more neutrons than protons).  The computation takes only seconds on the remote servers and we get back data tables and options for plotting.  We choose to plot the atomic number (Z = number of protons) on the x-axis and the abundance on the y-axis.  A logarithmic y-axis scale enables us to examine the wide range of abundances.

In this plot, we see peaks on the left at low Z which correspond to high abundances of hydrogen (Z=1) and helium (Z=2).  Moving to the right, we see a few more peaks around carbon (Z=6) and oxygen (Z=8) and a really broad peak near iron (Z=26).  I'm told this particular set of defaults is probably appropriate to some Type Ia supernovae (Wikipedia).
This plot, generated by the Solar Abundance Tool at WebNucleo, can be used for comparison.  We see that the NSE plot exhibits some characteristics of the chemical abundance of the elements (Wikipedia), but not everything.  One glaring distinction is that the NSE calculation suggests much more iron is formed than carbon and oxygen, in sharp contrast to the solar abundances.

This is because abundances in our region of the galaxy have contributions not only from supernovae, but from the ambient interstellar medium (still heavily loaded with hydrogen and helium).  Supernovae not only explode with a range of different abundances, but they do not always eject all their material into the interstellar medium (ISM).  While Type Ia supernovae are believed to be a total disruption of a white dwarf star which would send everything into the ISM, other types of supernovae can lock up a substantial amount of the heavier elements in a black hole or neutron star.

Boudreaux's Equilibrium?

Now consider Boudreaux's claim that he gets solar abundances with nuclear reactions starting with the composition of water - two hydrogen atoms (1 proton + 1 electron each) and one oxygen atom (8 protons + 8 neutrons + 8 electrons).  The number of electrons is 2*1 + 8 = 10 and the number of baryons is 2*1 + 8 + 8 = 18.  This gives an electron-to-baryon ratio of 10/18 = 0.556 (which would actually favor nuclei with more protons than neutrons).
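The arithmetic is trivial, but here it is as a reusable check (a hypothetical helper for illustration, not part of the WebNucleo tools):

```python
def electron_to_baryon(atoms):
    """Ye for a neutral molecule, given (protons, neutrons) per atom."""
    electrons = sum(Z for Z, N in atoms)      # neutral atoms: electrons = protons
    baryons   = sum(Z + N for Z, N in atoms)  # baryons = protons + neutrons
    return electrons / baryons

water = [(1, 0), (1, 0), (8, 8)]   # H, H, O
print(f"Ye(water) = {electron_to_baryon(water):.3f}")   # prints 0.556
```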
Here is a sample run from the Webnucleo NSE calculator illustrating the change in abundances for several key elements for a range of temperatures between 10-20 billion K and a density of 2.76e10 grams per cubic centimeter.
We can also plot at a single density and temperature.  Here we see that at 10 billion K, we get virtually no elements with Z>40 (zirconium).  We can explore other temperatures in the 10-20 billion K range as well as higher densities, but it becomes clear that it is difficult for this process to generate any heavy nuclei.  I've investigated some of the astronomy & cosmology resources at Boudreaux's site, the Origins Resource Association, but have not found any details that distinguish this claim from a story that Dr. Boudreaux made up for his convenience.

If Dr. Boudreaux were to provide a reasonable justification, he would need to specify such things as:
  1. What is the site for this process?  What is the density?  Did it all happen at the instant of Creation?
  2. How did the elements get dispersed over the billions of cubic light-years of universe?
  3. Once dispersed, how did the elements collapse to form present-day stars, planets, etc.?  Some of the resources at the Origins Resource Association suggest Dr. Boudreaux believes this process cannot happen in the Big Bang scenario.  This is probably a problem for his scenario as well, unless he is invoking a Miracle here.
  4. If Dr. Boudreaux is using current day nuclear physics, we've demonstrated above that this process will not work.  If Dr. Boudreaux is using some alternative claim about nuclear physics, such as accelerated decay rates, etc. he would need to specify the details of this, along with any experimental or observational justification.
Thanks to Dr. Brad Meyer (Clemson University) for assistance with WebNucleo.

Tuesday, March 9, 2010

Setterfield & c-Decay: "Data and Creation: The ZPE-Plasma Model" II

Here I'll continue some of my critiques of Barry Setterfield's "Data and Creation: The ZPE-Plasma Model".  My emphasis will be on Setterfield's Section III: 'The Origin of the Elements'.  Here Setterfield makes some significant changes to material he outlined in his earlier document (see Setterfield & c-decay: "Reviewing a Plasma Universe with Zero Point Energy").

Setterfield repeats his misrepresentation of early stellar evolution
There is another problem as well, which comes into play before the formation of the elements.  It has to do with the proposed formation of the earliest stars which Big Bang proponents say formed those elements.  They need to get a gas cloud to contract enough to form a star.  As a gas cloud contracts, it heats up, and heating causes expansion.  The way BB proponents overcome this problem is to say that complex molecules radiate the heat away in the infrared range, thereby overcoming the heating problem presented by a contracting gas cloud.  The problem there is that they need those complex molecules to form.  That means they need more elements than hydrogen and helium to exist to form those complex molecules.  So where did THOSE elements come from?
Setterfield ignores the fact that stars can form from just hydrogen and helium, with no heavier elements needed.  These are often referred to as Population III stars (Wikipedia).   However, the models indicate that such stars would be much more massive than stars formed with some heavier elements.  This is due to a rather complex interaction between the gas pressure (related to the number of particles, electrons and nuclei) and the opacity (controlled predominantly by the number of free electrons) of the plasma.  As metals become available, stars can form with lower masses.
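A minimal sketch of why the cooling temperature matters so much: the Jeans mass, roughly the minimum mass for a gas cloud to collapse, scales as T^(3/2).  The density, temperatures, and mean molecular weight below are my illustrative choices:

```python
import math

G   = 6.674e-11    # m^3 kg^-1 s^-2
k_B = 1.381e-23    # J/K
m_H = 1.673e-27    # hydrogen mass, kg

def jeans_mass(T, n, mu=2.33):
    """Approximate Jeans mass (kg) for molecular gas at temperature T (K)
    and number density n (particles per cm^3)."""
    rho = mu * m_H * n * 1e6   # mass density, kg/m^3
    return ((5 * k_B * T / (G * mu * m_H))**1.5
            * (3.0 / (4 * math.pi * rho))**0.5)

M_sun = 1.99e30
# Metal-free gas stalls near ~200 K (the molecular-hydrogen cooling floor);
# metal-enriched clouds can cool to ~10 K.  Same density, very different result:
for T in (200, 10):
    print(f"T = {T:3d} K: M_J ~ {jeans_mass(T, 1e4) / M_sun:.0f} M_sun")
```

At the same density, a cloud stuck at the molecular-hydrogen cooling floor has a Jeans mass roughly (200/10)^1.5, or about 90 times, larger - the essence of why Population III stars are expected to be so massive.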

Setterfield also repeats a claim
Fusion occurs readily in plasma filaments under easily reproducible conditions, with no restriction on which elements may be formed
I suspect this may be some mis-citation from Don Scott's The Electric Sky (Setterfield references pg 105-107) since I do not find anything even close to this in Electric Sky.  This statement would be particularly strange coming from any Electric Universe (EU) advocate since part of EU's justification for stars powered by electrical energy is that there are so many nuclear reactions of stellar nucleosynthesis that have NOT been reproduced under laboratory conditions.  I haven't searched all the literature, but I'm pretty certain no experiment has been done to provide a reasonable test of this claim.

However, in this document, Setterfield adds another claim which he attributes to Ed Boudreaux (CreationWiki), a chemistry professor at the University of New Orleans.  According to Setterfield, Boudreaux claims that all the elements at their present day abundances could have been created in 30 minutes at a temperature of 10-20 billion K. 
The only proviso was that the plasma composition was such that the ratio of protons, neutrons, electrons and ions was the same as that found in water.
I have been unable to find further details of this claim, but it sounds like it may be referring to a process known as nuclear statistical equilibrium, or NSE.  Since that is a rather hefty topic, I'll defer that to the next post.

Wednesday, March 3, 2010

Mathematics: The Language of Science

I occasionally receive e-mails, usually from supporters of some pseudoscience I have challenged on these pages, claiming that presenting the mathematical details on my web sites makes them “too complex” and that I should express the science in 'simpler terms' without the mathematics.

The language of science is mathematics. 
This is a concept that links back to Galileo (QuoteDB) and is the reason why technology works: the physical world obeys regular mathematical rules independent of any human belief system.  Scientific concepts are interconnected by the rules of mathematics.  Much has been written about why nature seems to work so well with these techniques (one of the most famous papers on this topic being “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” by Eugene Wigner).  But not all mathematics makes valid science.  One of the goals of research is to determine which subset of mathematical principles applies in various physical and experimental configurations.

Regardless of why it works, the simple fact is that it does work.  The beauty of this is that mathematics provides a rigorous framework which facilitates communication of scientific ideas.

This is why the pseudo-scientists rarely challenge my posts with strong mathematical content.  Most physics-related pseudoscience is communicated not by the rigorous language of mathematics, but by nuanced re-interpretation of terminology and rhetorical tricks.  Pseudoscience is communicated among its supporters more like politics than science.

Do I care that some of the analyses I present on this site are complex? 
One should note that this site actually has a surprisingly broad audience, ranging from high-school science teachers to Ph.D. astronomers who occasionally have to deal with these issues in their classrooms.  I attempt to present the information at the lowest mathematical level needed to illustrate the point.  Most of the mathematics I've presented on this site should be comprehensible to anyone familiar with high-school algebra and physics.  There are only a few pages where I have presented anything at a higher level, such as calculus and differential equations (which are the real workhorse mathematical tools of physics).

I present the details of these analyses because this is how REAL science is done.  Some teachers have expressed interest in using my material as a teaching tool in the classroom, providing examples of how science can test various claims and rule out ideas that don't work.  In science, the ability to identify junk science is just as important as, and perhaps more important than, pursuing leading-edge research.  Instructors who wish to use this material usually have the scientific knowledge to distill it to a level for their target audience.

Could my material be presented in 'simpler statements'?
It is certainly possible that complex scientific topics such as Eugene Parker's solar wind model or forbidden spectral transitions could be explained in a few simple sentences, but such explanations would be of little scientific value.  Both of these topics leverage underlying concepts such as fluid dynamics and quantum mechanics that are complex in themselves.  Could an electrical engineer explain the operation of the semiconductor material in a transistor or VLSI chip (the heart of the computer you're reading this on) in a few simple sentences in a form that is scientifically useful and accurate?

When real scientists express their results in 'simple statements', there are usually many actual measurements and numerical models to back it up.  In the case of my refutations of  c-decay or the Electric Sun model, I demonstrate that I have done an actual analysis of the idea and compared the predictions to actual data, not made up a story.  After that, I may use 'simple sentences' to describe the results and some aspects of the mechanism, but I'll usually link to where I've done the work.  I show my assumptions and their consequences. 

The application of mathematics forces honest scientists to explicitly or implicitly define their assumptions, and let the laws of physics work out their implications. 

But even my analysis is not absolute.  Anyone who wishes to challenge my results can see exactly which assumptions I used, modify those assumptions, and redo the analysis.  However, the challenger must still play by the same rules.  If their rebuttals consist of whining that I “didn't include the (unnamed) non-linearities”, or “it's really electrodynamic”, then they are just spewing useless technobabble, indistinguishable from mutterings in really bad television science-fiction.

If the challenger insists that the standards of science be lowered to accommodate their less rigorous analysis, then they are basically admitting that they are doing pseudoscience.  The supporters of Intelligent Design discovered this in Dover, PA (Wikipedia).  The Electric Universe supporters seem to attack all mathematical models which generate predictions that they don't like, while producing no testable mathematical models themselves.

So when the supporters of pseudo-science complain about how difficult I'm making it for them to 'participate in the scientific debate', by pushing them to show that their claims meet the standards of real science, I know I'm doing something right.
