Friday, November 19, 2010

Holiday Hiatus...

As the holidays approach, a number of other family responsibilities have arisen which severely limit my ability to do weekly updates to this blog.  I also have a number of blog-related projects which are almost complete, yet are being delayed by an increasing number of comments which I am dealing with through the comment system, which limits the use of graphics and equations.  Many of these comments will be far easier to refute with some of the demonstration projects I've had under development.  I'm tempted to avoid significant postings again until I'm ready to release Project 1 (described below).  There may not be significant activity here again until January 2011.

Some of these projects are, in order of highest to lowest priority:
  1. Redshift quantization demonstration:  This has applications to many creationist and plasma cosmology claims.  I've made considerable progress on this and am now concentrating on some of the introductory material needed to explain why just about ANY finite set of data produces a power spectrum with 'peaks' (see the short sketch after this list).  Basically, I start with simple distributions WITHOUT periodicities installed in the data and show they generate peaked power spectra similar to those reported by supporters of redshift quantization.  Then I do the test that redshift quantization supporters never do - actually install the reported periodicities in the data and demonstrate just how different the power spectra appear.  This project will probably be a long series of posts which I have started mapping out.
  2. GPS relativistic corrections: Definitely needed considering how many relativity deniers in the Electric Universe and creationist communities make totally false claims about the size of the relativistic corrections in the GPS system.
  3. The Physics of Lagrange Points: More than the Inverse Square Law.  I've done a lot of work on this recently in my day job, so I should assemble it while it's fresh in my mind.
  4. Follow-up to instrument teams on the Michelson interferometers operating on-board satellites: Michelson interferometers are routinely used for precision wavelength measurements.  I've even used one in undergraduate physics optics labs.  Another problem created by the geocentrists claiming c is really c+v is for high-frequency resonance cavities in space, of which there are many!
  5. N-body code demonstration:  I already have a really nice demonstration code that does non-interacting particles in electric and magnetic fields.  I even have the interface generating output to nice renderers like POVray to generate movies.  My next step is to install particle interactions and to expand the code for gravitational simulations and gravito-electromagnetic simulations.
  6. Particle-in-Cell (PIC) simulation: This is a follow-on to the N-body code.   I also have a need for a generalized capability for this in my day job.
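
To make the point of Project 1 concrete, here is a minimal sketch of the effect (the sample sizes, bin counts, and random seed are placeholders of my choosing, not the final demonstration code):

import numpy as np

rng = np.random.default_rng(42)
z = rng.uniform(0.0, 0.03, size=200)   # 200 random 'redshifts', NO periodicity installed

# Bin the redshifts into a density function, as done in quantization searches
counts, edges = np.histogram(z, bins=1024, range=(0.0, 0.03))

# Power spectrum of the mean-subtracted binned data
power = np.abs(np.fft.rfft(counts - counts.mean()))**2
freqs = np.fft.rfftfreq(len(counts), d=edges[1] - edges[0])

# Even pure noise has a 'strongest peak'; reporting it as a detected
# periodicity without proper significance testing is the error in question.
peak_freq = freqs[np.argmax(power[1:]) + 1]
print("strongest 'periodicity' in pure noise: delta-z ~", 1.0 / peak_freq)
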
I'd like to make the codes for doing this generally available, but after the flak over the "ClimateGate" code (Guardian.co.uk: If you're going to do good science, release the computer code too) I'm trying to upgrade my coding habits to include regular unit testing, etc., rather than using comments to control test cases.  I'm better than many scientists when it comes to programming (I had a decade of business programming experience before I went into science), but I do need to adopt some of the newer programming practices, even for my hobby projects.

I'm also in the process of configuring a new multi-core desktop computer system which will be more dedicated to simulation runs (running these codes on a laptop just sends the cooling fans into overdrive).  It's a real headache to get everything installed 64-bit, so I'm currently in a configure-build-test-delete-change-configure... cycle.  I'll probably post some configuration details on the blog so others can possibly reproduce the work AND see that this work does not always require high-end supercomputing resources (a common complaint of pseudo-scientists is that they can't test their ideas because they are blocked from access to the types of resources available to professionals).

I will probably post responses to some existing comments on the site, but new comments will be let through only as I have time to deal with them.  Those who insist on posting their long tomes, usually poorly referenced, which often trigger the Blogspot posting bug, may find themselves waiting a while.

If you want to follow any sparse activity without repeatedly visiting the site, you can subscribe to the post and comments feeds via RSS (links in left sidebar under "Subscribe To").

Happy Holidays! (just to piss off the fundamentalists and militant atheists bothered by those things :^)

Friday, November 12, 2010

Reading: "Cauldrons of the Cosmos" by Rolfs & Rodney

While my original studies for my Ph.D. were in nuclear astrophysics, it has been a while since I last really explored the topic in terms of what we know about the atomic nucleus, and what that knowledge tells us about the cosmos.  For this blog, I decided to 'recharge' my nuclear physics background, since nuclear physics comes up often in creationist and Electric Universe claims.

To do nuclear astrophysics, one must have a broad understanding of both nuclear physics (how the atomic nucleus works) and astrophysics (to understand the types of environments where energies are sufficient for the nuclear structure to undergo change).  In graduate school, my nuclear astrophysics classes were taught by the man who wrote the book.  In this case, Principles of stellar evolution and nucleosynthesis, by Donald Clayton.

However, Don Clayton's interests were in the later stages of stellar nucleosynthesis, the formation of the heavier elements.  In the situations I deal with, cranks usually try to claim that few to no nuclear reactions are taking place in stars.  For that case, I needed a book that placed more emphasis on the nuclear reactions earlier in a star's life.  It would be even better if I could find a book that covered the status of experimental verification of stellar nuclear reactions.

The book I found which did an excellent job of covering these two topics was

C. E. Rolfs and W. S. Rodney. Cauldrons in the cosmos: Nuclear astrophysics. 1988.

This book presented a lot of material on theoretical and experimental nuclear astrophysics, including a number of items I had not seen covered at this level of detail before.

Going Backwards to Go Forwards
One of the major difficulties in doing experimental nuclear astrophysics is that it is very difficult to reproduce stellar interior conditions in Earth laboratories.  But sometimes nature provides a 'back door' on the process.  We have yet to find a violation of the principle of microscopic reversibility - that all atomic and subatomic reactions can run backwards and forwards.  If we can have the reaction
A + B -> X + Y
then
X + Y -> A + B
is also possible.  As in chemistry, there are physical principles we can use to mathematically relate the rates of the forward and reverse reactions.
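
For two-body reactions with unpolarized, non-identical particles, the reciprocity relation takes the standard form below (my paraphrase of the textbook result; see Rolfs & Rodney for the exact conventions and identical-particle factors):

(2J_A + 1)(2J_B + 1) p_AB^2 sigma(A+B -> X+Y) = (2J_X + 1)(2J_Y + 1) p_XY^2 sigma(X+Y -> A+B)

where the J's are the particle spins and p_AB, p_XY are the relative momenta in each channel.  Measure the cross-section in one direction and the other comes almost for free.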

Photodisintegration, where a high-energy photon 'breaks' a nucleus into smaller components, is very hard to measure.  It requires a high density of high-energy photons that is very difficult to produce in a lab, but which occurs readily at the center of a million kilometers of hydrogen (for an example where the lab is approaching the capabilities of the stellar environment, see 'Out There' Astrophysics Impacts Technology (again)).  In some of these cases, we can measure the capture cross-section of the particles created in the original reaction and then compute the photodisintegration rate using quantum mechanics (pg 148).

Similarly, the very important triple-alpha reaction (wikipedia), where three helium nuclei (alpha particles) collide to form a carbon nucleus, can be examined by exploring the reaction from the opposite direction (pg 282):

carbon-12 + gamma -> alpha + alpha + alpha

Proton+Proton Reaction
One nice part of the Rolfs and Rodney text is the level of detail it provides on the theoretical AND experimental aspects of nuclear reactions in stars, starting with the proton-proton chain and covering the individual reactions.  The reaction at the base of the chain,

p + p -> d + e+ + neutrino

has never been observed under laboratory conditions.  The reason is that the coulombic repulsion between two protons at stellar core temperatures is too large to be overcome by classical means.  However, quantum mechanics gives the process a very small probability of the two protons tunneling through the coulombic barrier to interact (the same process used with electrons in the tunnel diode).  The tunneling probability is LOW.  At a temperature of 10,000,000K, this probability is 9e-10 (pg 155).  Yet even with this low probability, there is so much hydrogen at the center of the Sun, at sufficient density, that the interaction rate makes up for the low probability and accounts for the energy release we measure from the Sun.

In terms of measuring this reaction rate in the laboratory, the calculation on pg 334 puts this in perspective.  The total cross-section for the reaction at a lab energy of 1 MeV is about 1e-47 cm^2.  With a proton beam of 1 milliampere on a THICK target of pure hydrogen with 10^23 atoms/cm^2 in the beam, the time between reactions would be 1 MILLION YEARS.  This experiment is clearly impractical with current technology.

If the proton+proton reaction had a much higher reaction probability, sufficient for us to measure in current laboratories, then the reaction rate at the centers of stars would be so high that stars would have burned out long ago.

While there are very few nuclear reactions which we can measure in the laboratory at relevant stellar energies, the practice is to calibrate the cross-section computation against the data in the energy ranges where we can measure, and then extrapolate this function to lower energies.  The advantage of this is that any additional interactions which could influence the reaction rate would increase the reaction cross-section (pg 189).
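
The extrapolation is usually done through the astrophysical S-factor, which removes the steep Coulomb-barrier dependence so the remaining nuclear part varies slowly with energy.  Here's a minimal sketch of that parameterization (the function names and sample values are mine, not from the book):

import numpy as np

# sigma(E) = (S(E)/E) * exp(-2*pi*eta), the standard factorization that
# pulls the Coulomb-barrier penetration out of the cross-section.
def sommerfeld_exponent(z1, z2, mu_amu, e_kev):
    # 2*pi*eta = 31.29 * Z1 * Z2 * sqrt(mu/E), with mu in amu and E in keV
    return 31.29 * z1 * z2 * np.sqrt(mu_amu / e_kev)

def cross_section_barns(s_kev_barn, z1, z2, mu_amu, e_kev):
    # S-factor in keV*barns in; cross-section in barns out
    return (s_kev_barn / e_kev) * np.exp(-sommerfeld_exponent(z1, z2, mu_amu, e_kev))

# A constant (slowly varying) S-factor still gives a cross-section that
# plunges as E drops toward stellar energies - here for p+p (mu = 0.5 amu):
for e_kev in (1000.0, 100.0, 10.0, 1.0):
    print(e_kev, cross_section_barns(1.0, 1, 1, 0.5, e_kev))
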

Over 100 pages of the book (pg 190-327) are devoted to descriptions of experimental nuclear astrophysics techniques.

Once some of the techniques are described, the book goes into more detailed explorations of nuclear reactions with more complex nuclei heavier than helium.  There is discussion of the nuclear reactions around beryllium-7, which decays via electron capture.  This creates the interesting effect that the beryllium-7 nucleus is stable, while the beryllium-7 atom is not.  This is because the wave-function of the innermost electrons has a significant amplitude in the nucleus itself, increasing the probability that an electron can react with one of the protons in the inverse-beta decay reaction (pg 346):

Be7 + e- -> Li7 + neutrino

This effect from electrons suggested that for beta-decays, the CHEMICAL environment could influence the nuclear decay rate - a fact realized theoretically by Bethe & Peierls in 1934 (1934Natur.133..689B) and explored experimentally in 1947(ref).  The neutrino from this reaction, since it is a branch of the proton-proton chain, is also discussed as a means of testing the reaction (pg 364).  This reaction is also an important step in the formation of neutron stars (link).

Modern experiments with this method of altering radioactive decay rates have occasionally been invoked by Young-Earth Creationists as evidence that nuclear decay rates could have been significantly higher in the past (see Accelerated Radioactive Decay According to Answers in Genesis).

The theoretical and experimental work around more complex reactions is explored as well.  I was surprised to find how many of the reactions of heavier nuclei, so important for describing the steps of stellar nucleosynthesis beyond hydrogen and helium burning, actually have significant laboratory experiments behind them.

Because researchers don't have the resources to explore EVERY possible nuclear reaction hypothesized to occur in stars, they must rely on nuclear models to compute the structure of the nuclei of interest and then compute the cross-sections of the reactions of interest.  One model that has found successful use is the Hauser-Feshbach statistical model, which can be used to compute an energy-averaged cross-section for nuclei where there are many resonances (pg 432).  While these models are used for reactions where we don't have data, they are often tested and calibrated by applying them to reactions where we can obtain experimental data (pg 434).

Pages 493-495 covered some of the early ideas (about 1988) for solving the solar neutrino problem which was still unsolved at that time. 

Stellar Composition Effects
One of the other interesting applications in the book was a discussion of limiting cases in stellar composition.  In the simple case of a chemically homogeneous star, the structure is largely determined by the mean molecular weight, mu.  This value can be approximated, in atomic mass units, by the equation

mu= 1/(2X+0.75Y+0.5Z)

where X is the mass fraction of hydrogen, Y is the mass fraction of helium, and Z is the mass fraction of every element heavier than helium, often referred to as 'metals'.  Since the fractions must add up to 1.0, we require Z = 1-X-Y.  In the text, the authors examine two interesting limiting cases: a star of all iron (Z=1, X=Y=0) and a star of all hydrogen (X=1, Y=Z=0).  In the early 1900s, it was believed that stars were largely iron, due to the large number of iron lines in the Sun's spectrum.  However, once astronomers understood how spectral lines are formed, largely through the work of Cecilia Payne, it was eventually recognized that stars are largely hydrogen.  In the example, we see that the mean molecular weight of an all-hydrogen star is mu=0.5, while for an all-iron star it is mu=2.0.  In terms of gas pressure, this means that the iron star must have about four times the pressure of a hydrogen star to maintain hydrostatic balance (pg 96).
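
As a quick check of the formula (the roughly solar mass fractions in the last line are round textbook values of my choosing, not numbers from the book):

def mean_molecular_weight(x, y):
    # mu = 1/(2X + 0.75Y + 0.5Z) with Z = 1 - X - Y, for a fully ionized gas
    z = 1.0 - x - y
    return 1.0 / (2.0 * x + 0.75 * y + 0.5 * z)

print(mean_molecular_weight(1.0, 0.0))    # all hydrogen: mu = 0.5
print(mean_molecular_weight(0.0, 0.0))    # all 'metals' (e.g. iron): mu = 2.0
print(mean_molecular_weight(0.70, 0.28))  # roughly solar: mu ~ 0.62
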


Applications
One of my motivations for reading this book was not only to refresh my nuclear physics background, but also to explore possible alternative sources of information for dealing with the cranks who claim the Sun and other stars are not nuclear powered.  While the book is now over 20 years old, it directed me to some of the earlier research in experimental nuclear astrophysics, from which I was able to track more modern work and updates via ADS.

One of the most useful items I found was the computation of what would be required to produce the proton+proton -> deuteron reaction in the laboratory.  While various cranks like to claim that the failure to produce the reaction in the laboratory is a failure of nuclear astrophysics, the facts paint a different picture.  It is not that researchers tried to produce the reaction and failed; they have probably never tried to produce that reaction specifically because it has such a low probability at the energies available in the Sun's core.  There were many experiments from the late 1930s to the 1950s colliding protons together over a broad range of energies (refs).

While this twenty-year-old book provided an excellent overview of stellar nuclear reactions, I suspect significant revision of some experimental results will take place as the National Ignition Facility (link) enables these reactions to be explored at significantly higher densities.

Friday, October 29, 2010

GPS, Relativity & Geocentrism

This is in response to Rick DeLano's comments on GPS and relativity, originally posted in the comments of John Hartnett's Cosmos. 1. Introduction.

For this analysis, I will define Physical Geocentrism as a system which claims that the Michelson-Morley experiment (MMX) makes the Earth a fixed, motionless frame of reference.  This seems to be consistent with the claims Mr. DeLano makes in his earlier comments and below.

Just how well tested is relativity?
DeLano: "We are only now beginning to be in a position to determine whether the behavior of “c“ is as predicted in non-Earth reference frames."
False. 

We've pretty much had the capability since the beginning of space flight - and especially interplanetary flight.  We've been able to measure newer predicted relativistic effects since the 1960s, such as the Shapiro Delay (Wikipedia).

In addition, there are currently at least two satellite instruments flying in space, moving relative to the Earth AND to the objects they're imaging, whose optical configuration is similar to the Michelson-Morley experiment.  These systems are used for precision Doppler velocity measurement.  They use 'c' for the velocity of light when removing Doppler effects due to the spacecraft velocity relative to the target.

As high-bandwidth transmissions become common in space, we will have to include the relativistic effects there as well to keep precise timing.  Physics Today: Time dilation seen at just 10 m/s.

But Doesn't GPS Use Geocentric Coordinates?
The GPS system uses several different coordinate systems, including an inertial system with an orientation fixed to the distant stars.  Computations transform to a geocentric system when needed to compute locations physically on the Earth.  For more details and references, see Scott Rebuttal. I. GPS and Relativity.
DeLano: "The early evidence is shockingly unsupportive of Relativity (JPL time correction built into GPS software, for example, which renders “c“ constant in only one frame. Hint: it ain't the solar system barycenter)."
That can only be a credible statement to an audience that knows NOTHING about relativity.

According to relativity, provided you do ALL your calculations in a given frame, you can always use 'c' as the speed of light in that frame. 

That is what frame-independence of the speed of light MEANS. 

If you transition between reference frames you must do the appropriate relativistic transformation and then do all your calculations based on measurements in THAT frame.  Then you can use 'c' in that frame as well. 
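
A trivial numerical illustration of that frame-independence, using the special-relativistic velocity-addition formula (the snippet is my own sketch, not anything from the GPS documentation):

C = 299792458.0  # speed of light, m/s

def add_velocities(u, v):
    # Speed u measured in frame S, as measured from a frame moving at
    # velocity v relative to S: u' = (u - v) / (1 - u*v/c^2)
    return (u - v) / (1.0 - u * v / C**2)

# A light signal comes out at c in EVERY frame, whatever frame velocity we pick:
for v in (0.0, 0.5 * C, 0.99 * C):
    print(add_velocities(C, v) / C)   # always 1.0 (up to floating-point rounding)
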

If You Want to Know About GPS, Read the Spec!
But here is the real killer for the claim of no relativistic effects in the GPS system. Back during the system's development, the contention over the reality of relativistic effects was so severe, a frequency synthesizer was installed to alter the system clock frequency to the relativistically-corrected value - just in case. Neil Ashby describes how the required clock synchronization could not be achieved until the corrected synthesizer was turned on (see General Relativity in the Global Positioning System by Neil Ashby). A copy of the original paper, from 1978, describing the launch and initial testing of the first GPS satellite, is available online: INITIAL RESULTS OF THE NAVSTAR GPS NTS-2 SATELLITE.
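
For scale, here is a back-of-envelope sketch of that correction (round orbital parameters of my choosing; the full treatment, including the geoid potential and Earth's rotation, is in Ashby's article):

import math

GM = 3.986004418e14   # Earth's GM, m^3/s^2
C  = 2.99792458e8     # speed of light, m/s
R_EARTH = 6.371e6     # mean Earth radius, m
R_GPS   = 2.6561e7    # GPS orbital radius, m

v = math.sqrt(GM / R_GPS)                    # circular orbital speed
velocity_shift = -v**2 / (2.0 * C**2)        # satellite clock runs SLOW (special relativity)
gravity_shift  = (GM / R_EARTH - GM / R_GPS) / C**2   # satellite clock runs FAST (weaker gravity)

net = velocity_shift + gravity_shift
print(net)                    # ~ +4.5e-10 fractional rate offset
print(net * 86400.0 * 1e6)    # ~ +38 microseconds per day

Left uncorrected, a rate offset of that size accumulates to roughly 38 microseconds per day, which at the speed of light corresponds to kilometers of ranging error per day.
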

Today, the relativistic correction is described in the GPS specifications. It's available at the Navigation Center of the U.S. Coast Guard, under GPS References, see Interface Specification (IS-GPS-200E, 8 June 2010). The relativistic corrections are described in Sections 3.3.1.1 and 20.3.3.3.3. (Note that the USCG site layout has changed so these links are different than the earlier article.)

So I ask the question, could Physical Geocentrists have built a working GPS system?

But the implications of the absolute reference frame that Physical Geocentrism requires go far beyond what is covered here.

Coming Soon: More technology implications (more things that would NOT work if Physical Geocentrism were valid) AND APPLICATIONS (what WOULD work if Physical Geocentrism were valid) of Physical Geocentrism!

Friday, October 22, 2010

On Dark Matter. II: An Exotic Hack?

This is a continuation of an earlier post, “On Dark Matter. I: What and Why?”

But the planet Neptune is not some 'exotic' form of matter, so it can't be considered Dark Matter.

To re-iterate, “Dark Matter” is an all-encompassing generic term used over the years covering cases where we see evidence of gravity before identifying the mass or masses responsible. 

Don't confuse the terminology used to describe the modern Dark Matter problem with the underlying concept of matter being detected by indirect means - in this case, gravitationally - before it could be directly detected.  I say Neptune was 'Dark Matter', in quotes, to indicate it was not literally called Dark Matter by the researchers of the day.  However, it still meets the requirement of Dark Matter: it was detected through its gravitational influence before it was detected by more direct means.

Today we continue to find remote members of the solar system that were previously below the detection threshold of our instruments.  We also are finally developing an inventory of extrasolar planets, some detected initially by their gravitational influence.  These are also components of Dark Matter (the baryonic component).

So what qualifies as 'exotic' matter?

For a time, Dark Matter had a far broader range of definition, which included baryonic (Wikipedia) matter.  It is only fairly recently, as more of the baryonic components are identified, that the definition has narrowed in on a subatomic particle.

Does the neutrino qualify as 'exotic' matter?  The neutrino is non-baryonic, as are electrons.  Neutrinos are suspected to be just one of the possible components of Dark Matter.

If one wishes to claim that 'dark matter' is nonsense, the statement carries with it the implication that our current level of science and technology is at its peak and there is nothing which our current technology cannot detect.  That is:
  1. We know how to detect all types of subatomic particles in the universe, no matter how they interact with other particles that we know.
  2. Our telescopes can detect all matter in the universe by the light it emits.  There is nothing below the level of sensitivity of our current telescopes.
Dark matter can be ruled out only if you can demonstrate that 1 and 2 are absolutely true.

The history of science and astronomy has shown that assuming nothing can be beyond our current technology's level of detection is a losing bet.  Every time we've had dramatic increases in instrument sensitivity, we've made new discoveries of what is 'out there' and sometimes new discoveries on smaller scales of size as well.

As I noted in an earlier post, Theory Vs. Experiment. II, there is a certain symmetry in the possible existence of an additional class of particles if we group the particles by the interactions to which they respond:

                          color (strong)   electromagnetic   weak   gravitational
Quarks                         YES               YES          YES        YES
Electrons, muons, tau          NO                YES          YES        YES
Neutrinos                      NO                NO           YES        YES
Dark Matter                    NO                NO           NO          ?

Such a pattern, if real, might suggest a new avenue for the whole Grand Unified Theory (Wikipedia) option.  After all, even I am beginning to think string theory is stretching to the point of breaking.

Dark Matter is a Hack

In some ways it is.  But it has the advantage of being a very simple hack: just an additional particle that only interacts via gravity.  Even better, it is a TESTABLE hack.  In your simulations, you add an extra density component that only responds to the gravitational interaction and see how it changes your results (see the sketch below).  Through this process, Dark Matter has made a number of successful predictions detectable in astronomical observations (such as the Bullet Cluster, Wikipedia).
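
As an illustration of just how simple the 'hack' is in practice, here is a toy rotation-curve sketch (the density profiles and masses are made-up round numbers, not a production simulation):

import numpy as np

G = 6.674e-11       # m^3 kg^-1 s^-2
KPC = 3.086e19      # meters per kiloparsec
M_SUN = 1.989e30    # kg

def v_circ(r, m_enclosed):
    # circular orbital speed from the enclosed mass: v = sqrt(G*M(<r)/r)
    return np.sqrt(G * m_enclosed / r)

r = np.linspace(1.0, 30.0, 30) * KPC

# Luminous matter: a crude disk with nearly all of its mass inside ~5 kpc
m_lum = 8e10 * M_SUN * np.minimum(r / (5.0 * KPC), 1.0)

# The 'hack': a dark halo whose enclosed mass grows ~linearly with radius
# (isothermal-like), contributing ONLY through gravity
m_halo = 1e10 * M_SUN * (r / KPC)

for label, m in (("luminous only", m_lum), ("with DM halo ", m_lum + m_halo)):
    v = v_circ(r, m) / 1e3   # km/s
    print(f"{label}: {v[9]:.0f} km/s at 10 kpc, {v[-1]:.0f} km/s at 30 kpc")

The first case falls off in the Keplerian fashion beyond the disk; the second stays roughly flat, which is the qualitative behavior actually observed in spiral galaxies.
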

Another advantage, compared to some other alternatives to Dark Matter, is that a previously undetected particle has the potential of being demonstrated in laboratories (XENON project).

Realistic Alternatives to Dark Matter

Numerous alternatives have been proposed to solve the missing mass problem.  Some, such as Anthony Peratt's galaxy model, have already been ruled out by more recent observations by instruments such as COBE and WMAP.  I've written much about this model on this site as it lives on among Electric Universe supporters.

Modified Newtonian Dynamics (MOND): At one time I regarded this option as borderline 'crank' science.  In recent years, however, its supporters have been producing mathematical models that are actually *testable* against observations.  Unfortunately, unlike the possible particle component of Dark Matter discussed above, it is unclear if MOND could ever be tested at laboratory scales. (Wikipedia)

Relativistic Effects due to Matter Inhomogeneities:  I include this possibility because, when I first read about it, I thought it was a cool idea.  Basically, some aspects of cosmology rely heavily on the universe having a very smooth, uniform density on large scales.  But what happens if there are large non-uniformities?  There were some interesting papers suggesting that the gravitational self-energy (the gravity created by the effective mass density of the gravitational field itself) could distort space-time sufficiently to mimic the effects of Dark Matter.  The last I heard, these results had been dismissed as mathematical errors.

Next: Just how much 'dark matter' do we need?

Friday, October 15, 2010

Reading: "Discarded Science" by John Grant

I've finally managed to read one of the books I picked up several years ago at a science fiction convention, John Grant's “Discarded Science: Ideas that seemed good at the Time...”

The book delves into a number of ideas that, for a time, were actually regarded as science.  While this is the emphasis, it does occasionally divert away from that theme into descriptions of many flavors of pseudo-sciences that were never part of accepted science.

There were a few pages (78-82) devoted to Velikovsky, whose ideas were never accepted science.  However, there was nothing on the modern merging of Velikovsky's ideas with plasma cosmology, the “Electric Universe” crowd.  Eric Lerner's “The Big Bang Never Happened” is listed in the bibliography, but plasma cosmology is not discussed in the text.  Plasma cosmology did enjoy some resurgence of interest before the COBE and WMAP results ruled it out (see Scott Rebuttal. II. The Peratt Galaxy Model vs. the Cosmic Microwave Background).

Creationism and Intelligent Design received some pretty extensive treatment, with mention of connections to other ideas throughout the book.  Some items I found of interest included pre-Darwinian ideas, quite ancient, which suggest the notion that species change over time has a very long history (pg 131).

Grant includes a discussion of panspermia (pg 189), which covers the more legitimate investigations of Fred Hoyle[1] and Chandra Wickramasinghe, as well as its variants that have been integrated into some religions.  Grant even mentions a somewhat irreverent treatment of the idea (pg 214), Allegro Non Troppo (wikipedia), where life evolves from the discarded trash of an extraterrestrial visitor.  A segment of the movie is available on YouTube.

On page 31, Grant reports that Martin Luther and St. Augustine insisted the Earth had to be flat, a claim I cannot say I've heard before.  This brings me to one of my most serious complaints about the book: citations for some of the topics in the text are weak to nonexistent.  This greatly inhibits its use as a more general reference.

Another example is on pages 251-252, where Grant discusses N-rays, mentioning that when R.W. Wood exposed their subjective nature, N-rays were quickly rejected by the scientific community everywhere except France, the home country of the discoverer.  I've not found this cultural bias confirmed in other sources (Skeptic's Dictionary; Wikipedia).

While not relevant to the topics of this site, I did enjoy the sections on anthropology and medicine.  Some of the beliefs discussed were familiar from my youth, as some of the more bizarre ideas mentioned were espoused by family and friends.  It was interesting to read about their origins.  The section on chromotherapy (pg 303-304) was particularly funny, as I recall attending a Mensa meeting where one of the attendees would shine a little red light on her food before she ate it.  High I.Q.s do not make people immune to pseudo-science!

There are a number of additional topics of interest.  Puthoff's zero-point energy (pg 254) is a popular source of claims for some creationist models, such as Barry Setterfield's c-decay.  The section on the Pons & Fleischmann “Cold Fusion” scandal (pg 260) had one of the more entertaining quotes by the author:

“It's an obvious rule of thumb that only a scientific illiterate would attempt to use a lawsuit to influence a scientific debate.”

As already mentioned, my greatest disappointment for using this book as a more general reference is the limited annotations of the various ideas presented.  Hopefully the information is sufficient that I will be able to follow up on some ideas mentioned through other sources. 

Grant has two other books on similar topics, “Corrupted Science” which is waiting on my shelf, and “Bogus Science” which I will probably purchase in the near future.

Footnotes
[1] In graduate school, I had an opportunity to meet Fred Hoyle (Wikipedia) and had him autograph my copy of “Diseases From Space” by Hoyle and Wickramasinghe.

Saturday, October 9, 2010

The IBEX Challenge for the Electric Sun

The IBEX mission was in the news recently yet again, this time publishing a new skymap from the past six months of observations of neutral atoms from the heliopause and beyond.  The new map reveals some significant changes in the emission of energetic neutral atoms (ENAs) along some regions of the 'ribbon'.

IBEX Finds Surprising Changes at Solar Boundary

There are a number of proposed explanations by researchers (some descriptions here at SWRI) actually working with the data (SWRI/IBEX data).  Each of the proposed mechanisms can create mathematical predictions that match some characteristics of the ENA emission, but not all.  As happens in many of these cases, the truth is probably some combination of these mechanisms.

The IBEX results have again caused a stir among the Electric Universe (EU) and Electric Sun (ES) supporters as reinforcing their claims of the Sun being powered by external electric currents.  The new result has re-invigorated the topic at Thunderbolts (forum link).  I've written about this with some earlier IBEX releases:
In the last link, I covered one of the proposed mechanisms for the ribbon emission.  This was a really good paper, as the researchers used their model to generate a neutral atom emission map which could be compared DIRECTLY to the IBEX result.  Here's a comparison:

[Image: the IBEX ENA data map (left) alongside the model-generated map (right)]

The agreement between the actual data (left) and the model (right), which does not include background emission, is surprisingly good.

While the Electric Sun supporters CLAIM their 'model' explains the IBEX result, where are their model predictions that we can directly compare to the IBEX results?  Note the arrogance of the poster 'mharratsc' in this thread (thunderbolts forum), claiming that IBEX PROVES THE ELECTRIC SUN.
Bridgeman et al can yammer up a storm about what they think about the Electric Sun model and plasma cosmology in general, but when it comes to IBEX- their model was PROVEN WRONG.

EU/PC/Electric Sun- VALIDATED.
Really?

Then where IS the equivalent IBEX map generated for the Electric Sun model?  

The ES map should show better (or even perfect) agreement with the data map.  It should show better agreement than the other proposed models.

After all, such a bold claim by EU requires evidence that can be compared to real data.  Without it, why should anyone regard the EU claim as anything more than a fairy tale?  Failure to present the EU claim to the same standards that other scientists must satisfy makes the EU claim look more like scientific fraud.

Here's just a few additional questions I would have for the ES theorists when it comes to explaining the IBEX observations:
  1. Describe the mechanism for the pinch current powering stars to produce this sunward flux of neutral atoms.  How do we compute the particle fluxes and energies?
  2. If the change in the ribbon represents a change in the current of the z-pinch that powers other stars, shouldn't we expect to see a pattern of nearby stars (powered by this stream) changing brightness?  If so, by how much?  When and where could we expect to see this change?
  3. Related to 2, if the IBEX changing knot is the imprint of another current stream against the Sun's current stream, we should be able to use this to build a map of one of these nearby current streams.  Where's the skymap of this current stream?
If the ES model is insufficient to answer these questions, then their claim of a model that explains the IBEX observations is false.

Is there ANY EU supporter up to the challenge of doing something that could be described as REAL science?

Saturday, October 2, 2010

Electric Universe: Plasma Modeling vs. 'Mystic Plasma'

Earlier posts in this topic line:
1) Electric Universe: Real Plasma Physicists Use Mathematical Models!
2) Electric Universe: Real Plasma Physicists BUILD Mathematical Models
3) Electric Universe: Plasma Physics for Fun AND Profit!

So where does this leave us?

Building Plasma Models for EU...

Siggy_G, in comments earlier in this thread, has set himself (herself?) to the task of setting up a suitable simulation for the Peratt galaxy model to run on modern hardware.  It would be consistent with my purpose for this blog to post reports on the progress of that activity.  It would also be fair to discuss computational tricks/techniques for solving the physics of such an effort.  Siggy_G should feel free to contact me directly via e-mail if they wish to discuss the topic.  I might consider setting up a 'sticky post' in this blog on the project so others can observe the progress and problems.  I think it is valuable when others not in the scientific community get a first-hand experience of what solving these types of complex problems is really like.

APODNereid redirected me back to re-examine Peratt's “Physics of the Plasma Universe” in an attempt to determine if Peratt had actually included the effects of gravitation in the simulations that receive so much attention among EU supporters.  On examining the text, I find that Peratt presents a number of examples (pages 62-66) where he sets up energy contributions from gravity and electromagnetism and demonstrates that gravity has a significant contribution in the larger-scale configurations.  In some of his estimates, he appears to take the upper bound for contributions from electromagnetism and the lower bound for contributions from gravity, which biases the result in favor of electromagnetism.

A re-examination of Chapter 8, where Peratt outlines the requirements for simulating these configurations, suggests I had interpreted this incorrectly before.  I read this section with the impression that this was how Peratt had done these simulations.  On reexamination, I now realize Peratt is outlining how he thinks these simulations should be done.  I can find no evidence that the TRISTAN code in the text (Appendix E) includes gravity, and by Peratt's own work, the gravitational energy is NOT negligible when you get to galactic scales.  This also suggests that the scaling laws invoked so many times by EU supporters to turn laboratory experiments into cosmic scale experiments have never included gravity. Thanks to APODNereid for bringing that to my attention.

So clearly Peratt's own analyses were incomplete, and I can't find any evidence in later papers that this shortcoming was repaired.  The paper referenced by Siggy_G, Rotation Velocity and Neutral Hydrogen Distribution Dependency on Magnetic Field Strength in Spiral Galaxies by Snell & Peratt, seems to justify ignoring gravity entirely based on the EM force being 10^7 times larger than gravity for roughly neutral hydrogen (this depends very much on the mass density and temperature, which determine how readily those charges will recombine to form neutral atoms, so this estimate is shaky).

I should add that Peratt's energy analyses are similar to, and consistent with, what I did in my Electric Sun analyses, which EU supporters always claim are wrong.  Why is that?

Nonetheless, I suspect EU supporters will continue to use Peratt's work as their touchstone galaxy model.

Even though EU supporters say plasma models are useless...

On page 126 of The Electric Sky, Don Scott quotes Alfven:
"From the point of view of the traditional theoretical physicist, a plasma looks immensely complicated. We may express this by saying that when, by an immense number of vectors and tensors and integral equations, theoreticians have prescribed what a plasma must do, the plasma, like a naughty child, refuses to obey, The reason is either that the plasma is so silly that it does not understand the sophisticated mathematics, or it is that the plasma is so clever that it find other ways of behaving, ways which the theoreticians were not clever enough to anticipate."  -- H. Alfven. Double layers and circuits in astrophysics. IEEE Transactions on Plasma Science, 14:779–793, December 1986.
Alfven's description gives plasma an almost mystic character, as if it has a mind of its own, like a living being, beyond the ability of physics and mathematics to describe.  Attributing such mystical character to the natural world is common in many religions.  While such prose is common in popular-level science books, only someone with Alfven's level of prestige could have gotten away with making such a statement in a peer-reviewed scientific journal.

Then Dr. Scott tries to make the point...
The Princeton statement [Scott is referring to the Princeton Plasma Physics Laboratory, particularly Magnetic Reconnection] that plasmas are “described very accurately with such a theory” is blatantly untrue.  Indeed, if plasma can be described very accurately with such a theory, why have all attempts to use this theory in order to obtain a sustained and controlled nuclear fusion reaction here on Earth have been so spectacularly unsuccessful for more than 50 years? [D.E. Scott, The Electric Sky, pg 126]
The details that Dr. Scott DOESN'T tell you with this statement are the topic for a future post, but the bottom line is that EU regards even attempts at plasma modeling as doomed to failure.  These statements are clearly not of the form “plasma models are good to precision 'x'” or “good if they include process 'y'”.

In Summary...

As noted in comments to an earlier post, the promoted EU position on plasma modeling appears to be a two-parter:
  • Alfven: All mathematical models of plasmas or discharges are unreliable.
  • Peratt: Claims a successful model of galaxy formation from mathematical plasma model
Both of these statements CANNOT be true!

Since one commenter got so upset over my choice of the term 'schizophrenic' in my earlier post describing EU's position on plasma modeling, I'll clarify with a link to a dictionary definition.

Note definition 2:
2. a state characterized by the coexistence of contradictory or incompatible elements.
I think the term is certainly applicable.

Monday, September 27, 2010

Geocentrism: Galileo was wrong?

Back on September 7, 2010, I received an e-mail from a member of the National Capital Area Skeptics (NCAS) pointing me to this site:

Galileo Was Wrong

The site advertises a conference, scheduled for November 6, 2010 in South Bend, Indiana, on the topic “Galileo Was Wrong: The Church Was Right.  First Annual Catholic Conference on Geocentrism”.
I had meant to write sooner on this topic, but a number of other blogs have given it some entertaining attention.
One of the more entertaining aspects of the “Galileo Was Wrong” site is the number of Ph.D.s, some in physical sciences, listed under “Reviews”.  Not surprisingly, I could find NO evidence that any of these Ph.D. geocentrists have done any work in space-based technologies where their geocentric beliefs would actually be applied to do real things.  None of them appear to be involved in computing complex interplanetary satellite trajectories, or even launching satellites into orbit.  (My favorite quote is from the aerospace engineer who conveniently doesn't tell you about the other reference frames that are important for GPS operation.  In my post, Scott Rebuttal. I. GPS & Relativity, I list a number of references on how GPS actually works.  Some of these are texts used for teaching others how to properly decode the signals when designing new applications.)

Like most pseudo-sciences, its practitioners never actually apply their 'science' in developing real technologies in areas where their 'science' would make a difference.

Wednesday, September 22, 2010

The Classroom Astronomer: Crank Astronomy as a Teaching Tool

I have written an article based on my January 2010 AAS meeting poster (original post), titled “Crank Astronomy as a Teaching Tool”.   It has been published in the current (Fall 2010) issue of “The Classroom Astronomer”.

The main exercise in the article is the Electric Universe “Solar Resistor” model (see Electric Cosmos: The Solar Resistor Model and/or "The Electric Sky: Short-Circuited", pp 17-21) which I converted to a form suitable for analysis on a standard electronic spreadsheet. 

If there is enough interest, I hope to convert some more of the analyses on my sites into forms more appropriate for introductory physics and astronomy classes.

Saturday, September 18, 2010

Baryon Acoustic Oscillations are NOT 'Redshift Quantization'

I recently received this in my email, apparently in response to the redshift quantization article on my main site (William Tifft's Quantized Redshifts).
Re: Tifft's Quantized Redshifts article

How does one deal with the fact that more recent observations have confirmed and extended Tifft's original findings, even some studies by his initial detractors? Recent reports from Daniel J. Eisenstein and his collaborators have shown evidence of the baryon acoustic oscillations in the distribution of galaxies, in data from the SDSS and 2dF surveys. And they, as well as others, are planning even more extensive studies of this phenomenon with the next generation of telescopes and spectrographs.
Some Young-Earth Creationists, as well as Electric Universe supporters, argue that the alleged redshift quantizations are real.

Here's the original story link from the Sloan Digital Sky Survey (SDSS) from 2005.  The lead author of the paper, Daniel Eisenstein, also has a web page concerning the discovery.

The main error the email author makes is the assumption that the 'baryon acoustic oscillations' mentioned in the article correspond to 'redshift quantization'.  A simple search can help clarify (wikipedia: Baryon Acoustic Oscillations).

Not all oscillations are equivalent.  Baryon acoustic oscillations are seen in the cosmic microwave background radiation and are part of the evidence for the correctness of Big Bang cosmology.  There is a pretty good description of this on Eisenstein's site, which graphically plots the evolution of this enhancement by plotting the power spectrum as the universe expands.

The bottom line is that the deviations from a uniform blackbody spectrum observed in the cosmic microwave background (CMB) represent density enhancements in the plasma at the time electrons and protons were binding to form hydrogen atoms.  We expect these density enhancements to provide the 'seeds' of enhanced gravity from which larger cosmological structures would collapse and eventually form clusters of galaxies, galaxies, and stars.  Therefore, we expect this enhancement visible in the CMB to leave an 'imprint' in the distribution of galaxies which we can observe today.

Note that the baryon acoustic peak is a broad peak in the power spectrum, corresponding to a wide range of frequencies, and therefore a broad range of times and distances.  Eisenstein reports this distance scale is on the order of 500 million light-years (the distance between here and the Andromeda galaxy is about 2.2 million light-years).  Clearly the oscillation does not mean that galaxies are spaced only every 500 million light-years, but that the distribution of galaxies is enhanced at those separation scales.  Not a very effective 'quantization', which would require a very narrow frequency peak.

In regards to William Tifft's work, Tifft claimed a very narrow frequency for his quantization, originally 220 km/s, though later papers reported significantly different values.  The claim was that galaxies only existed on 'shells' with this spacing.  Using the latest determined value of the Hubble constant, 72 km/s per megaparsec (about 22 km/s per million light-years), this corresponds to a galaxy 'shell' spacing of about 220 km/s / (22 km/s/Mly) = 10 million light-years.  This is smaller than the spacing of the acoustic peak by a factor of 50!  Clearly the acoustic oscillation does not correspond to Tifft's 'quantization'.
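
The arithmetic is trivial to re-run for other claimed periodicities or other values of the Hubble constant (a tiny sketch; the numbers are just the ones quoted above):

H0 = 72.0 / 3.26    # km/s per million light-years (72 km/s/Mpc; 1 Mpc ~ 3.26 Mly)
shell_spacing = 220.0 / H0                    # Tifft's 220 km/s -> ~10 million light-years
print(shell_spacing, 500.0 / shell_spacing)   # spacing, and the factor-of-~50 mismatch
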

I've added more information about redshift quantization on my blog, incorporating newer results, as well as discussing some of the errors that researchers make in using the power spectral density:

Sunday, September 12, 2010

Science Channel: Top 10 Science Mistakes

The Science Channel has published their list of “Top 10 Science Mistakes” through history.

Ways Scientists Were Wrong- Top 10 Science Mistakes- Science Channel.

While the list included items across all sciences, a few astronomical items made the list, both of which still have adherents today.

First is the popular claim of Young-Earth Creationism (YEC)
No. 6: The Earth Is Only 6,000 Years Old - Top 10 Science Mistakes
which still has adherents (Wikipedia: Creationism, Young Earth Creationism, Creationist cosmologies) but not among professional cosmologists who actually work with real data.

Second, is a smaller group, the biblical geocentrists,
No. 2: The Earth Is the Center of the Universe- Top 10 Science Mistakes
which, while smaller in number, still has some supporters (Wikipedia: Geocentric model, Modern geocentrism).

From my Reading List

I just started reading “Discarded Science: Ideas that seemed good at the time...” by John Grant.  I'm about half-way through it at the time of this writing.  The title is slightly misleading, as Grant often goes beyond what mainstream science thought at the time to describe what appear to be random cranks of the day.  However, even these diversions seem worthwhile, as many of these cranks seem to borrow ideas from each other, incorporating them into their own mythologies.  I've noted this type of behavior on this site, such as YECs adopting some aspects of the Electric Universe for their cosmologies (Barry Setterfield joins the Electric Cosmos?; Setterfield & c-Decay: "Reviewing a Plasma Universe with Zero Point Energy"; The Electric Universe & Creationism).

Tuesday, September 7, 2010

Electric Universe: Plasma Physics for Fun AND Profit!

Now for Part 3 of this series on plasma modeling.
  1. Electric Universe: Real Plasma Physicists Use Mathematical Models
  2. Electric Universe: Real Plasma Physicists Build Mathematical Models!
If, as EU supporters like to claim, plasmas are so intractable mathematically that no one can compute any model with any accuracy, why is commercial-grade software for modeling plasma systems on the market?  The fact that such systems exist at all is evidence that plasmas behave under the influence of natural laws and are not mystical, incomprehensible things.

There is plenty of published evidence of this fact.

Even More Research on Plasma Simulation Development

Consider this published dissertation: Studies of Electrical Plasma Discharges.  Note that Fig 1.10 of this work is generated by a plasma model, and is equivalent to the graphic in James Cobine's "Gaseous Conductors"  from the 1940s (pg 213, Figure 8.4) which EU supporters always like to reference.

Here are just SOME of the articles on plasma simulation codes I found with a quick search at the Cornell Preprint Server:

    •    From Theoretical Foundation to Invaluable Research Tool: Modern Hybrid Simulations
    •    dHybrid: a massively parallel code for hybrid simulations of space plasmas
    •    2.5 Dimensional Particle-in-Cell Simulations of Relativistic Plasma Collisions
    •    Three-dimensional PIC simulation of electron plasmas
    •    Particle simulation code for charge variable dusty plasmas
    •    New parametrization for differences between plasma kinetic codes
    •    Fokker-Planck and Quasilinear Codes
    •    Adaptive Multi-Dimensional Particle In Cell
    •    One-to-one direct modeling of experiments and astrophysical scenarios: pushing the envelope on kinetic plasma simulations

But more than just research papers, there are actually...


Available Plasma Simulation Codes

There is a publication that has collected a number of plasma simulation codes, with articles that serve as documentation:
Consider The Plasma Theory and Simulation Group at UC Berkeley.  If you scroll down the page, the group lists some of the projects, including commercial product development, where they have been involved.  A little further down the page, they actually provide a number of their plasma simulation codes in source code form.  Some of these codes were apparently used in designing a number of plasma devices so they have been tested against experiments.  Since they are provided in source code form, they could probably be compiled for almost any platform!  The EU supporters have NO excuses not to try these out for their favorite model of the Sun or galaxies.  But I will not hold my breath for them to do any actual work.

PLASMAKIN: a chemical kinetics package. From the SourceForge page:  “PLASMAKIN is a package to handle physical and chemical data used in plasma physics modeling and to compute gas-phase and gas-surface kinetics data: particle production and loss rates, photon emission spectra and energy exchange rates.”

VORPAL From the web page: “VORPAL enables researchers to simulate complex physical phenomena in less time and at a much lower cost than empirically testing process changes for plasma and vapor deposition processes. VORPAL offers a unique combination of physical models to cover the entire range of plasma simulation problems. Ionization and neutral gas models enable VORPAL to bridge the gap between plasma and neutral flow physics.”

PicUp 3D: This program models plasma interactions of satellites in the solar wind and other space environments.  A popular claim of EU supporters is that satellites cannot detect a 'uniform' flow of electrons or ions powering an electric Sun.  The problem with this notion is that satellites are not uniform conductors, so embedding one in even a uniform plasma will generate voltages in the satellite's structure as the electrons try to move into a configuration compatible with the plasma flow.  These internal voltages can sometimes kill the satellite.

But few of these codes are new.  How were they developed?

As with most of these types of codes, initially by a small group of researchers, or perhaps even an individual researcher, who had a need for a plasma code and wrote it themselves from scratch.  When the code was found to have reasonable agreement with the experiments the researcher(s) were doing, the code obtained wider distribution, and revisions by others.  Eventually, if the code is found useful for a wide range of problems where there is an industrial, commercial, or security interest, the code might get support from a larger team of researchers and developers, but that doesn't happen until the code has proven its usefulness.

Excuses, excuses...

Why haven't any of these codes been found (and utilized) by the EU 'theorists'?  There is sufficient documentation available that any interested party could run what currently exists, or write their own version in their programming language of choice.  Why aren't the Electric Universe books full of results of detailed simulations from which we can derive numbers which we can compare to actual measurements by spacecraft?

Why do we see nothing from EU but pictures (often taken by others doing legitimate research) and 'stories' indistinguishable from mythology?

Coming soon, some of the odds-n-ends on plasma modeling to close out (at least for now) this topic.

Saturday, September 4, 2010

Science, Reason and Critical Thinking: Modern Science Map

Over at the “Science, Reason and Critical Thinking” blog, Crispian Jago has created an interesting visual representation of the development of modern science.
The chart links prominent individuals in the development of a number of sciences since the 16th century, including theoretical physics and astronomy.  The graphic is laid out like a subway map, with each separate 'line' corresponding to a field of science.  'Transfer station' symbols are located at individuals whose work had impact in multiple fields.

Crispian describes the graphic as a draft, currently version 0.37, subject to revision.  Commenters are reporting a number of corrections and revisions.

I did a similar exercise with my “Cosmos in Your Pocket” paper, converting it into a poster for the AAS meeting in Miami, FL last spring.  I'll explore converting this poster into a similar graphic for posting online - just another item for my extensive To Do list...

Tuesday, August 31, 2010

'Out There' Astrophysics Impacts Technology (again)

A favorite staple of high-tech science fiction, the gigantic laser weapon, may have some limitations imposed by fundamental physics.

Physics Central: Lasers reaching their limit

In 1997 at the Stanford Linear Accelerator Center (SLAC), electrons with 47 GeV (giga-electron volts) of energy were collided with the beam of a green laser.  The high-energy electrons collide with the photons at an angle that transfers energy to the photons in the laboratory rest frame, a process known as the inverse Compton effect.  This increased the energy of the laser photons from the green wavelength of light up into the gamma-ray range.  These gamma-ray photons subsequently collided with other low-energy photons in the laser beam, creating electron-positron pairs.  The two photons collided with enough energy in the center-of-momentum (CM) frame that their combined energy exceeded 1.022 MeV (million electron volts), twice the electron rest-mass energy and the threshold for pair production.  This was the first time photon energy was directly converted into matter.  It is the inverse of the process of electrons and positrons colliding to form gamma-rays.  [NY Times: Scientists Use Light to Create Particles, 4]

It is now becoming clear that above some photon energy density, this pair-production process can happen spontaneously - enough photons will have energy above the threshold that they will start a cascade of pair production,  followed by pair annihilation, followed by pair production...  This would suggest there is a quantum-imposed limit to the energy density of lasers[1].

While this might not seem to be an astrophysics issue, one needs to investigate the history.  I mentioned some of this in an earlier post (see Testing Science at the Leading Edge)

When antimatter was first discovered in 1932, with the identification of the positron, we had the first experimental verification of the process of matter-antimatter annihilation, where the collision of an electron and positron would produce two photons (with no other particles around, at least two photons are required to conserve momentum).

One of the heavily tested (but by no means proven) fundamental principles of physics is that sub-atomic processes are reversible in time.  It is a principle that has been tested in many cases and found to hold, but it has not been demonstrated as an absolute.  However, it holds so well that it is generally assumed valid for interactions where it has not yet been tested.  If an opportunity arises where it is tested and fails, there will undoubtedly be a Nobel prize for that researcher. 

So if an electron and positron can collide to produce two photons, by time-reversal symmetry it stands to reason that one can collide two photons of sufficient combined energy (in excess of 1.022 MeV) and create an electron-positron pair.  The probability for such a reaction was first calculated in 1934, shortly after the discovery of the positron, by Breit and Wheeler[2,3].  This reaction probability was sufficiently small that no one in the 1930s had the technology to test it, so it remained an interesting concept.
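
The threshold condition itself is simple kinematics.  Here's a minimal sketch (the function name and sample energies are my own, not from the Breit-Wheeler paper):

import math

M_E = 0.511  # electron rest-mass energy, MeV

def can_pair_produce(e1_mev, e2_mev, theta):
    # Two photons with lab energies E1, E2 colliding at angle theta can
    # pair-produce if their invariant mass exceeds 2*m_e*c^2 = 1.022 MeV:
    # s = 2*E1*E2*(1 - cos(theta)) >= (2*m_e*c^2)^2
    s = 2.0 * e1_mev * e2_mev * (1.0 - math.cos(theta))
    return s >= (2.0 * M_E)**2

# Head-on (theta = pi): two 0.52 MeV photons just clear the threshold,
# while the same photons colliding at 90 degrees do not.
print(can_pair_produce(0.52, 0.52, math.pi))       # True
print(can_pair_produce(0.52, 0.52, math.pi / 2))   # False
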

But in the 1960s, x-ray detectors (wikipedia, NASA/GSFC) launched on board rockets above the Earth's atmosphere (which is too thick for cosmic x-rays to penetrate) began detecting high-energy point sources in space.  Gamma-ray detectors would detect photons with energies in excess of the 1.022 MeV threshold, and the question arose as to what could produce these high-energy photons (wikipedia, NASA/GSFC).

One of the processes recognized as a possible source of these photons was an extremely high temperature plasma of electrons, positrons, and photons, also called a pair plasma.  Here are just a few of the papers published studying the environment created by such a plasma.

    •    1964, Neutrino Processes and Pair Formation in Massive Stars and Supernovae
    •    1979, Photon Pair Production in Astrophysical Transrelativistic Plasmas
    •    1981, Annihilation radiation from a hot e+e- plasma
    •    1982, Relativistic thermal plasmas - Pair processes and equilibria
    •    1983, Radiation spectrum of optically thin relativistic electron-positron plasma
    •    1984, Spectra from pair-equilibrium plasmas
    •    1995, Thermal Comptonization in Mildly Relativistic Pair Plasmas
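
For a sense of what "extremely high temperature" means here (my estimate, not taken from the papers above): pair processes become important when typical thermal energies approach the electron rest energy,

    k_B T \sim m_e c^2 = 0.511\ \mathrm{MeV} \quad\Rightarrow\quad T \sim 6\times 10^9\ \mathrm{K}

far hotter than any stellar photosphere, but the kind of environment expected near compact objects and in some transient events.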

Astrophysicists had been exploring this type of plasma environment for over thirty years prior to verification of the process in the laboratory, based only on the extrapolation of some very fundamental physical principles.

There are a surprising number of phenomena where a fundamental principle has been subjected to pretty heavy testing at current laboratory scales: energy-momentum conservation, time reversibility, Lorentz invariance, the wave function properties of fermions and bosons, etc.  Astrophysicists have occasionally explored the extreme limits of these principles and obtained some unusual predictions.  For example, the fact that electrons and neutrons are fermions (no two can occupy the same quantum state at the same time, AKA the Pauli Principle) implies that there are high-density configurations where an object can be held up by the 'pressure' created by this limit.  Computations demonstrate that such objects would have sizes and masses consistent with white dwarfs and neutron stars.  I'm still assembling some of the fascinating nuclear physics surrounding these ideas.
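
As a sketch of how the Pauli Principle holds up a star (standard textbook scalings, not a detailed model): a non-relativistic degenerate electron gas of number density n_e exerts a pressure

    P = \frac{(3\pi^2)^{2/3}}{5}\,\frac{\hbar^2}{m_e}\,n_e^{5/3}

independent of temperature.  Balancing this against gravity (P/R ~ G M ρ / R^2) gives the counter-intuitive white dwarf scaling R ∝ M^{-1/3}: MORE massive degenerate stars are SMALLER.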

References
  1. Limitations on the attainable intensity of high power lasers
  2. G. Breit and J. A. Wheeler. Collision of Two Light Quanta. Physical Review, 46:1087–1091, December 1934. doi: 10.1103/PhysRev.46.1087.
  3. M. S. Plesset and J. A. Wheeler. Inelastic Scattering of Quanta with Production of Pairs. Physical Review,  48:302–306, August 1935. doi: 10.1103/PhysRev.48.302.
  4. D. L. Burke, R. C. Field, G. Horton-Smith, J. E. Spencer, D. Walz, S. C. Berridge, W. M. Bugg, K. Shmakov, A. W. Weidemann, C. Bula, K. T. McDonald, E. J. Prebys, C. Bamber, S. J. Boege, T. Koffas, T. Kotseroglou, A. C. Melissinos, D. D. Meyerhofer, D. A. Reis, and W. Ragg. Positron Production in Multiphoton Light-by-Light Scattering. Physical Review Letters, 79:1626–1629, September 1997. doi: 10.1103/PhysRevLett.79.1626.

Saturday, August 21, 2010

Electric Universe: Real Plasma Physicists BUILD Mathematical Models

In the previous post on plasma modeling, I challenged Electric Universe (EU) supporters on their use (or lack of use) of mathematical models which can actually be tested.  The best that the EU boosters could come up with was work by Hannes Alfven, “Cosmical Electrodynamics” (1963) and “Cosmic Plasma” (1981), and Anthony Peratt's “Physics of the Plasma Universe” (1992).  Another popular book quoted by EU supporters is James Cobine's “Gaseous Conductors”, originally published in 1941!

While these are certainly excellent texts on the fundamentals of plasma physics, they are considerably dated in terms of modern techniques of mathematical analysis, plasma simulation, and plasma diagnostics, especially when it comes to using processes such as the spectra of ions and atoms in the plasma to determine physical conditions.  Plenty of researchers have improved experimental and theoretical techniques in the two decades and more since the newest of these books was published.

Then there is the whole issue of the growth of computing power.  Wasn't Peratt's original galaxy model run on a machine around the mid-1980s?  In 1986, the Cray X-MP had a speed of about 1 GFLOP.  Depending on the benchmarks, modern commercial-grade desktop computers are timed at 30-40 GFLOPs (Wikipedia: Xeon processors).  Even desktop-class machines are being combined in ways that create even more powerful multiprocessing clusters (Wikipedia: Xgrid, Beowulf).  EU supporters cannot claim lack of access to reasonable computing power for their own plasma models (if those models actually exist).

There has been significant laboratory and theoretical research on plasmas in nested spherical electrode configurations (similar to some Electric Sun models, such as the one I call the Solar Capacitor Model) in the years since Cobine was published.  This work was usually related to efforts to develop mechanisms for controlled fusion.  Here are just a few of the papers I've found that specifically examine this configuration (a back-of-the-envelope sketch of the underlying space-charge physics follows the list):
  • C. B. Wheeler. Space charge limited current flow between concentric spheres at potentials up to 15 MV. Journal of Physics A Mathematical General, 10:1645–1649, September 1977. doi: 10.1088/0305-4470/10/9/017.
  •  L. J. Sonmor and J. G. Laframboise. Exact current to a spherical electrode in a collisionless, large-Debye-length magnetoplasma. Physics of Fluids B, 3:2472–2490, September 1991. doi: 10.1063/1.859619.
  • A. Ferreira. Fokker-Planck solution for the spherical symmetry of the electron distribution function of a fully ionized plasma. Physical Review E, 48:3876–3892, November 1993. doi: 10.1103/PhysRevE.48.3876.
  • A. Amin, H.-S. Kim, S. Yi, J. L. Cooney, and K. E. Lonngren. Positive ion current to a spherical electrode in a negative ion plasma. Journal of Applied Physics, 75:4427–4431, May 1994. doi: 10.1063/1.355986.
  • E. S. Cheb-Terrab and A. G. Elfimov. The solution of Vlasov’s equation for complicated plasma geometry. I. Spherical type. Computer Physics Communications, 85:251–266, February 1995. doi: 10.1016/0010-4655(94)00144-Q.
  • V. Y. Bychenkov, J. P. Matte, and T. W. Johnston. Nonlocal electron transport in spherical plasmas. Physics of Plasmas, 3:1280–1283, April 1996. doi: 10.1063/1.871752.
  • O. A. Nerushev, S. A. Novopashin, V. V. Radchenko, and G. I. Sukhinin. Spherical stratification of a glow discharge. Physical Review E, 58:4897–4902, October 1998. doi: 10.1103/PhysRevE.58.4897.
  • F. Cornolti, F. Ceccherini, S. Betti, and F. Pegoraro. Charged state of a spherical plasma in vacuum. Physical Review E, 71(5):056407–+, May 2005. doi: 10.1103/PhysRevE.71.056407.
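
As promised above, here is a minimal sketch of what "space charge limited current" means (my own illustration, using the planar Child-Langmuir law rather than Wheeler's concentric-sphere result):

    import math

    EPS0 = 8.854e-12    # permittivity of free space, F/m
    E_CH = 1.602e-19    # elementary charge, C
    M_E  = 9.109e-31    # electron mass, kg

    def child_langmuir_current_density(V, d):
        """Planar Child-Langmuir law: the maximum (space-charge-limited)
        current density, in A/m^2, between plane electrodes separated by
        d meters with a potential difference of V volts."""
        return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CH / M_E) * V**1.5 / d**2

    # Example: 1 kV across a 1 cm vacuum gap -> a few hundred A/m^2
    print(f"{child_langmuir_current_density(1.0e3, 1.0e-2):.3g} A/m^2")

Wheeler's paper works out the much messier concentric-sphere geometry at potentials up to 15 MV, but the physics is the same: the accumulated charge of the particles already in transit chokes off the current at a value set by the applied voltage and the geometry.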

Going Non-Linear...
One of the popular complaints from ES advocates is that my analyses do not treat the 'non-linear' aspects of the Electric Sun model.  If that is their complaint, you'd think ES advocates would be all over this paper:
  • S. Xu and K. N. Ostrikov. Series in vector spherical harmonics: An efficient tool for solution of nonlinear problems in spherical plasmas. Physics of Plasmas, 7:3101–3104, July 2000. doi: 10.1063/1.874166.
EU supporters should be able to use the results of this paper to test an ES model against actual MEASUREMENTS.  Yet in the ten years since its publication, I can find nothing but excuses.  Instead, the EU supporters keep using 'non-linear' the same way creationists use “God did it” - as a magical incantation which frees them from doing any actual WORK that could really be called SCIENCE.
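
For readers unfamiliar with the technique: in one common convention, any vector field around a sphere can be expanded in vector spherical harmonics built from the ordinary scalar harmonics Y_{lm},

    \mathbf{E}(r,\theta,\phi) = \sum_{\ell,m}\left[a_{\ell m}(r)\,Y_{\ell m}\hat{\mathbf{r}} + b_{\ell m}(r)\,r\nabla Y_{\ell m} + c_{\ell m}(r)\,\hat{\mathbf{r}}\times r\nabla Y_{\ell m}\right]

which turns a 3-D nonlinear plasma problem into coupled equations for the radial coefficient functions - exactly the kind of tool needed to compute a spherical 'electric star' model and compare it to data.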

Why don't we see any of the results of the works above (and the many others available) in support of Electric Sun models?  Perhaps it is because
  1. The models did not generate any results that would support ES?
  2. The experiments did not generate any results that would support ES?
  3. The papers didn't have any good 'quote mines' which EU supporters could spin into alleged support for ES?
  4. EU doesn't know about them because they aren't doing any actual research, or 
  5. All of the above?
I vote for 5.

Coming Soon: "Plasma Modeling for Fun AND Profit"!

Saturday, August 14, 2010

On Dark Matter. I: What & Why?

This post is a distillation of some e-mail discussions I have had on this topic.

Some (but not all) young-earth creationists (YEC) deny the existence of Dark Matter because it is needed to keep galaxies and clusters of galaxies gravitationally bound over cosmological times of billions of years.  Since YECs require a universe less than 10,000 years old, such long time scales never arise and so, according to them, neither does the need for Dark Matter.  Their explanation is that the structures we see were created in their present form by a deity and have not had time to undergo any detectable change.

Electric Universe (EU) supporters deny the existence of Dark Matter under the justification that galaxies are powered by giant Birkeland currents and that this mechanism explains the rotation curves of galaxies.  These currents remain undetected, in spite of the fact that WMAP had more than enough sensitivity to detect the synchrotron radiation Dr. Peratt claimed they should emit.

Some popular-level treatments of Dark Matter:
365 Days of Astronomy Podcast, Dark Matter: Not Like the Luminiferous Ether, by Rob Knop
Dark Matter: A Primer

What is “Dark Matter”?
“Dark Matter“ is a generic term for matter whose precise nature we do not yet know.  Once we know what it is and can detect it directly, it will certainly be renamed.  Its most general description is matter which can be detected by its gravitational influence but not (as yet) by more direct means, such as emitted light.

Over the years, its observational definition has changed as refined instruments made it possible to identify some non-luminous or low-luminosity components of dark matter with known objects and processes.

- MACHOs: non-luminous stellar-scale objects detected as part of the MACHO project

- Ionized hydrogen: free protons (positive hydrogen ions, sometimes called HII by astronomers) have no spectrum.  However, because ionizing hydrogen contributes an equal number of free electrons to the intergalactic medium, it can alter the balance of ions of other elements which do have spectra we can detect.  This relationship allows us to infer the amount of ionized hydrogen in the IGM.

- Neutrinos: For a number of years, neutrinos with mass were regarded as the prime candidate for dark matter.  As solar neutrino and ground-based neutrino-oscillation experiments placed tighter limits on the mass and other characteristics of the neutrino, it was eventually realized that neutrinos could be only part of the non-baryonic Dark Matter problem.

Dark Matter hasn't been demonstrated in the laboratory, so why believe it exists?

Many things were 'known' before they could be clearly demonstrated in the laboratory.  In many cases it was possible to devise indirect tests which were used to narrow in on the details.  This information was then used to refine future experiments and techniques for direct detection.  And not all of these problems involved distant objects in space.
  • From about 1920 to 1932, atomic physicists could not explain why most atoms were about twice as massive as the protons they contained.  They knew there was something that made up for the mass difference and primary speculation was some type of tightly bound proton-electron configuration, but those types of models did not produce good results.  The answer would await the discovery of the neutron in 1932, which did not interact by the electromagnetic force.  I have yet to find any papers predicting the existence of a neutral particle with a mass approximately that of the proton.
  • From 1933 to 1956, nuclear physicists had great success calculating nuclear reaction rates using a hypothetical particle they called the neutrino.  The neutrino salvaged conservation of energy and explained why electrons emitted in beta decay did not have a fixed energy (characteristic of a 2-body decay process) but exhibited a range of energies up to the maximum allowed by energy conservation, characteristic of a many-body decay process.  The neutrino would not be detected directly until 1956.  The neutrino did not interact electromagnetically or via the strong nuclear force.
  • The 1/r^2 force law of Newtonian gravity was not demonstrated at laboratory scales until the 1990s.  The real precision in defining the Newtonian gravitational force was established primarily through observations and precise measurements of planetary motion done years before we could actually travel in space.  If U.S. science had required strict laboratory demonstration of Newtonian gravity before launching our first ballistic missiles or orbiting satellites, the Soviet Union would have kept its lead in spaceflight.
 In addition, astronomy has a rather successful history of detecting things first by their gravitational influence and confirming the objects later as detection technology improved.  Consider these examples from the history of astronomy:
  • The planet Neptune could be considered the first example of 'dark matter', detected gravitationally before being seen optically.  We didn't know that the planet had to exist; we only observed discrepancies in the orbit of Uranus and inferred the existence of a planet based on the understanding of gravity that existed at the time.  Alternatives, such as an extra term in Newton's gravitational force law, were examined as well.
  • Perturbations in the motions of the stars Sirius and Procyon, detected in 1844, were due to white dwarf companion stars too faint to be seen by telescopes of the day.  It took 50 years for telescopes to improve to the sensitivity at which these small, faint stars could be detected close to a bright primary star.  For those 50 years, these stars were 'dark matter'.  We would later determine that these white dwarf stars hinted at another state of matter, existing at densities too high to be produced in current laboratories.
  • Perturbations in the spectral lines of distant stars have been used since 1995 to detect extrasolar planets.  These perturbations are due to the gravitational influence of the orbiting planet on its parent star.  Only recently have some of these planets been imaged directly (a rough estimate of the signal size follows this list).
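
To get a feel for the precision involved (my own numbers for a Jupiter-Sun analog, not specific to any survey):

    import math

    G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30    # kg
    M_JUP = 1.898e27    # kg
    A_JUP = 7.785e11    # Jupiter's orbital radius, m

    # Circular-orbit speed of the planet, then the star's reflex speed
    # from momentum balance: M_star * v_star = m_planet * v_planet
    v_planet = math.sqrt(G * M_SUN / A_JUP)   # ~13 km/s
    v_star   = (M_JUP / M_SUN) * v_planet     # ~12.5 m/s

    print(f"stellar wobble: {v_star:.1f} m/s")
    print(f"fractional Doppler shift: {v_star / 2.998e8:.1e}")  # ~4e-8

Measuring a fractional line shift of a few parts in 10^8 is why this technique only became practical in the 1990s.
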
 Just as in these historical examples, we know there are limits in our ability to detect some processes and particles.  If a problem is solved by known processes that operate below our current detection threshold, then these are reasonable lines of research to pursue (dark matter, the proton-proton reaction).  However, if the suggested solution requires a process that should be well within the detection threshold of current technology, it is most likely a dead end for research (see “Testing Science at the Leading Edge”).

To be continued...
Minor typo fixed.  Thanks to the commenter who caught it.

Blogspot problems posting comments

Over the past few weeks, there have been an annoying number of problems posting comments: the comment system reports an error message, suggesting the comment was rejected, while actually accepting the comment.

This issue has been reported by multiple users in the blogspot help forums but does not appear to have been fixed, at least as of yesterday evening.  There have been a number of changes made in the comment moderation software and this may be related.

Hopefully it will be fixed soon. 

In the meantime, if you get an error message starting with "URI too long..." while posting a comment, odds are good that the comment was recorded by the system.

My apologies for the aggravation.  It happens to me too. 

Saturday, August 7, 2010

Electric Universe: Real Plasma Physicists Use Mathematical Models!

One of the problems with Electric Universe (EU) claims is that supporters seem incapable of producing mathematical models that other researchers can use to compare the predictions of EU theories to observations and experiments.  The common EU excuse is that plasma behavior is too complex to be modeled mathematically.  But that excuse reveals an almost schizophrenic mindset in the EU community.

One of the heroes of the EU supporters is Hannes Alfven (Wikipedia).  They rarely mention Alfven without mentioning that he was a winner of the Nobel Prize in Physics in 1970 (Nobel) and that this gives him more credibility than other researchers.  However, Alfven is not the only winner of the Nobel prize.  There are laureates back to 1901 (Nobel Physics Laureate List), including a number of prizes related to astrophysics:
  • 1951: John Cockcroft and Ernest Walton for studies in the transmutation of the atomic nucleus.  Much of this effort was driven by George Gamow's (wikipedia) theoretical work on quantum tunneling for the nuclear reactions needed to power the stars.
  • 1967: Hans Bethe (wikipedia) for solving the problem of stellar nucleosynthesis, building the light elements from hydrogen by a series of fusion reactions.  Bethe did this work in 1939.  A few years later he would be leading the theory group at Los Alamos as part of the effort to build the first atomic bomb.  He would later lead the theory group for the development of the hydrogen bomb.
  • 1983: Subrahmanyan Chandrasekhar and William Alfred Fowler for their work in nuclear astrophysics.
  • 1993: Joseph Taylor and Russell Hulse for demonstrating tests of general relativity in the binary pulsar.
  • 2006: John Mather and George Smoot for the COBE measurements of the Cosmic Microwave Background.
Many of these other prizes were awarded for achievements which EU claims are not valid science.  So what makes Alfven's claims about plasma cosmology more valid, when his prize was awarded for the development of magnetohydrodynamics (MHD), NOT for his work on plasma cosmology?

So how does Alfven's Nobel Prize for MHD give plasma cosmology more credibility than the Nobel Prizes received by others FOR work on the standard cosmology?  Is the prize Nobel or ignoble? 

But what about MHD?  Just what is MHD?  MHD is a set of mathematical equations (Wikipedia) which describe the behavior of certain classes of plasmas.  MHD works best for dense plasmas, where the mean-free-path of the charged particles (the average distance between particle collisions) is small compared to the gyro-radius (the radius of a particle's orbit in the magnetic field).  In this regime the plasma behaves much more like a fluid (hence magnetoHYDROdynamics).
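
For reference, here is one standard form of the ideal MHD equations (a textbook summary, not tied to any particular reference above):

    \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0

    \rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + \frac{1}{\mu_0}(\nabla\times\mathbf{B})\times\mathbf{B}

    \frac{\partial\mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}), \qquad \nabla\cdot\mathbf{B} = 0

Note that these are evolution equations for mass density, velocity, pressure, and magnetic field - quantities that can be tied back to actual measurements.  That is exactly what makes MHD a testable mathematical model.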

    •    Magnetohydrodynamics at Scholarpedia.
    •    Computational Magnetohydrodynamics at Wikipedia
    •    Plasma Modeling at Wikipedia
    •    Plasma Physics at Wikipedia

Alfven's accomplishments in astronomy did earn him the Gold Medal of the Royal Astronomical Society in 1967 and he won the Nobel prize for MHD which is used actively in astronomy today (including cases with less than infinite conductivity).  The chronic EU claim that Alfven was ignored by the astrophysical community doesn't hold up to the facts.  Like all scientists, Alfven had ideas that worked and ideas that didn't.  His ideas that actually worked were clearly adopted and appreciated by the astrophysical community.

Most of the criticisms of Alfven seem to focus on his tendency to cling to ideas, such as plasma cosmology, that were clearly failures.  One of the greatest problems I've had with Alfven's papers is his focus on quantities such as the total current in a system.  While this quantity is useful for exploring constraints such as the energy budget (matching energy inflows to outflows), it is otherwise very difficult to tie back to what an observation or instrument might actually measure, such as a flux density.

Many other Electric Universe 'heroes' developed mathematical models of plasmas as well.  Anthony Peratt's galaxy model received some examination because it was presented in a form that facilitated mathematical analysis.  The problem is that all the evidence indicates Nature didn't see fit to actually build galaxies that way (see "Scott Rebuttal. II. The Peratt Galaxy Model vs. the Cosmic Microwave Background", "Electric Universe: More data refuting the EU galaxy model").

Irving Langmuir, who coined the term 'plasma', also pioneered the mathematical analysis of plasmas and electric discharges in gases.  He was the first to explore the effect of 'space charge' (Wikipedia) in a plasma, where the changing velocities of electrons and ions in an electric field create regions of net charge density which can have significant effects on the plasma flow.
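
'Space charge' is not hand-waving, either; it is governed by Poisson's equation.  Wherever the electron and ion densities fail to cancel,

    \nabla^2\phi = -\frac{\rho}{\epsilon_0} = -\frac{e}{\epsilon_0}\,(n_i - n_e)

(written here for singly-charged ions), and the resulting potential feeds back on the particle velocities - which is exactly why these problems must be solved self-consistently, with mathematics.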

Considering how many of the EU supporters' 'heroes' were pioneers and strong advocates of mathematical modeling of plasmas, EU's denial of plasma modeling can best be described as hypocritical, or schizophrenic.

Sunday, August 1, 2010

Darwin & Hitler: the Intelligent Design-Eugenics connection?

Periodically, the “Hitler supported evolution” claim is raised by supporters of Creationism and Intelligent Design (ID). It was used heavily in the ID-supported 'documentary' “Expelled: No Intelligence Allowed” (see "Expelled Exposed"). Recently, the “Exposing Pseudoastronomy” blog had an interesting take with "If Darwin Is Responsible for the Holocaust, Newton Is Responsible for Bombs", making the point that scientific discovery is morally neutral and that knowledge can be used for good or evil. The same studies of atomic and nuclear physics that made modern computers possible also contributed to the development of the atomic bomb. Bottom Line: Blaming science for human abuses of knowledge is a cop-out.

I had always seen this claim justified, not by what Hitler actually wrote or said, but by someone else's *interpretation* of Hitler's behavior or writing. Considering the level of distortion possible through such third-hand routes, I decided to read “Mein Kampf” for myself.

First note that I read the Ralph Manheim translation (1998, Houghton Mifflin Co, ISBN: 0-395-92503-7) which is in many bookstores. Hopefully I caught all the typos in my transcription below. I'll give the page numbers so others can confirm my claim (and perhaps check against other translations).

The rest of this post is based largely on a thread I originally posted to the USENET group Talk.Origins, in August 2006.

So what did I discover in this reading? Hint: I didn't find a single mention of Darwin in the nearly 700 pages of Hitler's ramblings.

Hitler believed he was doing God's work:
“Hence today I believe that I am acting in accordance with the will of the Almighty Creator: by defending against the Jew, I am fighting for the work of the Lord.“ [pg 65]
In fact he used many religious comparisons throughout the text:
“Sooner will a camel pass through a needle's eye than a great man be 'discovered' by an election.“ [pg 88]
“Verily a man cannot serve two masters. And I consider the foundation or destruction of a religion far greater than the foundation or destruction of a state, let alone a party.“ [pg 114]
“Certainly we don't have to discuss these matters with the Jews, the most modern inventors of the cultural perfume. Their whole existence is an embodied protest against the aesthetics of the Lord's image.“ [pg 178]
“Anyone who dares lay hands on the highest image of the Lord commits sacrilege against the benevolent creator of this miracle and contributes to the expulsion from paradise.“ [pg 383]
So regardless of any atheistic inclinations he exhibited after obtaining power, during his rise to power, he knew well invoking religion would increase his support among the populace. How many modern politicians exploit that same trick?

He expressed admiration of Christianity for its fanaticism:
“The greatness of Christianity did not lie in attempted negotiations for compromise with any similar philosophical opinions in the ancient world, but in its inexorable fanaticism in preaching and fighting for its own doctrine.“ [pg 351]
and the adherence to dogma over science:
“Here, too, we can learn by the example of the Catholic Church. Although its doctrinal edifice, and in part quite superfluously, comes into collision with exact science and research, it is none the less unwilling to sacrifice so much as one little syllable of its dogmas. It has recognized quite correctly that its power of resistance does not lie in its lesser or greater adaptation to the scientific findings of the moment, which in reality are always fluctuating, but rather in rigidly holding to dogmas once established, for it is only such dogmas which lend to the whole body the character of a faith. And so today it stands more firmly than ever. It can be prophesied that in exactly the same measure in which appearances evade us, it will gain more and more blind support as a static pole amid the flight of appearances.“ [pg 459]
“Faith is harder to shake than knowledge, love succumbs less to change than respect, hate is more enduring than aversion, and the impetus to the mightiest upheavals on this earth has at all times consisted less in a scientific knowledge dominating the masses than in a fanaticism which inspired them and sometimes in a hysteria which drove them forward.“ [pp 337-338]
He didn't like the notion of being compared to apes (common ancestry with apes is a common complaint in creationist literature):
“A folkish state must therefore begin by raising marriage from the level of a continuous defilement of the race, and give it the consecration of an institution which is called upon to produce images of the Lord and not some monstrosities halfway between man and ape.“ [pg 402]
Here it almost looks like he's describing the Theory of Evolution:
“Nature herself in times of great poverty or bad climatic conditions, as well as poor harvest, intervenes to restrict the increase of population of certain countries or races; this, to be sure, by a method as wise as it is ruthless. She diminishes, not the power of procreation as such, but the conservation of the procreated, by exposing them to hard trials and deprivation with the result that all those who are less strong and less healthy are forced back into the womb of the eternal unknown. Those whom she permits to survive the inclemency of existence are a thousandfold tested, hardened, and well adapted to procreate in turn, in order that the process of thoroughgoing selection may begin again from the beginning. By thus brutally proceeding against the individual and immediately calling him back to herself as soon as he shows himself unequal to the storm of life, she keeps the race and species strong, in fact, raises them to the highest accomplishments.“ [pp 131-134]
but then there's this:
“No more than Nature desires the mating of weaker with stronger individuals, even less does she desire the blending of a higher with a lower race, since, if she did, her whole work of higher breeding, over perhaps hundreds of thousands of years, might be ruined with one blow.“ [pg 286]
where he suggests higher breeding is a GOAL of Nature. Isn't that one of the claims of Intelligent Design???

And this is consistent with:
“And in this it must remain aware that we, as guardians of the highest humanity on this earth, are bound by the highest obligation, and the more it strives to bring the German people to racial awareness so that, in addition to breeding dogs, horses, and cats, they will have mercy on their own blood, the more it will be able to meet this obligation.“ [pg 646]
Hitler compares his program of racial purification not to Darwin's natural selection, but to ANIMAL BREEDING or 'controlled selection', a practice which predates Darwin by thousands of years. Such 'controlled selection' was practiced by humans in forms ranging from 'ethnic cleansing' to maintaining 'royal' bloodlines LONG before Darwin. Like other pseudosciences, such racial programs were happy to incorporate modern scientific terminology in an attempt to enhance their credibility (see “Electric Universe: Everything I needed to know about science I learned from watching Star Trek?”). That species can change was known by animal breeders for millennia - Darwin just recognized that the natural environment could also act as a selection mechanism.

One of the key arguments used to support Creationism and Intelligent Design is that Natural Selection 'loses information', or is a 'degenerative' process, a claimed consequence of the Second Law of Thermodynamics. This seems to be the very argument that Hitler uses against 'natural selection': it still allows 'unfit' individuals to breed, so he clearly advocated controlling breeding based on his own criteria of 'fitness'.

The notion of Intelligent Design is that for 'higher' beings to evolve, a 'Designer' must intervene, lest Natural Selection cause the population to 'lose information' and degenerate. How is this different from Hitler's justification of his eugenics (Wikipedia) policies [note the 'defilement' quote from page 402 above]?

From an operational perspective, the only difference I can see between eugenics and Intelligent Design is that eugenics is willing to name the designer (other humans)!  I have been disturbed by the amount of ID rhetoric which seeks to sharpen the distinction between (superior) humans and (inferior) non-human species.  How different is this from the rhetoric of racist groups who equate others to non-humans?

Could Intelligent Design be a Trojan Horse for eugenics?

So...What Happened?

Wow.  It's been over eight years since I last posted here... When I stepped back in August 2015,...