I discussed in a previous article one of the most important discoveries in all of astronomy — the discovery of dark energy. Key to this is an accurate measurement of the distances and recession velocities of distant galaxies. For this, cosmologists have used Type Ia supernovae.
Type Ia supernovae are ideal for cosmological measurements. They are extremely bright, so they can be viewed at very large distances. And even more importantly, they are empirically observed to occur at the same intrinsic luminosity (at least once a few corrections have been made). This means that when a Type Ia supernova is observed, one can determine how far away it must be based on how bright it appears. But there is one problem with Type Ia supernovae — it is not known what they are. I recently returned from a workshop at Carnegie on Type Ia supernova progenitors, so I thought I would summarize the state of the field.
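To make the standard-candle logic concrete: given an assumed peak absolute magnitude (M ≈ −19.3 is a commonly quoted value for Type Ia supernovae, not a number from this article), the distance follows directly from the distance modulus, m − M = 5 log₁₀(d / 10 pc). A minimal sketch:

```python
def luminosity_distance_pc(apparent_mag, absolute_mag=-19.3):
    """Distance in parsecs from the distance modulus m - M = 5 log10(d / 10 pc).

    The default absolute_mag of -19.3 is a commonly quoted peak value for
    Type Ia supernovae; it is an illustrative assumption, not a measurement.
    """
    return 10.0 ** ((apparent_mag - absolute_mag) / 5.0 + 1.0)

# A supernova observed at a peak apparent magnitude of 19.2
# lies at roughly 500 Mpc.
print(f"{luminosity_distance_pc(19.2) / 1e6:.0f} Mpc")  # prints "501 Mpc"
```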
What we do know about Type Ia supernovae
Strictly speaking, the Type Ia classification is a spectroscopic and photometric classification. It is important to note that the classification does not really care what the supernova is, only about the features of the spectrum and light curve. A Type Ia supernova has the following characteristics:
- There is a secondary peak in the infrared light curve
- No hydrogen lines in the spectrum
- No helium lines in the spectrum
And that’s it. In practice, the first point is often not used when classifying a supernova; instead, an alternative spectroscopic characteristic is used:
- Silicon lines are present in the spectrum
Other supernova classes have different spectral lines in their spectra. Although this classification scheme is not based on what the supernova really is, it so happens that Type Ia supernovae are physically quite different from other kinds of supernovae, which are collectively referred to as core-collapse supernovae. There are many open questions about core-collapse supernovae, but the basic picture is that they occur when massive stars (at least 8 solar masses or so) run out of nuclear fuel to burn at the end of their lives.
Type Ia supernovae are a completely different kind of phenomenon. What we do know from the spectroscopic properties and the shape of the light curve is that Type Ia supernovae are exploding white dwarfs. Most white dwarfs are made principally of carbon and oxygen, and if the density of the white dwarf gets large enough, the carbon and oxygen can be made to undergo a runaway thermonuclear reaction. In this reaction, the carbon and oxygen of the white dwarf burn all the way to nickel-56, an unstable isotope with a half-life of about six days. The nickel-56 decays to cobalt-56, which is also unstable, with a half-life of about 77 days. The cobalt-56 then decays to iron-56, which is stable. It is the radioactive decay of nickel and cobalt that powers the light curve of the supernova.
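The shape of the light curve follows from this two-step decay chain. A minimal sketch, treating each decay as releasing one unit of energy (a simplifying assumption; real light-curve models weight each decay by its actual Q-value):

```python
import math

# Half-lives (in days) for the 56Ni -> 56Co -> 56Fe decay chain.
T_NI, T_CO = 6.075, 77.2
LAM_NI, LAM_CO = math.log(2) / T_NI, math.log(2) / T_CO

def n_ni(t):
    """Fraction of the original 56Ni remaining at time t (days)."""
    return math.exp(-LAM_NI * t)

def n_co(t):
    """Fraction of nuclei currently sitting as 56Co at time t
    (the Bateman solution, assuming no cobalt at t = 0)."""
    return LAM_NI / (LAM_CO - LAM_NI) * (math.exp(-LAM_NI * t) - math.exp(-LAM_CO * t))

def decay_power(t):
    """Relative heating rate from the two decays, counting each decay
    as one unit of energy (a simplification for illustration)."""
    return LAM_NI * n_ni(t) + LAM_CO * n_co(t)

# The heating rate falls steeply at first (nickel-dominated), then more
# slowly once the longer-lived cobalt takes over.
for t in (1, 20, 100):
    print(f"t = {t:3d} d  relative power = {decay_power(t):.4f}")
```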
The central question of the Type Ia supernova progenitor problem is: What causes the white dwarf to explode? An isolated white dwarf is stable, so some companion must be present. But it is unknown what the companion is. Here are the competing models:
The single-degenerate model
Until recently the single-degenerate model was the only model typically taught in introductory astronomy classes and textbooks. If you had heard of Type Ia supernovae at all before, you probably learned the single-degenerate model. The single-degenerate model is so named because there is only one white dwarf (i.e., one degenerate object). The companion is some other star, possibly a main sequence star or possibly some sort of evolved star, like a red giant or an AGB star. At some point during the lifetime of the companion star, its radius increases as it evolves. If the radius increases enough, the star can fill its Roche lobe. When this happens, the outer atmosphere of the star becomes unbound, and some fraction of it streams toward the white dwarf, where it forms an accretion disk and eventually settles onto the surface of the white dwarf.
Over time, then, the white dwarf grows by accreting matter from its companion. It cannot do this indefinitely, however, because there is a maximum mass at which a white dwarf can support itself against gravity, known as the Chandrasekhar mass. Once the white dwarf exceeds this mass, it collapses, igniting a thermonuclear chain reaction, and explodes.
The advantage of the single degenerate model is that it elegantly explains why nearly all Type Ia supernovae have very similar luminosities: it predicts that every Type Ia supernova is the explosion of a white dwarf at the Chandrasekhar mass. The single degenerate model also has the advantage that it correctly predicts the abundances of certain elements produced in Type Ia supernovae, in particular magnesium. It is very difficult to produce magnesium except in the very dense conditions of an exploding Chandrasekhar-mass white dwarf. If the explosions typically occurred in lower-mass (and hence less dense) white dwarfs, it would be difficult to explain the abundance of magnesium observed in the Sun and other Sun-like stars.
There are, however, several difficulties with the single degenerate model. Two of these difficulties are related to the observational fact that no hydrogen is observed in the spectrum. The first is that it turns out to be very difficult for the white dwarf to accrete matter from its companion in a way consistent with observations. If the white dwarf accretes matter too quickly, it builds up a large hydrogen or helium atmosphere which engulfs its own Roche lobe, leading to common envelope evolution. But if the white dwarf accretes matter too slowly, it builds up a thin, cold, degenerate hydrogen atmosphere. Once the atmosphere gets heavy enough, it can rapidly ignite hydrogen burning. This causes a small explosion in the hydrogen atmosphere of the white dwarf which blows away the rest of the accreted material, leading to no net mass gain. There seems to be only a small range of mass transfer rates (only a factor of a few wide) within which it is possible for the white dwarf to steadily gain mass. Achieving these mass transfer rates is certainly possible in some systems, but seems to require fine tuning in general.
The second difficulty with the single degenerate model is simply the lack of hydrogen observed in the spectrum. The white dwarf is accreting hydrogen, and although the hydrogen on the surface of the white dwarf immediately burns into helium, one would expect some of the hydrogen in the accretion disk to appear in the spectrum. Moreover, the shock from the explosion slamming into the companion star should puff the companion up, increase its luminosity, and produce emission in the spectral lines of hydrogen. Yet even very deep spectra of nearby Type Ia supernovae have not revealed any hydrogen at all. It is difficult to reconcile these modern observational constraints with the single degenerate model, and for this reason the double degenerate model has become more popular.
The double-degenerate model
In the double degenerate model there are two white dwarfs instead of one. In any binary system, the two objects will lose orbital energy due to gravitational radiation and the orbit will shrink. Given enough time, the orbit of two white dwarfs will shrink to the point that they merge and coalesce into a single white dwarf. If the merger product exceeds the Chandrasekhar mass, the white dwarf may then collapse and explode as a Type Ia supernova.
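The "given enough time" can be made quantitative with Peters' classic formula for the gravitational-wave merger time of a circular binary. A minimal sketch (the masses and separation below are illustrative choices, not values from the article):

```python
# Peters' (1964) merger time for a circular binary inspiraling by
# gravitational-wave emission.  All constants are in SI units.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m
YEAR = 3.156e7      # seconds per year

def merger_time_yr(m1_msun, m2_msun, a_au):
    """Time in years for a circular binary to merge via GW emission:
    t = (5/256) c^5 a^4 / (G^3 m1 m2 (m1 + m2))."""
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    a = a_au * AU
    return (5.0 / 256.0) * C**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2)) / YEAR

# Two 0.7-solar-mass white dwarfs at a separation of 0.01 AU merge in
# a few billion years; note the steep a^4 dependence on separation.
print(f"{merger_time_yr(0.7, 0.7, 0.01):.1e} yr")
```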
One of the strongest pieces of evidence in favor of the double degenerate model lies in what is called the “delay time distribution.” If we were to take a galaxy that undergoes some burst of star formation and monitor it for Type Ia supernovae for billions of years, we would see a characteristic pattern. There would be many Type Ia supernovae at first, but the rate would fall off inversely with time. However, even many billions of years after the burst of star formation, a few Type Ia supernovae would still be observed. In the single degenerate model it is difficult to explain why Type Ia supernovae are still observed many billions of years after a burst of star formation, since after a few billion years the companion star should have evolved into a white dwarf itself. The single degenerate model therefore generically predicts a cutoff in the delay time distribution after three or four billion years. The double degenerate model, by contrast, naturally predicts that the observed rate of Type Ia supernovae should fall off inversely with time for the whole lifetime of the universe, because the gravitational-wave merger time is a steep function of the binary's initial separation.
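The 1/t falloff follows from the steep scaling of merger time with separation (t ∝ a⁴ for circular orbits): a population of binaries whose separations are spread log-uniformly (an assumed but conventional choice for illustration) merges at a rate inversely proportional to time. A quick Monte Carlo sketch:

```python
import math
import random

random.seed(1)

# Draw binary separations log-uniformly (an assumed, conventional choice)
# and map each to a merger time using t proportional to a^4.
N = 100_000
log_t = []
for _ in range(N):
    log_a = random.uniform(math.log(0.005), math.log(0.05))  # a in AU
    log_t.append(4.0 * log_a)  # log of merger time, up to a constant

# Histogram in equal bins of log t.  A flat histogram here means equal
# numbers of mergers per logarithmic interval of time, i.e. a merger
# rate that falls off as 1/t.
lo, hi = min(log_t), max(log_t)
bins = [0] * 5
for s in log_t:
    bins[min(int((s - lo) / (hi - lo) * 5), 4)] += 1
print(bins)  # five roughly equal counts
```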
The difficulty with the double degenerate model has historically been a problem of rates. One can look out in the universe, count up the number of white dwarf-white dwarf binaries at various orbital separations, and estimate the rate at which they should merge in the Galaxy. These estimates are generally at least an order of magnitude smaller than the observed rate of Type Ia supernovae. The double degenerate model also has more trouble than the single degenerate model in reproducing the observed abundance of magnesium.
The triple scenario
The newest model has come to be known as the triple scenario. The triple scenario is similar to the double degenerate model, but with a twist — rather than proposing that isolated WD-WD binaries are the progenitors of Type Ia supernovae, we suppose that these binaries have tertiary companions in a distant orbit. The tertiary doesn’t have to be a white dwarf. In fact, any old star will do. This situation is not as uncommon as one might suppose — about 10% of all systems in the Galaxy are triple systems.
The key to the triple scenario is that if the orbit of the tertiary is highly inclined relative to the inner WD-WD binary, it will induce Kozai-Lidov oscillations in the inner binary. Over time, these oscillations cause the eccentricity of the inner binary to swing from relatively small values to extremely large ones. At these extreme eccentricities, gravitational wave emission is much more efficient, greatly reducing the merger time. The accelerated merger of the white dwarfs in the inner binary due to the tertiary may therefore mitigate some of the rate problems of the traditional double degenerate model.
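A rough sketch of the numbers involved: at quadrupole order and in the test-particle limit, an initially circular inner orbit reaches a peak eccentricity that depends only on the mutual inclination, and the circular-orbit merger time is shortened by roughly a factor of (1 − e²)^(7/2). Both formulas below are standard approximations, not results from the article:

```python
import math

def e_max(i_deg):
    """Peak eccentricity of Kozai-Lidov oscillations for an initially
    circular inner orbit (quadrupole order, test-particle limit):
    e_max = sqrt(1 - (5/3) cos^2 i)."""
    cos_i = math.cos(math.radians(i_deg))
    return math.sqrt(max(0.0, 1.0 - (5.0 / 3.0) * cos_i**2))

def gw_time_factor(e):
    """Rough factor by which eccentricity shortens the GW merger time
    relative to a circular orbit of the same semi-major axis:
    t(e)/t(0) ~ (1 - e^2)^(7/2)."""
    return (1.0 - e * e) ** 3.5

# Below a mutual inclination of ~39 degrees the oscillations do not occur
# at all; near 90 degrees the merger time collapses by many orders of
# magnitude.
for i in (40, 60, 80, 89):
    e = e_max(i)
    print(f"i = {i:2d} deg  e_max = {e:.4f}  t/t_circ ~ {gw_time_factor(e):.1e}")
```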
In an even more exciting variant of the triple scenario, if the eccentricity of the inner orbit becomes large enough, its angular momentum can fluctuate stochastically due to the perturbative influence of the tertiary. Given enough time, this can cause the two white dwarfs to actually collide head-on.
The strongest piece of evidence in favor of these collisions is the appearance of double-peaked spectral lines in the late-time spectra of Type Ia supernovae. The two peaks are moving relative to each other at several thousand kilometers per second. Unfortunately, these double peaks have not been studied in much detail. The only paper published on them found them in three Type Ia supernovae out of a sample of twenty, and it is unclear whether those three are really typical.
The main issue with the triple scenario is how to construct a WD-WD binary with a highly inclined tertiary in the first place. One would imagine that if Kozai-Lidov oscillations can drive two white dwarfs to collide, then the same oscillations would have driven the two stars to coalesce while they were still on the main sequence. This concern is borne out by more careful calculations. It may be possible to get around this problem, however. If the triple started at low inclination, Kozai-Lidov oscillations would not occur and the two main sequence stars would not interact. Eventually, after the inner stars evolved into white dwarfs, the triple might be reoriented in some way — perhaps during the mass loss process, or if another star were to pass near the triple — and pushed to high inclination so that Kozai-Lidov oscillations could begin. Whether any of these processes can produce enough high-inclination triples is still an open question (and a focal point of my research), but preliminary results suggest that it is difficult to do so.
Where we are now
The current state of affairs in Type Ia supernova research is undoubtedly unsatisfying. There seems to be strong evidence both for and against the single degenerate model and the double degenerate model. Can it be both? Possibly. However, the case for a mix of models must contend with another observational fact: nearly all Type Ia supernovae fall on a very tight curve relating the luminosity of the supernova to the decay time of the light curve after maximum light, known as the Phillips relation. The scatter about the Phillips relation is very small and the curve appears very smooth (i.e., there are no breaks). It seems at least peculiar that a mix of two very different physical scenarios could produce a one-parameter curve with no break. But it is perhaps not so outrageous to imagine that the two models could both reproduce the Phillips relation. Since degenerate objects are relatively simple physical objects, it may just be that degenerate-object explosions generically look similar no matter the mechanism that led to the explosion. Alas, the SN Ia community is still far from answering these questions.