Little Satis, Mrs. Satis and I have quite come to enjoy watching Cosmos on Sunday evenings. Despite its often over-sensationalized tone, the show covers some intriguing topics and presents them in a famously easy-to-understand way. However, as with all simplification, crucial details are sometimes missed or glossed over, leaving me, at least, with a deeper yearning for understanding.
Which is hopefully the point of the show.
Last night’s episode told the tale of Clair Cameron Patterson, who pioneered a now-common method of dating rocks to determine the age of the earth. To be fair, it wasn’t his idea but that of his principal investigator, Harrison Brown; yet Brown allowed Patterson to do the work as a PhD student, and kept him on even when uprooting and moving his lab to California, and when Patterson’s experiments seemed to be failing miserably.
The essence of the episode was in fact to showcase how Patterson inadvertently revealed the extent of lead toxicity in the environment around us, and fought for decades afterward to remove lead from paint, petrol and pipes. Because of this, the hard science of the experiments was, if not scaled back, certainly not dived into. And it left me with a burning question, which I’ll get to in a moment.
How on earth could this method of comparison work?
You see, to date the earth (or any piece of rock, for that matter), one of the primary methods is to measure the amount of uranium in the sample and compare it to the amount of lead. Why does this help? Because uranium is an unstable radioactive element, which decays into lead over a very long period of time (the half-life of 238U being almost 4.5 billion years). Because this decay occurs at a known, constant rate, comparing the amount of uranium to the amount of lead lets you calculate the age of the rock.
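To make the "constant rate" idea concrete, here is a minimal Python sketch of the decay law. I'm using the commonly quoted half-life of about 4.468 billion years for 238U; the exact figure (and everything else here) is my own assumption, not something from the episode:

```python
import math

HALF_LIFE_U238 = 4.468e9  # years; commonly quoted value for 238U
DECAY_CONST = math.log(2) / HALF_LIFE_U238  # "lambda", per year

def fraction_remaining(t_years):
    """Fraction of the original 238U atoms still undecayed after t years,
    from the exponential decay law N(t) = N0 * e^(-lambda * t)."""
    return math.exp(-DECAY_CONST * t_years)

# After exactly one half-life, half the uranium remains:
print(fraction_remaining(HALF_LIFE_U238))      # 0.5
print(fraction_remaining(2 * HALF_LIFE_U238))  # 0.25
```

Every 238U atom that decays eventually ends up (via a chain of intermediate elements) as a stable atom of 206Pb, which is what makes the uranium-to-lead comparison possible.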
From this point, the episode covered the discovery that Patterson’s initial measurements were skewed because of the fact that lead was present in the atmosphere and soil in unnatural quantities, and veered off into the politics of fighting the oil companies who were putting lead in fuel as an “anti-knock” agent. But it left me wondering: how on earth could this method of comparison work?
We can measure the amount of uranium and the amount of lead in a rock as it stands today, but how can we possibly know what proportions existed in that rock three or four billion years ago? After all, lead is as naturally-occurring an element as uranium, so presumably there was lead in primeval igneous rocks as well as uranium. If that was the case, how to tell the difference?
So I had to do a little research, and the answer, as far as I can tell, is this: the key was using zircon crystals. Zircon, it seems, actually rejects lead from its crystalline structure as it forms: in other words, zircon and lead don’t mix. This was presumably known at the time, which meant that the traces of lead found in zircon crystals could only have come from uranium decay. This in turn implies that when the crystal first formed, there was no lead in it at all. Since the rate of uranium decay is known, the amount of lead can be used to back-calculate how long the uranium has been decaying, and hence the age of the rock.
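As I understand it, that back-calculation is a one-liner: if every atom of 206Pb in the crystal came from decayed 238U, then Pb/U = e^(lambda*t) - 1, which rearranges to t = ln(1 + Pb/U) / lambda. A rough sketch, assuming the zircon started lead-free as described above:

```python
import math

HALF_LIFE_U238 = 4.468e9                   # years
LAMBDA_238 = math.log(2) / HALF_LIFE_U238  # decay constant, per year

def zircon_age(pb206_atoms, u238_atoms):
    """Age of a zircon crystal, assuming all of its 206Pb is radiogenic:
    Pb/U = e^(lambda*t) - 1  =>  t = ln(1 + Pb/U) / lambda."""
    return math.log(1 + pb206_atoms / u238_atoms) / LAMBDA_238

# Sanity check: equal numbers of lead and uranium atoms means exactly
# half the original uranium has decayed, i.e. one half-life has passed:
print(zircon_age(1.0, 1.0) / 1e9)  # ~4.468 billion years
```

The nice thing about this formulation is that only the *ratio* of lead to uranium matters, not the absolute amounts, so a tiny crystal works as well as a large one.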
Now, what Patterson did was a little more complex, comparing different isotopes of lead across a series of meteorite samples, but the principle is the same. The maths involved is a little beyond me at the moment (bringing back that rusty high-school physics!), but as I understand it, it nonetheless depends on the ratio of radiogenic (formed by radioactive decay) and non-radiogenic isotopes of lead.
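From what I've read, the isotope comparison boils down to a "lead-lead isochron": plot 207Pb/204Pb against 206Pb/204Pb for several samples, and the slope of the line depends only on the age, because 207Pb comes from 235U and 206Pb from 238U, which decay at different known rates. A rough numerical sketch of that idea; the decay constants and the present-day 238U/235U ratio of 137.88 are standard modern values I've looked up, not figures from the episode:

```python
import math

LAMBDA_238 = math.log(2) / 4.468e9  # 238U decay constant, per year
LAMBDA_235 = math.log(2) / 7.04e8   # 235U decay constant, per year
U238_U235_TODAY = 137.88            # present-day 238U/235U abundance ratio

def isochron_slope(t_years):
    """Slope of the 207Pb/204Pb vs 206Pb/204Pb line for samples of age t.
    It rises with age because 235U decays much faster than 238U."""
    return (math.expm1(LAMBDA_235 * t_years)
            / math.expm1(LAMBDA_238 * t_years)) / U238_U235_TODAY

def age_from_slope(slope, lo=1.0, hi=1.0e10):
    """Invert isochron_slope by bisection (the slope grows with age)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if isochron_slope(mid) < slope:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A measured slope near 0.617 corresponds to roughly 4.55 billion years,
# which is about the age Patterson arrived at:
print(age_from_slope(0.617) / 1e9)
```

The appeal of this method, as far as I can tell, is that it sidesteps my question about initial lead amounts: samples with different initial lead still fall on the same line, and only the slope matters.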
What I still don’t understand, however, is this: presumably the uranium in meteorite fragments would have been decaying at the same rate as the uranium on earth, so how could we know how much non-radiogenic lead was present in an iron meteorite fragment 4.5 billion years ago?
Any geophysicists out there that could help me with this one?