We live in a technological era that would be impossible without the ability to make quantitative predictions. And the first great example of quantitative prediction was to be found in Newton’s theory of universal gravitation. Starting from the hypothesis that the gravitational attraction between two masses is directly proportional to the product of the masses and inversely proportional to the square of the distance between them, Newton figured out that the orbit of a planet was an ellipse with the sun at one of the foci. Johannes Kepler had reached this conclusion from years of painstaking observations, but Newton was able to do so with no more than the assumption of gravitational attraction and the mathematical tool of calculus (which he had invented for this purpose).
Curiously, though the gravitational constant, G, was the first constant to be discovered, it is the least accurately known of all 13 constants. That is because of the extreme weakness of the gravitational force when compared with the other basic forces. Consider that though the mass of the earth is approximately 6 × 10²⁴ kilograms, by 1957—about three centuries after Newton left plague-ravaged London—humans overcame the earth’s gravitational attraction by using a simple chemical-powered rocket to place Sputnik, the first artificial satellite, in orbit.
By the end of the nineteenth century, technology and ingenuity had advanced so far that it was possible to measure the speed of light within 0.02 percent of its actual value. This enabled Albert Michelson and Edward Morley to demonstrate that the speed of light was independent of direction. This startling result led eventually to Einstein’s theory of relativity, the iconic intellectual achievement of the 20th century and perhaps of all time.
It is often said that nothing can travel faster than light. Indeed, nothing physical in the universe can travel faster than the speed of light, but even though our computers process information at near light speed, we still wait impatiently for our files to download. The speed of light is fast, but the speed of frustration is even faster.
Robert Boyle was perhaps the first great experimentalist and was responsible for what we now consider to be the essence of experimentation: vary one or more parameters, and see how other parameters change in response. It may seem obvious in retrospect, but hindsight, as the physicist Leo Szilard once remarked, is notably more accurate than foresight.
Boyle discovered the relationship between the pressure and volume of a gas, and a century later, the French scientists Jacques Charles and Joseph Gay-Lussac discovered the relationship between volume and temperature. This discovery was not simply a matter of donning a traditional white lab jacket (which hadn’t yet been invented) and performing a few measurements in comfortable surroundings. To obtain the required data, Gay-Lussac took a hot-air balloon to an altitude of 23,000 feet, possibly a world record at the time. The results of Boyle, Charles and Gay-Lussac could be combined to show that in a fixed quantity of a gas, the temperature was proportional to the product of pressure and volume. The constant of proportionality is known as the ideal gas constant.
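The combined result of Boyle, Charles and Gay-Lussac is the ideal gas law, PV = nRT. The sketch below is a minimal illustration; the scenario (one mole of gas at 0 degrees Celsius and atmospheric pressure) is an illustrative choice, not taken from the text.

```python
R = 8.314  # ideal gas constant, in joules per (mole * kelvin)

def ideal_gas_volume(n_moles, pressure_pa, temperature_k):
    """Volume in cubic meters, solved from PV = nRT."""
    return n_moles * R * temperature_k / pressure_pa

# One mole of gas at atmospheric pressure (101,325 Pa) and 273.15 K
# occupies about 22.4 liters -- the classic "molar volume" figure.
v_liters = ideal_gas_volume(1.0, 101_325, 273.15) * 1000
```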
Michael Faraday, who is far better known for his contributions to the study of electricity, was the first to suggest the possibility of producing colder temperatures by harnessing the expansion of a gas. Faraday had produced some liquid chlorine in a sealed tube, and when he broke the tube (and thereby lowered the pressure), the chlorine instantly transformed into a gas. Faraday noted that if lowering the pressure could transform a liquid into a gas, then perhaps applying pressure to gas could transform it into a liquid—with a colder temperature. That’s basically what happens in your refrigerator; gas is pressurized and allowed to expand, which cools the surrounding material.
Pressurization enabled scientists to liquefy oxygen, hydrogen and, by the beginning of the 20th century, helium. That brought us to within a few degrees of absolute zero. But heat is also motion, and a technique of slowing down atoms by using lasers has enabled us to come within millionths of a degree of absolute zero, which we now know to be approximately –459.67 degrees Fahrenheit. Absolute zero falls in the same category as the speed of light. Material objects can get ever so close, but they can never reach it.
The first key, the atomic theory, was discovered by John Dalton at the dawn of the 19th century. The renowned physicist Richard Feynman felt that the atomic theory was so important that he said, “If, in some cataclysm, all of the scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis that all things are made of atoms—little particles that move around in perpetual motion.”
These are the 92 (naturally occurring) elements that are the fundamental building blocks of all the matter in the universe. However, almost everything in the universe is a compound: a combination of different kinds of elements. Thus, the second key to modern chemistry was the discovery that each compound was a collection of identical molecules. For example, a batch of pure water is made of lots and lots of identical H2O molecules.
But just how many molecules? Getting the bookkeeping right so that we could predict the result of chemical reactions proved to be a major roadblock to the advancement of chemistry. The Italian chemist Amedeo Avogadro proposed that at the same temperature and pressure, equal volumes of different gases contained the same number of molecules. This hypothesis was largely unappreciated when it was first announced, but it enabled chemists to deduce the structure of molecules by measuring volumes at the start and finish of a chemical reaction. Avogadro’s number is defined to be the number of atoms in 12 grams of carbon-12, and is approximately six followed by 23 zeroes. (It’s also the number of molecules in a mole, a unit of measurement that chemists use to express the amount of a substance.)
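The bookkeeping the text describes can be sketched in a few lines. The water example below (18 grams of water, with a molar mass of about 18 grams per mole) is an illustrative choice, not from the original passage.

```python
AVOGADRO = 6.022e23  # molecules per mole (approximate)

def molecules_in_sample(mass_grams, molar_mass_grams):
    """Number of molecules = (moles in the sample) * Avogadro's number."""
    moles = mass_grams / molar_mass_grams
    return moles * AVOGADRO

# 18 grams of water is about one mole, so it contains
# roughly 6 x 10^23 identical H2O molecules.
n_water = molecules_in_sample(18.0, 18.0)
```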
It’s a good thing, too—the fact that electricity is so much stronger than gravity enables life to exist. Life is a complex of chemical and electrical reactions, but even the chemical reactions that power the motions of muscles or the digestion of food are, at their core, dependent upon electricity. Chemical reactions take place as the electrons at the outer edges of atoms shift their allegiance from one atom to another. In doing so, different compounds are formed as the atoms recombine. These shifts cause our nerves to send messages to our muscles, to enable us to move, or to our brain, where the information gathered by our senses is processed.
If electricity were weaker relative to gravity than it actually is, these electrically driven processes would be more difficult. It’s possible that evolution could produce a way for life to adapt to such a circumstance. But we’ll have to check in another universe to find out.
The solution to this problem was found by the Austrian physicist Ludwig Boltzmann, who discovered that there were many more ways for energy to be distributed throughout the molecules of a glass of tepid water than in a glass of hot water with ice cubes. Nature is a percentage player. It goes most often with the most likely way to do things, and Boltzmann’s constant quantifies this relationship. Disorder is much more common than order—there are many more ways for a room to be messy than clean (and it’s much easier for an ice cube to melt into disorder than for the ordered structure of an ice cube to simply appear).
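Boltzmann's counting argument can be made concrete with a toy model. The sketch below counts the ways to distribute indistinguishable energy quanta among molecules (a standard "stars and bars" calculation); the specific numbers of quanta and molecules are illustrative choices, not from the text, and the entropy formula S = k ln W is Boltzmann's.

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, joules per kelvin

def microstates(quanta, molecules):
    """Ways to distribute indistinguishable energy quanta among
    molecules: C(quanta + molecules - 1, molecules - 1)."""
    return math.comb(quanta + molecules - 1, molecules - 1)

def entropy(w):
    """Boltzmann's entropy formula, S = k * ln(W)."""
    return K_B * math.log(w)

# Spreading 10 quanta evenly across 10 molecules ("tepid") allows
# vastly more arrangements than concentrating them in 2 molecules
# ("hot spot plus ice"), so nature overwhelmingly favors the former.
spread = microstates(10, 10)  # 92,378 arrangements
piled = microstates(10, 2)    # only 11 arrangements
```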
Boltzmann’s entropy equation, which incorporates Boltzmann’s constant, also explains Murphy’s law: If anything can go wrong, it will. It isn’t that some malignant force is acting to make things go wrong for you. It’s just that the number of ways that things can go wrong greatly exceeds the number of ways that things can go right.
Strong words, indeed, but time proved Planck was absolutely correct. His startling revelation was that the universe packages energy in finite multiples of a smallest amount, much as the atomic theory proclaims that the universe packages matter in finite multiples of atoms. These small packages of energy are known as quanta, and Planck’s constant, abbreviated h, tells us the size of these packages.
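The size of a single package of light energy follows from Planck's relation E = hf (equivalently, E = hc divided by the wavelength). A minimal sketch; the choice of green light at 550 nanometers is illustrative, not from the text.

```python
H = 6.626e-34  # Planck's constant, joule-seconds (approximate)
C = 2.998e8    # speed of light, meters per second

def photon_energy(wavelength_m):
    """Energy of one quantum of light: E = h * f = h * c / wavelength."""
    return H * C / wavelength_m

# A single quantum of green light (~550 nm) carries only about
# 3.6e-19 joules -- which is why light looks continuous to us.
e_green = photon_energy(550e-9)
```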
Planck’s quantum theory has proved to be not only an explanation of the way the universe is structured, but also the spark of the technological revolution of the 20th and 21st centuries. Almost every advance in electronics, from lasers to computers to magnetic resonance imagers, derives from what the quantum theory tells us about the universe. Additionally, the quantum theory provides us with a highly counterintuitive picture of reality. Concepts such as parallel universes, once the stuff of science fiction (if envisioned at all), are now firmly entrenched, thanks to quantum theory, as legitimate explanations of the way things are—or at least the way they might be.
Einstein put forth his theory in the form of a system of equations. These equations were extremely difficult to solve, but Schwarzschild managed to find a solution to them in the midst of the carnage of a war. Not only that, but he also showed that for any given quantity of matter, there was a sphere so small that if all that matter were packed inside it, it would become a black hole. The radius of the sphere is known as the Schwarzschild radius. (There is no single Schwarzschild radius; it’s a different size for every possible mass.)
Popular treatments leave us with the impression that black holes are ominously small, dense and black. For example, the Schwarzschild radius for a mass the size of the earth is only about 1 centimeter. But surprisingly, much larger black holes can be diffuse. If an entire galaxy’s mass were distributed evenly within its Schwarzschild radius to create a black hole, the black hole’s density would be about 0.0002 times the density of the earth’s atmosphere.
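The earth figure can be checked directly from the Schwarzschild radius formula, r = 2GM/c². A minimal sketch; the values of G, c and the earth's mass are standard approximate constants, not from the text.

```python
G = 6.674e-11          # gravitational constant, m^3 / (kg * s^2)
C = 2.998e8            # speed of light, m/s
EARTH_MASS = 5.972e24  # kg (approximate)

def schwarzschild_radius(mass_kg):
    """r = 2GM / c^2: pack this mass inside this radius
    and it becomes a black hole."""
    return 2 * G * mass_kg / C**2

# For the earth's mass this comes out to just under a centimeter,
# matching the "about 1 centimeter" figure in the text.
r_earth_m = schwarzschild_radius(EARTH_MASS)
```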
Carl Sagan famously said, “We are all star-stuff.” That’s true, and it’s thanks to the efficiency of hydrogen fusion.
The universe is mostly hydrogen. To produce more complex elements—in particular, the ones that make life possible—there has to be a way to get those other elements from hydrogen. The universe does it with stars, which really are just very large balls of hydrogen, assembled through gravitational attraction. The pressure of this gravitational attraction is so strong that nuclear reactions start to occur, and hydrogen is transmuted into helium through fusion.
The amount of energy released in this process is given by Einstein’s famous equation E = mc². But only 0.7 percent of the hydrogen initially present actually becomes energy. Expressed as a decimal, this number is 0.007. This is the efficiency of hydrogen fusion, and the presence of life in the universe is very sensitive to this number.
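Combining the two numbers in this paragraph gives the energy yield per kilogram of hydrogen fused. A minimal sketch; the one-kilogram sample is an illustrative choice.

```python
C = 2.998e8         # speed of light, m/s
EFFICIENCY = 0.007  # fraction of hydrogen mass converted to energy

def fusion_energy_joules(hydrogen_kg):
    """Energy released fusing hydrogen to helium: E = (0.007 * m) * c^2."""
    return EFFICIENCY * hydrogen_kg * C**2

# Fusing just 1 kg of hydrogen releases roughly 6.3e14 joules --
# the mc^2 factor makes even a 0.7 percent conversion enormous.
e_per_kg = fusion_energy_joules(1.0)
```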
One of the first steps in the fusion of hydrogen is the production of deuterium (heavy hydrogen) and this would not happen if the efficiency of hydrogen fusion fell below 0.006. Stars would still form, but they would simply be large glowing balls of hydrogen. If the efficiency of hydrogen fusion were 0.008 or higher, then fusion would be too efficient. Hydrogen would become helium so quickly that the hydrogen in the universe would be used up. Since each water molecule contains two atoms of hydrogen, it would be impossible for water to form. Without water, life as we know it could not exist.
Life as we know it is based on the element carbon, but life also requires a large variety of other, heavier atoms. There is only one process in the universe that produces these heavier elements, and that is a supernova, the explosion of a giant star. A supernova explosion produces all those heavier elements and scatters them throughout the universe, enabling planets to form and life to evolve. Supernovas are rare but spectacular. The supernova that appeared in the sky in 1987 actually occurred more than 150,000 light-years from earth but was still visible to the naked eye.
The size of a star determines its fate. Stars the size of the sun live relatively quiet lives (though billions of years from now the sun will expand and engulf the earth). Stars slightly larger than the sun will become white dwarfs, intensely hot but small stars that will cool slowly and die. However, if a star exceeds a certain mass—the Chandrasekhar limit—then it is destined to become a supernova.
The Chandrasekhar limit is approximately 1.4 times the mass of the sun. Extraordinarily, Subrahmanyan Chandrasekhar discovered this as a 20-year-old student by combining the theories of stellar composition, relativity and quantum mechanics during a trip on a steamship from India to England.
There are really only two possibilities for the universe: Either it has always been here, or it had a beginning. The question as to which is right was resolved in the late 1960s, when conclusive evidence showed that the universe began in a giant explosion. The particulars of the big bang are almost impossible to comprehend. All the matter of the universe, all its stars and galaxies, originally was squished together inside a volume so small it makes the volume of a single hydrogen atom seem gargantuan in comparison.
If the universe began in a giant explosion, how long ago did that explosion take place, and how big is the universe today? It turns out that there is a surprising relationship between those two questions, a relationship that was first suspected in the 1920s as the result of observations by Edwin Hubble (for whom the famous space telescope is named) at the Mount Wilson observatory outside Los Angeles.
Hubble, using a technique similar to the one currently used by radar guns, discovered that the galaxies were generally receding from earth. Since there is nothing astronomically special about earth’s place in the universe, this must be taking place across the universe: All the galaxies are flying apart. The relationship between the speed at which a galaxy appears to be moving away and its distance from earth is given by Hubble’s constant. From this we can figure out that the big bang occurred approximately 13.7 billion years ago.
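A rough version of that calculation: if every galaxy's recession speed is proportional to its distance, then running the expansion backward gives an age of about 1/H₀. The sketch below ignores how the expansion rate has changed over time, and the value of the Hubble constant used (71 km/s per megaparsec) is an assumed round figure, not from the text.

```python
MPC_IN_METERS = 3.086e22   # meters per megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_age_years(h0_km_s_per_mpc):
    """Rough age of the universe as 1 / H0, assuming the
    expansion rate has been constant since the big bang."""
    h0_per_second = h0_km_s_per_mpc * 1000.0 / MPC_IN_METERS
    return 1.0 / h0_per_second / SECONDS_PER_YEAR

# With H0 around 71 km/s/Mpc, this gives roughly 13.8 billion
# years -- in the same ballpark as the 13.7 billion in the text.
age = hubble_age_years(71.0)
```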
If you launch a rocket from a planet, and you know the rocket’s speed, then knowing whether it can escape a planet’s gravity depends upon how massive the planet is. For instance, a rocket with enough speed to escape the moon might not have enough speed to escape the earth.
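The comparison in this paragraph follows from the escape-velocity formula, v = sqrt(2GM/r). A minimal sketch; the masses and radii used for the earth and moon are standard approximate values, not from the text.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 / (kg * s^2)

def escape_velocity(mass_kg, radius_m):
    """Minimum launch speed to escape a body's gravity:
    v = sqrt(2 * G * M / r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Escaping the earth takes about 11.2 km/s; escaping the moon
# only about 2.4 km/s -- so a rocket fast enough for the moon
# may well fall back to the earth.
v_earth = escape_velocity(5.972e24, 6.371e6)
v_moon = escape_velocity(7.342e22, 1.737e6)
```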
The fate of the universe depends upon the same kind of calculation. If the big bang imparted enough velocity to the galaxies, they could fly apart forever. But if it didn’t, then the galaxies would find themselves similar to rockets without escape velocity. They would be pulled back together in a big crunch—the reverse of the big bang.
It all depends upon the mass of the entire universe. We know that if there were approximately five atoms of hydrogen per cubic meter of space, that would be just enough matter for gravitational attraction to bring the galaxies back together in a big crunch. That tipping point is called Omega; it’s the ratio of the total amount of matter in the universe to the minimum amount of matter needed to cause the big crunch. If Omega is less than one, the galaxies will fly apart forever. If it’s more than one, then sometime in the far-distant future the big crunch will happen. Our best estimate at the moment is that Omega lies somewhere between 0.98 and 1.1. So the fate of the universe is still unknown.
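The "five atoms per cubic meter" figure is the critical density, which in standard cosmology is ρ = 3H₀²/(8πG). The sketch below checks it; the Hubble constant value (71 km/s per megaparsec) and the use of the proton mass as the mass of a hydrogen atom are assumptions, not from the text.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 / (kg * s^2)
MPC_IN_METERS = 3.086e22
PROTON_MASS = 1.673e-27  # kg, roughly the mass of one hydrogen atom

def critical_density_atoms_per_m3(h0_km_s_per_mpc):
    """Critical density rho = 3 * H0^2 / (8 * pi * G),
    expressed as hydrogen atoms per cubic meter."""
    h0 = h0_km_s_per_mpc * 1000.0 / MPC_IN_METERS
    rho = 3 * h0**2 / (8 * math.pi * G)
    return rho / PROTON_MASS

# With H0 around 71 km/s/Mpc this comes out to roughly 5-6 atoms
# per cubic meter, matching the figure in the text.
n_atoms = critical_density_atoms_per_m3(71.0)
```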
STEVE RAMSEY – ALBERTA -CANADA