Vern Bender
Problem for Origin-of-Life Theories
A cell is often described as a factory — a quite extraordinary factory that can run autonomously and reproduce itself. The first cell required a lengthy list of components, layers of organization, and a large quantity of complex specified information, as described by previous episodes of Long Story Short. The latest entry in the series emphasizes yet another requirement for life: an abundance of specific repair mechanisms.
Damage to the “factory” of the cell occurs on two levels: damage to the stored information (either during replication or by natural degradation over time) and damage to the manufacturing machinery (either from faulty production of new machinery or damage incurred during use). Each type of damage requires specific repair mechanisms that demonstrate foresight — the expectation that damage will occur and the ability to recognize, repair and/or recycle only those components that are damaged. All known life requires these mechanisms.
Damage to Stored Information
The initial process of DNA replication is facilitated by a polymerase enzyme and produces approximately one error for every 10,000 to 100,000 added nucleotides.1 However, no known life can persist with such a high rate of error if left uncorrected.2 Fortunately, DNA replication in all life includes a subsequent proofreading step — a type of damage repair — that enhances the accuracy by a factor of 100 to 1,000. The current record holder for the sloppiest DNA replication in a living organism, under normal conditions, is Mycoplasma mycoides (and its human-modified relative, JCVI-syn3A), where only 1 in 33,000,000 nucleotides is incorrectly copied.3
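As a rough check of the arithmetic above, the cited figures can be combined directly. This is only a sketch in Python that multiplies the numbers quoted in the text; where an organism falls within the resulting range depends on its particular machinery:

```python
# Figures taken from the text above; this only combines them.
raw_error_low = 1 / 100_000   # best-case polymerase error rate
raw_error_high = 1 / 10_000   # worst-case polymerase error rate
proofread_min, proofread_max = 100, 1_000  # proofreading improvement factor

# Combined error rates after the proofreading step
best = raw_error_low / proofread_max    # lowest achievable error rate
worst = raw_error_high / proofread_min  # highest

record = 1 / 33_000_000  # cited record: ~1 error per 33 million nucleotides

print(f"post-proofreading range: {best:.1e} to {worst:.1e}")
print(f"cited record rate:       {record:.1e}")
print("record lies inside the range:", best <= record <= worst)
```

The cited record of 1 in 33,000,000 falls comfortably inside the range implied by the polymerase and proofreading figures.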
Following the replication of DNA, a daily barrage of DNA damage occurs under normal operating conditions. Life, therefore, requires sophisticated and highly specific DNA repair mechanisms. In humans, the DNA damage response is estimated to involve a hierarchical organization of 605 proteins in 109 assemblies.4 Efforts to make the simplest possible cell by stripping out all non-essential genes have successfully reduced DNA repair to a minimal set of six genes.5 But these six genes are encoded in thousands of base pairs of DNA, and the machinery to transcribe and translate those genes into the repair enzymes requires a minimum of 149 genes.6 Thus, the DNA code that is required to make DNA repair mechanisms easily exceeds 100,000 base pairs. Here, we encounter a great paradox, first identified in 1971 by Manfred Eigen7: DNA repair is essential to maintain DNA, but the genes that code for DNA repair could not have evolved unless the repair mechanisms were already present to protect the DNA.
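The base-pair total asserted above can be sanity-checked with a back-of-envelope calculation. The 1,000-bp average gene length below is a hypothetical round figure used only for illustration, not a number from the article; real gene lengths vary widely:

```python
AVG_GENE_BP = 1_000    # assumed average gene length (illustrative only)

repair_genes = 6        # minimal DNA-repair gene set, per the text
expression_genes = 149  # minimal transcription/translation set, per the text

total_bp = (repair_genes + expression_genes) * AVG_GENE_BP
print(f"approximate DNA needed: {total_bp:,} bp")
print("exceeds 100,000 bp:", total_bp > 100_000)
```

Even with this conservative round figure, the total lands above the 100,000-base-pair threshold stated in the text.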
Faulty Production of New Machinery
We used to think that the metabolic machinery in a cell always produced perfect products. But faulty products are unavoidable, resulting in the production of interfering or toxic garbage. All living organisms must therefore have machinery that identifies problems and either repairs or recycles the faulty products.
The cell’s central manufacturing machine is the ribosome, a marvel that produces functional proteins from strands of mRNA (with the help of many supporting molecules). Unfortunately, about 2-4 percent of mRNA strands get stuck in the ribosome during translation into a protein.8 Not only does this halt production, but it could result in production of a toxic, half-finished protein.
If the ribosomes could not get “unstuck,” life as we know it would end. In the process of self-replication, a single cell must produce an entire library of proteins, placing a heavy burden on the cell’s ribosomes. But with a 2-4 percent rate of stuck mRNA strands, the average cell would have each of its ribosomes get stuck at least five times before the cell could replicate.9 Therefore, life could never replicate, and metabolism would cease, unless this problem was solved.
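The “stuck at least five times” figure follows from simple expected-value arithmetic. In this sketch, the per-translation stall rate comes from the text, while the number of translation jobs each ribosome performs per cell cycle is a hypothetical illustrative value:

```python
stall_low, stall_high = 0.02, 0.04  # 2-4 percent stall rate, per the text
jobs_per_ribosome = 250             # assumed translations per ribosome (illustrative)

# Expected number of stalls per ribosome before the cell replicates
expected_low = stall_low * jobs_per_ribosome
expected_high = stall_high * jobs_per_ribosome
print(f"expected stalls per ribosome: {expected_low:.0f} to {expected_high:.0f}")
```

Under these assumptions each ribosome would stall five to ten times per cell cycle, matching the "at least five times" claim.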
Fortunately, all forms of life, even the simplest,9 are capable of trans-translation, typically involving a three-step process. First, a molecule combining transfer and messenger RNA, along with two helper molecules (SmpB and EF-Tu), recognizes that an mRNA is stuck in the ribosome and attaches a label to the half-formed protein. This label, called a degron, is essentially a poly-alanine peptide. Second, the condemned protein is recognized, degraded, and recycled by one of many proteases. Finally, the mRNA must also be labeled and recycled to keep it from clogging other ribosomes. In some bacteria,10 a pyrophosphohydrolase enzyme modifies the end of the mRNA, labeling it for destruction. An RNase (another enzyme) then recognizes this label, grabs hold of the mRNA, and draws it close to its magnesium ion, which causes cleavage of the RNA. Another RNase then finishes the job, breaking the mRNA into single nucleotides, which can be reused.
The required presence of tools that can destroy proteins and RNA also comes with a requirement that those tools are highly selective. If these tools evolved, one would expect the initial versions to be non-selective, destroying any proteins or RNA within reach, extinguishing life and blocking the process of evolution.11
Note that the set of tools for trans-translation and for protein and RNA recycling are all stored in DNA, which must be protected by repair mechanisms. And these tools cannot be produced without ribosomes, but the ribosomes cannot be unstuck without the action of trans-translation. Thus, we encounter another case of circular causality.
Damage Incurred During Use
The normal operation of enzymes or metabolites like co-enzymes or cofactors involves chemical reactions that follow specific paths. Deviations from the desired paths can occur from interferences like radiation, oxidative stress, or encountering the wrong “promiscuous” enzyme. These deviations result in rogue molecules that interfere with metabolism or are toxic to the cell. As a result, even the simplest forms of life require several metabolic repair mechanisms:
“There can be little room left to doubt that metabolite damage and the systems that counter it are mainstream metabolic processes that cannot be separated from life itself.”
“It is increasingly evident that metabolites suffer various kinds of damage, that such damage happens in all organisms and that cells have dedicated systems for damage repair and containment.”
As a relatively simple example of a required repair mechanism, even the simplest known cell (JCVI-syn3A) has to deal with a sticky situation involving sulfur. Several metabolic reactions require molecules with a thiol group — sulfur bonded to hydrogen and to an organic molecule. The organism needs to maintain its thiol groups, but they have an annoying tendency to cross-link (i.e., two thiol groups create a disulfide bond, fusing the two molecules together). Constant maintenance is required to break up this undesired linking. Even the simplest known cell requires two proteins (TrxB/JCVISYN3A_0819 and TrxA/JCVISYN3A_0065) to restore thiol groups and maintain metabolism.12 Because the repair proteins are themselves a product of the cell’s metabolism, this creates another path of circular causality: you can’t have prolonged metabolism without the repair mechanisms, but you can’t make the repair mechanisms without metabolism.
Alongside life’s required repair mechanisms, all forms of life include damage prevention mechanisms. These mechanisms can destroy rogue molecules, stabilize molecules that are prone to going rogue, or guide chemical reactions toward less harmful outcomes. As an example, when DNA is replicated, available monomers of the four canonical nucleotides (G, C, T, and A) are incorporated into the new strand. Some of the cell’s normal metabolites, like dUTP (deoxyuridine triphosphate), are similar to a canonical nucleotide and can be erroneously incorporated into DNA. Even the simplest cell (once again, JCVI-syn3A) includes an enzyme (deoxyuridine triphosphate pyrophosphatase) to hydrolyze dUTP and prevent formation of corrupted DNA.6
Summing Up the Evidence
Those who promote unguided abiogenesis simply brush off these required mechanisms, claiming that life started as simplified “proto-cells” that didn’t need repair. But there is no evidence that any form of life could persist or replicate without these repair mechanisms. And the presence of the repair mechanisms invokes several examples of circular causality — quite a conundrum for unintelligent, natural processes alone. Belief that simpler “proto-cells” didn’t require repair mechanisms requires blind faith, set against the prevailing scientific evidence.
… and his “high-quality colleagues” in the ID research community have exposed the problems that Darwinists need to be working on to solve.
Settled Science
About Dr. Meyer, he says:
I encountered people like Stephen Meyer, who were not phony scientists, pretending to do the work. They were actually very good at what they did. And I believe Stephen Meyer is motivated by a religious motivation, but we rarely ask the question when somebody takes up science, “What are you really in it for? Are you in it for the fame?” That’s not a legitimate challenge to somebody’s work.
And the fact is, Stephen Meyer is very good at what he does. He may be motivated by the thought that at the end of the search, he’s going to find Jesus. But in terms of the quality of his arguments, I was very impressed when I met him: his love for biology, his love for creatures, the weirder the better, he likes them, right? So that looked very familiar to me.
Motivation should be irrelevant. The quality of the science is what counts. I would add, none of us is a mind reader and we can never know what someone else’s motivation really is. Says Weinstein, ID clearly is about science, not religion:
It also became obvious to me in interacting with Stephen Meyer and many of his high-quality colleagues that they’re actually motivated, for whatever reason, to do the job that we are supposed to be motivated to do inside of biology. They’re looking for cracks in the theory. Things that we haven’t yet explained. And they’re looking for those things for their own reasons, but the point is we’re supposed to be figuring out what parts of the stories we tell ourselves aren’t true, because that’s how we get smarter over time.
Darwinists, says Weinstein, are shrinking from a fight they wrongly feel they shouldn’t have to bother with: If you decide… that your challengers aren’t entitled to a hearing because they’re motivated by the wrong stuff, then you do two things. One, you artificially stunt the growth of your field, and you create a more vibrant realm where your competitors have a better field to play in because you’ve left a lot of holes in the theory ready to be identified, which I think is what’s going on. The better intelligent design folks are finding real questions raised by Darwinism, and the Darwinists, instead of answering those questions, [are] deciding it’s not worthy of their time. And that is putting us on a collision course.
“Giving Up Darwin”
Heying cites the 2019 public defection from Darwinism of Yale computer scientist David Gelernter, who pointed to Meyer’s writing as his primary reason for “Giving Up Darwin.” She admits she hasn’t kept up with the challenges from ID, but agrees that she should, because challenges like those from ID can make the evolutionary establishment “smarter.” Ignoring the challenges makes the establishment dumber — stagnant and self-satisfied. “I’m not familiar with most of the arguments that are coming out of the intelligent design movement. It hasn’t felt like it was my obligation to be familiar with them. Perhaps what you’re arguing is it is our responsibility.”
Weinstein, unlike Coyne or Dawkins, is up for talking and debating with ID proponents:
I’m open to that battle and I expect that if we pursue that question, what we’re going to find is, oh, there’s a layer of Darwinism we didn’t get and it’s going to turn out that the intelligent design folks are going to be wrong. But they will have played a very noble and important role in the process of us getting smarter. And look, I think Stephen Meyer at the end of the day, I don’t think he’s going to surrender to the idea that there’s no God at the end of this process. But if we find a layer of Darwinism that hasn’t been spotted, that answers his question, I think he’s going to be delighted with it the same way he’s delighted by the prospect of seeing whale sharks.
Again, these are remarkable concessions from a couple of scientists who are not at all looking to make the leap to ID, but who understand that intelligent design, not Darwinism, is currently at biology’s cutting edge.
The snake Najash rionegrina, from the Late Cretaceous of Patagonia, is one of the oldest fossil snakes known to science. It was found in terrestrial sediments and shows a well-defined sacrum with a pelvis connected to the spine and functional hind legs. Therefore it was considered as supporting an origin of snakes from burrowing rather than aquatic ancestors (Groshong 2006). I had reported on the highly controversial and hotly debated topic of snake origins in a previous article (Bechly 2023), where you can find links to all the relevant scientific literature.
But there was another open question concerning the origin of snakes: Did their distinct body plan evolve gradually, as predicted by Darwinian evolution, or did snakes appear abruptly on the scene, as predicted by intelligent design theory? Earlier this year a seminal new study was published by a team of researchers from the University of Michigan and Stony Brook University in the prestigious journal Science (Title et al. 2024). This study brought important new insights through the mathematical and statistical modelling of the most comprehensive evolutionary tree of snakes and lizards, based on a comparative analysis of the traits of 60,000 museum specimens and the partial sequencing of the genomes of 1,000 species (SBU 2024, Osborne 2024). The study found that all the characteristic traits of the snake body plan, such as the flexible skull with articulated jaws, the loss of limbs, and the elongated body with hundreds of vertebrae, appeared in a short window of time about 100-110 million years ago. Snakes became “evolutionary winners” because they evolved at “breakneck pace” (Wilcox 2024), which the senior author of the study explained with the ad hoc hypothesis that “snakes have an evolutionary clock that ticks a lot faster than many other groups of animals, allowing them to diversify and evolve at super quick speeds” (Osborne 2024). Well, that is not an explanation at all, but just a rephrasing of the problem. How could such super-quick evolution be accommodated within the framework of Darwinian population genetics and thus overcome the waiting time problem? After all, the complex re-engineering of a body plan requires coordinated mutations that need time to occur and spread in an ancestral population. Did anybody bother to do the actual math to check whether such a proposed supercharged evolution is even feasible, given the available window of time and reasonable parameters for mutation rates, effective population sizes, and generation turnover rates? Of course not.
We just have the usual sweeping generalizations and fancy just-so stories.
My prediction is that this will prove to be another good example of the fatal waiting time problem for neo-Darwinism. We can add the origin of snakes to the large number of abrupt appearances in the history of life (Bechly 2024), and I am happy to embrace the name coined by the authors of the new study …
Long before modern technology, students of biology compared the workings of life to machines.1 In recent decades, this comparison has become stronger than ever. As a paper in Nature Reviews Molecular Cell Biology states, “Today biology is revealing the importance of ‘molecular machines’ and of other highly organized molecular structures that carry out the complex physico-chemical processes on which life is based.”2 Likewise, a paper in Nature Methods observed that “[m]ost cellular functions are executed by protein complexes, acting like molecular machines.”
What are Molecular Machines?
A molecular machine, according to an article in the journal Accounts of Chemical Research, is “an assemblage of parts that transmit forces, motion, or energy from one to another in a predetermined manner.”4 A 2004 article in Annual Review of Biomedical Engineering asserted that “these machines are generally more efficient than their macroscale counterparts,” further noting that “[c]ountless such machines exist in nature.”5 Indeed, a single research project in 2006 reported the discovery of over 250 new molecular machines in yeast.6
Molecular machines have posed a stark challenge to those who seek to understand them in Darwinian terms as the products of an undirected process. In his 1996 book Darwin’s Black Box: The Biochemical Challenge to Evolution, biochemist Michael Behe explained the surprising discovery that life is based upon machines: Shortly after 1950 science advanced to the point where it could determine the shapes and properties of a few of the molecules that make up living organisms. Slowly, painstakingly, the structures of more and more biological molecules were elucidated, and the way they work inferred from countless experiments. The cumulative results show with piercing clarity that life is based on machines — machines made of molecules! Molecular machines haul cargo from one place in the cell to another along “highways” made of other molecules, while still others act as cables, ropes, and pulleys to hold the cell in shape. Machines turn cellular switches on and off, sometimes killing the cell or causing it to grow. Solar-powered machines capture the energy of photons and store it in chemicals. Electrical machines allow the current to flow through nerves. Manufacturing machines build other molecular machines, as well as themselves. Cells swim using machines, copy themselves with machinery, ingest food with machinery. In short, highly sophisticated molecular machines control every cellular process. Thus, the details of life are finely calibrated and the machinery of life is enormously complex.7
Behe then posed the question, “Can all of life be fit into Darwin’s theory of evolution?,” and answered: “The complexity of life’s foundation has paralyzed science’s attempt to account for it; molecular machines raise an as-yet impenetrable barrier to Darwinism’s universal reach.”
Even those who disagree with Behe’s answer to that question have marveled at the complexity of molecular machines. In 1998, former president of the U.S. National Academy of Sciences Bruce Alberts wrote the introductory article to an issue of Cell, one of the world’s top biology journals, celebrating molecular machines. Alberts praised the “speed,” “elegance,” “sophistication,” and “highly organized activity” of “remarkable” and “marvelous” structures inside the cell. He went on to explain what inspired such words:
The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. . . . Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.9
Likewise, in 2000 Marco Piccolini wrote in Nature Reviews Molecular Cell Biology that “extraordinary biological machines realize the dream of the seventeenth-century scientists … that ‘machines will be eventually found not only unknown to us but also unimaginable by our mind.’” He notes that modern biological machines “surpass the expectations of the early life scientists.”
A few years later, a review article in the journal Biological Chemistry demonstrated the difficulty evolutionary scientists have faced when trying to understand molecular machines. Essentially, they must deny their scientific intuitions when trying to grapple with the complexity of the fact that biological structures appear engineered to the schematics of blueprints:
Molecular machines, although it may often seem so, are not made with a blueprint at hand. Yet, biochemists and molecular biologists (and many scientists of other disciplines) are used to thinking as an engineer, more precisely, a reverse engineer. But there are no blueprints … ‘Nothing in biology makes sense except in the light of evolution’: we know that Dobzhansky (1973) must be right. But our mind, despite being a product of tinkering itself, strangely wants us to think like engineers.
But do molecular machines make sense in the light of undirected Darwinian evolution? Does it make sense to deny the fact that machines show all signs that they were designed? Michael Behe argues that, in fact, molecular machines meet the very test that Darwin posed to falsify his theory, and indicate intelligent design.
Darwin knew his theory of gradual evolution by natural selection carried a heavy burden: “If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.”
… What type of biological system could not be formed by “numerous successive slight modifications”? Well, for starters, a system that is irreducibly complex. By irreducibly complex I mean a single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any of the parts causes the system to effectively cease functioning.
Molecular machines are highly complex, and in many cases we are just beginning to understand their inner workings. As a result, while we know that many complex molecular machines exist, to date only a few have been studied in sufficient detail for biologists to test directly for irreducible complexity through genetic knockout experiments or mutational sensitivity tests. What follows is a non-exhaustive list briefly describing 40 molecular machines identified in the scientific literature. The first section covers molecular machines that scientists have argued show irreducible complexity. The second section discusses molecular machines that may be irreducibly complex, but have not yet been studied in enough detail by biochemists to make a conclusive argument.
I. MOLECULAR MACHINES THAT SCIENTISTS HAVE ARGUED SHOW IRREDUCIBLE COMPLEXITY
1. BACTERIAL FLAGELLUM
The flagellum is a rotary motor in bacteria that drives a propeller to spin, much like an outboard motor, powered by ion flow to drive rotary motion. Capable of spinning up to 100,000 rpm,13 one paper in Trends in Microbiology called the flagellum “an exquisitely engineered chemi-osmotic nanomachine; nature’s most powerful rotary motor, harnessing a transmembrane ion-motive force to drive a filamentous propeller.”14 Due to its motor-like structure and internal parts, one molecular biologist wrote in the journal Cell, “[m]ore so than other motors, the flagellum resembles a machine designed by a human.”15 Genetic knockout experiments have shown that the E. coli flagellum is irreducibly complex with respect to its approximately 35 genes.16 Despite the flagellum being one of the best-studied molecular machines, a 2006 review article in Nature Reviews Microbiology admitted that
“the flagellar research community has scarcely considered how these systems have evolved.”
2. EUKARYOTIC CILIUM
The cilium is a hair-like, or whip-like, structure that is built upon a system of microtubules, typically with nine outer microtubule pairs and two inner microtubules. The microtubules are connected with nexin arms, and a paddling-like motion is instigated by dynein motors.18 These machines perform many functions in eukaryotes, such as allowing sperm to swim or removing foreign particles from the throat. Michael Behe observes that the “paddling” function of the cilium will fail if it is missing any microtubules or connecting arms, or lacks sufficient dynein motors, making it irreducibly complex.
3. AMINOACYL-TRNA SYNTHETASES (AARS)
These enzymes are responsible for charging tRNAs with the proper amino acid so they can accurately participate in the process of translation. In this function, aaRSs are an “aminoacylation machine.”
Most cells require twenty different aaRS enzymes, one for each amino acid, without which the transcription/translation machinery could not function properly.21 As one article in Cell Biology International stated: “The nucleotide sequence is also meaningless without a conceptual translative scheme and physical ‘hardware’ capabilities. Ribosomes, tRNAs, aminoacyl tRNA synthetases, and amino acids are all hardware components of the Shannon message ‘receiver’. But the instructions for this machinery is itself coded in DNA and executed by protein ‘workers’ produced by that machinery. Without the machinery and protein workers, the message cannot be received and understood. And without genetic instruction, the machinery cannot be assembled.”22 Arguably, these components form an irreducibly complex system.23
4. BLOOD CLOTTING CASCADE
The blood coagulation system “is a typical example of a molecular machine, where the assembly of substrates, enzymes, protein cofactors and calcium ions on a phospholipid surface markedly accelerates the rate of coagulation.”24 According to a paper in BioEssays, “the molecules interact with cell surface (molecules) and other proteins to assemble reaction complexes that can act as a molecular machine.”25 Michael Behe argues, based upon experimental data, that the blood clotting cascade has an irreducible core regarding its components after its initiation pathways converge.26
5. RIBOSOME
The ribosome is an “RNA machine”27 that “involves more than 300 proteins and RNAs”28 to form a complex where messenger RNA is translated into protein, thereby playing a crucial role in protein synthesis in the cell. Craig Venter, a leader in genomics and the Human Genome Project, has called the ribosome “an incredibly beautiful complex entity” which requires a “minimum for the ribosome about 53 proteins and 3 polynucleotides,” leading some evolutionary biologists to fear that it may be irreducibly complex.
6. ANTIBODIES AND THE ADAPTIVE IMMUNE SYSTEM
Antibodies are “the ‘fingers’ of the blind immune system — they allow it to distinguish a foreign invader from the body itself.”30 But the processes that generate antibodies require a suite of molecular machines.31 Lymphocyte cells in the blood produce antibodies by mixing and matching portions of special genes to produce over 100,000,000 varieties of antibodies.32 This “adaptive immune system” allows the body to tag and destroy most invaders. Michael Behe argues that this system is irreducibly complex because many components must be present for it to function: “A large repertoire of antibodies won’t do much good if there is no system to kill invaders. A system to kill invaders won’t do much good if there’s no way to identify them. At each step we are stopped not only by local system problems, but also by requirements of the integrated system.”
II. ADDITIONAL MOLECULAR MACHINES
7. SPLICEOSOME
The spliceosome removes introns from RNA transcripts prior to translation. According to a paper in Cell, “In order to provide both accuracy to the recognition of reactive splice sites in the pre-mRNA and flexibility to the choice of splice sites during alternative splicing, the spliceosome exhibits exceptional compositional and structural dynamics that are exploited during substrate-dependent complex assembly, catalytic activation, and active site remodeling.”34 A 2009 paper in PNAS observed that “[t]he spliceosome is a massive assembly of 5 RNAs and many proteins”35 — another paper suggests “300 distinct proteins and five …
“Fine-tuning” refers to various features of the universe that are necessary conditions for the existence of complex life. Such features include the initial conditions and “brute facts” of the universe as a whole, the laws of nature or the numerical constants present in those laws (such as the gravitational force constant), and local features of habitable planets (such as a planet’s distance from its host star). These features must fall within a very narrow range of values for chemical-based life to be possible. Some popular examples are subject to dispute, and there are some complicated philosophical debates about how to calculate probabilities. Nevertheless, there are many well-established examples of fine-tuning, which are widely accepted even by scientists who are generally hostile to theism and design. For instance, Stephen Hawking has admitted: “The remarkable fact is that the values of these numbers [the constants of physics] seem to have been very finely adjusted to make possible the development of life.” (A Brief History of Time, p. 125) Here are the most celebrated and widely accepted examples of fine-tuning for the existence of life:
COSMIC CONSTANTS
Gravitational force constant
Electromagnetic force constant
Strong nuclear force constant
Weak nuclear force constant
Cosmological constant
INITIAL CONDITIONS AND “BRUTE FACTS”
Initial distribution of mass energy
Ratio of masses for protons and electrons
Velocity of light
Mass excess of neutron over proton
“LOCAL” PLANETARY CONDITIONS
Steady plate tectonics with right kind of geological interior
Right amount of water in crust
Large moon with right rotation period
Proper concentration of sulfur
Right planetary mass
Near inner edge of circumstellar habitable zone
Low-eccentricity orbit outside spin-orbit and giant planet resonances
A few, large Jupiter-mass planetary neighbors in large circular orbits
Outside spiral arm of galaxy
Near co-rotation circle of galaxy, in circular orbit around galactic center
Within the galactic habitable zone
During the cosmic habitable age
EFFECTS OF PRIMARY FINE-TUNING PARAMETERS
The polarity of the water molecule
COSMIC CONSTANTS
Gravitational force constant (large-scale attractive force; holds people on planets, and holds planets, stars, and galaxies together) — too weak, and planets and stars cannot form; too strong, and stars burn up too quickly.
Electromagnetic force constant (small-scale attractive and repulsive force; holds electrons and atomic nuclei together in atoms) — if it were much stronger or weaker, we wouldn’t have stable chemical bonds.
Strong nuclear force constant (small-scale attractive force; holds nuclei of atoms together, which otherwise repulse each other because of the electromagnetic force) — if it were weaker, the universe would have far fewer stable chemical elements, eliminating several that are essential to life.
Weak nuclear force constant (governs radioactive decay) — if it were much stronger or weaker, life-essential stars could not form. (These are the four “fundamental forces.”)
Cosmological constant (which controls the expansion speed of the universe) refers to the balance of the attractive force of gravity with a hypothesized repulsive force of space observable only at very large size scales. It must be very close to zero; that is, these two forces must be nearly perfectly balanced. To get the right balance, the cosmological constant must be fine-tuned to something like 1 part in 10^120. If it were just slightly more positive, the universe would fly apart; slightly negative, and the universe would collapse.
As with the cosmological constant, the ratios of the other constants must be fine-tuned relative to each other. Since the logically possible range of strengths of some forces is potentially infinite, to get a handle on the precision of fine-tuning, theorists often think in terms of the range of force strengths, with gravity the weakest and the strong nuclear force the strongest. The strong nuclear force is 10^40 times stronger than gravity; that is, ten thousand billion billion billion billion times the strength of gravity. Think of that range as represented by a ruler stretching across the entire observable universe, about 15 billion light-years. If we increased the strength of gravity by just 1 part in 10^34 of the range of force strengths (the equivalent of moving less than one inch on the universe-long ruler), the universe couldn’t have life-sustaining planets.
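The ruler analogy can be checked numerically. This sketch converts a 15-billion-light-year ruler into inches and measures how far a shift of 1 part in 10^34 of the full range would move along it; the light-year and inch conversion factors are standard values, not figures from the article:

```python
LY_IN_METERS = 9.4607e15   # meters in one light-year (standard value)
INCH_IN_METERS = 0.0254    # meters in one inch

ruler_ly = 15e9            # ruler spanning the observable universe, per the text
ruler_inches = ruler_ly * LY_IN_METERS / INCH_IN_METERS

shift_fraction = 1e-34     # 1 part in 10^34 of the full range
shift_inches = ruler_inches * shift_fraction

print(f"ruler length: {ruler_inches:.2e} inches")
print(f"shift of 1 part in 10^34: {shift_inches:.2e} inches")
print("less than one inch:", shift_inches < 1)
```

The shift works out to a tiny fraction of an inch on the cosmic ruler, consistent with the "less than one inch" claim in the text.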
INITIAL CONDITIONS AND “BRUTE FACTS”
Initial Conditions. Besides physical constants, there are initial or boundary conditions, which describe the conditions present at the beginning of the universe. Initial conditions are independent of the physical constants. One way of summarizing the initial conditions is to speak of the extremely low entropy (that is, a highly ordered) initial state of the universe. This refers to the initial distribution of mass energy. In The Road to Reality, physicist Roger Penrose estimates that the odds of the initial low-entropy state of our universe occurring by chance alone are on the order of 1 in 10^(10^123). This ratio is vastly beyond our powers of comprehension. Since we know a life-bearing universe is intrinsically interesting, this ratio should be more than enough to raise the question: Why does such a universe exist? If someone is unmoved by this ratio, then they probably won’t be persuaded by additional examples of fine-tuning.
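To get a feel for why Penrose’s ratio is “vastly beyond our powers of comprehension,” consider a common illustration (the round figures here, such as ~10^80 atoms in the observable universe, are widely used estimates and are not from the text): merely writing the number out would require more digits than there are atoms available to write them on.

```python
# Illustrative comparison using assumed round figures (not from the text).
# Penrose's estimate is 1 in 10^(10^123): writing that number in full
# would take 10^123 digits; the observable universe holds ~10^80 atoms.

digits_needed_exp = 123   # number of digits required is 10^123
atoms_exp = 80            # ~10^80 atoms in the observable universe

# Even writing one digit per atom, we fall short by a factor of 10^(123-80).
shortfall_exp = digits_needed_exp - atoms_exp
print(f"Digits outnumber atoms by a factor of 10^{shortfall_exp}")
```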
In addition to initial conditions, there are a number of other, well-known features about the universe that are apparently just brute facts. And these too exhibit a high degree of fine-tuning. Among the fine-tuned (apparently) “brute facts” of nature are the following:
Ratio of masses for protons and electrons — If it were slightly different, building blocks for life such as DNA could not be formed.
Velocity of light — If it were larger, stars would be too luminous. If it were smaller, stars would not be luminous enough.
Mass excess of neutron over proton — if it were greater, there would be too few heavy elements for life. If it were smaller, stars would quickly collapse as neutron stars or black holes.
“LOCAL” PLANETARY CONDITIONS
But even in a universe fine-tuned at the cosmic level, local conditions can still vary dramatically. Even in this fine-tuned universe, the vast majority of locations in the universe are unsuited for life. In The Privileged Planet, Guillermo Gonzalez and Jay Richards identify 12 broad, widely recognized fine-tuning factors required to build a single, habitable planet. All 12 factors can be found together on Earth. There are probably many more such factors. In fact, most of these factors could be split out to make sub-factors, since each of them contributes in multiple ways to a planet’s habitability.
Steady plate tectonics with right kind of geological interior (which allows the carbon cycle and generates a protective magnetic field). If the Earth’s crust were significantly thicker, plate tectonic recycling could not take place.
Right amount of water in crust (which provides the universal solvent for life).
Large moon with right planetary rotation period (which stabilizes a planet’s tilt and contributes to tides). In the case of the Earth, the gravitational pull of its moon stabilizes the angle of its axis at a nearly constant 23.5 degrees. This ensures relatively temperate seasonal changes and gives the Earth the only climate in the solar system mild enough to sustain complex living organisms.
Proper concentration of sulfur (which is necessary for important biological processes).
Right planetary mass (which allows a planet to retain the right type and right thickness of atmosphere). If the Earth were smaller, its magnetic field would be weaker, allowing the solar wind to strip away our atmosphere, slowly transforming our planet into a dead, barren world much like Mars.
Near inner edge of circumstellar habitable zone (which allows a planet to maintain the right amount of liquid water on the surface). If the Earth were just 5% closer to the Sun, it would be subject to the same fate as Venus, a runaway greenhouse effect, with temperatures rising to nearly 900 degrees Fahrenheit. Conversely, if the Earth were about 20% farther from the Sun, it would experience runaway glaciation of the kind that has left Mars sterile.
Low-eccentricity orbit outside spin-orbit and giant planet resonances (which allows a planet to maintain a safe orbit).
A few large Jupiter-mass planetary neighbors in large circular orbits (which protect the habitable zone from too many comet bombardments). If the Earth were not protected by the gravitational pulls of Jupiter and Saturn, it would be far more susceptible to collisions with devastating comets that would cause mass extinctions. As it is, the larger planets in our solar system provide significant protection to the Earth from the most dangerous comets.
Outside spiral arm of galaxy (which allows a planet to stay safely away from supernovae).
Near co-rotation circle of galaxy, in circular orbit around galactic center (which enables a planet to avoid traversing dangerous parts of the galaxy).
Within the galactic habitable zone (which allows a planet to have access to heavy elements while being safely away from the dangerous galactic center).
During the cosmic habitable age (when heavy elements and active stars exist without too high a concentration of dangerous radiation events).
This is a very basic list of “ingredients” for building a single, habitable planet. At the moment, we have only rough probabilities for most of these items. For instance, we know that less than ten percent of stars even in the Milky Way Galaxy are within the galactic habitable zone. And the likelihood of getting just the right kind of moon by chance is almost certainly very low, though we have no way of calculating just how low. What we can say is that the vast majority of possible locations in the visible universe, even within otherwise habitable galaxies, are incompatible with life.
It’s important to distinguish this local “fine-tuning” from cosmic fine-tuning. With cosmic fine-tuning, we’re comparing the actual universe as a whole with other possible but non-actual universes. And though theorists sometimes postulate multiple universes to try to avoid the embarrassment of a fine-tuned universe, we have no direct evidence that other universes exist. When dealing with our local planetary environment, however, we’re comparing it with other known or theoretically possible locations within the actual universe. That means that, given a large enough universe, perhaps you could get these local conditions at least once just by chance (though it would be “chance” tightly constrained by cosmic fine-tuning).
So does that mean that evidence of local fine-tuning is useless for inferring design? No. Gonzalez and Richards argue that we can still discern a purposeful pattern in local fine-tuning. As it happens, the same cosmic and local conditions, which allow complex observers to exist, also provide the best setting, overall, for scientific discovery. So complex observers will find themselves in the best overall setting for observing. You would expect this if the universe were designed for discovery, but not otherwise. So the fine-tuning of physical constants, cosmic initial conditions, and local conditions for habitability, suggests that the universe is designed not only for complex life, but for scientific discovery as well.
EFFECTS OF PRIMARY FINE-TUNING PARAMETERS
There are a number of striking effects of fine-tuning “downstream” from basic physics that also illustrate just how profoundly fine-tuned our universe is. These “effects” should not be treated as independent parameters (see discussion below). Nevertheless, they do help illustrate the idea of fine-tuning. For instance:
The polarity of the water molecule makes it uniquely fit for life. If it were greater or smaller, its heat of fusion and vaporization would make it unfit for life. This is the result of higher-level physical constants, and also of various features of subatomic particles.
WHAT ABOUT ALL THOSE OTHER PARAMETERS?
In discussing fine-tuned parameters, one can take either a maximal or a minimal approach. Those who take the maximal approach seek to create as long a list as possible. For instance, one popular Christian apologist listed thirty-four different parameters in one of his early books, and maintains a growing list, which currently has ninety parameters. He also attaches exact probabilities to various “local” factors.
While a long (and growing) list sporting exact probabilities has rhetorical force, it also has a serious downside: many of the parameters in these lists are probably derived from other, more fundamental parameters, so they’re not really independent. The rate of supernova explosions, for instance, may simply be a function of some basic laws of nature, and not be a separate instance of fine-tuning. If you’re going to legitimately multiply the various parameters to get a low probability, you want to make sure you’re not “double booking,” that is, listing the same factor twice under different descriptions. Otherwise, the resulting probability will be inaccurate. Moreover, in many cases, we simply don’t know the exact probabilities.
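The “double booking” worry can be made concrete with a toy calculation. All the probabilities below are invented purely for illustration: if one listed “parameter” is really a consequence of another, multiplying both probabilities overstates the improbability.

```python
# Toy illustration of "double booking" (all probabilities invented).
# Suppose parameter B is fully determined by parameter A: once A holds,
# B holds automatically. Then P(A and B) = P(A), not P(A) * P(B).

p_a = 0.01   # hypothetical probability of fundamental parameter A
p_b = 0.01   # B looks independent, but is actually derived from A

naive = p_a * p_b   # double-booked: treats B as a separate coincidence
correct = p_a       # B adds no new improbability given A

print(f"Naive product:   {naive:.6f}")    # overstates improbability 100-fold
print(f"Correct product: {correct:.6f}")
```

The same caution applies in reverse: genuinely independent parameters do multiply, which is why sorting derived factors from fundamental ones matters before quoting a combined probability.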
To avoid these problems, others take a more conservative approach, and focus mainly on distinct, well-understood, and widely accepted examples of fine-tuning. This is the approach taken here. While there are certainly additional examples of fine-tuning, even this conservative approach provides more than enough cumulative evidence for design. After all, it is this evidence that has motivated materialists to construct multiple-universe scenarios to avoid the implications of fine-tuning.
COSMIC CONSTANTS
Gravitational force constant
Electromagnetic force constant
Strong nuclear force constant
Weak nuclear force constant
Cosmological constant
INITIAL CONDITIONS AND “BRUTE FACTS”
Initial distribution of mass energy
Ratio of masses for protons and electrons
Velocity of light
Mass excess of neutron over proton
“LOCAL” PLANETARY CONDITIONS
Steady plate tectonics with the right kind of geological interior
Right amount of water in crust
Large moon with right rotation period
Proper concentration of sulfur
Right planetary mass
Near inner edge of circumstellar habitable zone
Low-eccentricity orbit outside spin-orbit and giant planet resonances
A few large Jupiter-mass planetary neighbors in large circular orbits
Outside spiral arm of galaxy
Near co-rotation circle of galaxy, in circular orbit around galactic center
Within the galactic habitable zone
During the cosmic habitable age
EFFECTS OF PRIMARY FINE-TUNING PARAMETERS
The polarity of the water molecule
Video: Life Can’t Exist Without Repair Mechanisms, and That’s a Problem for Origin-of-Life Theories
Faulty Production of New Machinery
We used to think that the metabolic machinery in a cell always produced perfect products. But faulty products are unavoidable, resulting in the production of interfering or toxic garbage. All living organisms must therefore have machinery that identifies problems and either repairs or recycles the faulty products.