A cell is often described as an extraordinary factory that can run autonomously and reproduce itself. The first cell required a lengthy list of components, layers of organization, and a large quantity of complex specified information, as described by previous episodes of Long Story Short. The latest entry in the series emphasizes yet another requirement for life: an abundance of specific repair mechanisms.
Damage to the “factory” of the cell occurs on two levels: damage to the stored information (either during replication or by natural degradation over time) and damage to the manufacturing machinery (either from faulty production of new machinery or damage incurred during use). Each type of damage requires specific repair mechanisms demonstrating foresight — the expectation that damage will occur and the ability to recognize, repair, and/or recycle only those damaged components. All known life requires these mechanisms.
Damage to Stored Information
The initial process of DNA replication is facilitated by a polymerase enzyme, which results in approximately one error for every 10,000 to 100,000 added nucleotides.1 However, if left uncorrected, no known life can persist with such a high error rate.2 Fortunately, DNA replication in all life includes a subsequent proofreading step — a type of damage repair — that enhances the accuracy by a factor of 100 to 1,000. The current record holder for the sloppiest DNA replication of a living organism under normal conditions is Mycoplasma mycoides (and its human-modified relative, JCVI-syn3A), where only 1 in 33,000,000 nucleotides is incorrectly copied.3
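To see how these quoted rates combine, here is a small back-of-the-envelope check in Python. The error rate and proofreading factor are the figures cited above; the genome length (roughly that of a minimal Mycoplasma-derived cell) is an assumption added only for scale.

```python
# Back-of-the-envelope check of the replication error rates quoted above.
# The polymerase and proofreading figures come from the text; the genome
# size (~543 kb, roughly a minimal Mycoplasma-like cell) is an assumption.

polymerase_error_rate = 1 / 10_000   # worst case: 1 error per 10^4 nucleotides
proofreading_factor = 1_000          # best case: proofreading improves accuracy 1,000x

corrected_rate = polymerase_error_rate / proofreading_factor
print(f"corrected error rate: 1 in {1 / corrected_rate:,.0f} nucleotides")
# -> corrected error rate: 1 in 10,000,000 nucleotides

# The record holder cited in the text (JCVI-syn3A) does better still:
record_rate = 1 / 33_000_000
genome_size = 543_000                # assumed genome length in base pairs
errors_per_replication = genome_size * record_rate
print(f"expected errors per genome copy: {errors_per_replication:.3f}")
# -> ~0.016, i.e., roughly one error per 60 genome replications
```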
Following the replication of DNA, a daily barrage of DNA damage occurs during normal operating conditions. Life, therefore, requires sophisticated and precise DNA repair mechanisms. DNA damage response is estimated to involve a hierarchical organization of 605 proteins in 109 human assemblies.4 Efforts to make the simplest possible cell by stripping out all non-essential genes have successfully reduced DNA repair to a minimal set of six genes.5 But these six genes are encoded in thousands of DNA base pairs, and the machinery to transcribe and translate those genes into the repair enzymes requires a minimum of 149 genes.6 Thus, the DNA code required to make DNA repair mechanisms easily exceeds 100,000 base pairs. Here, we encounter a great paradox, first identified in 1971 by Manfred Eigen7: DNA repair is essential to maintain DNA. However, the genes that code for DNA repair could not have evolved unless the repair mechanisms were already present to protect the DNA.
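The "easily exceeds 100,000 base pairs" claim is simple multiplication. A minimal sketch, where the gene counts come from the text and the average gene length (a typical bacterial ballpark of ~1,000 bp) is an assumption:

```python
# Rough arithmetic behind the ">100,000 base pairs" claim. Gene counts
# come from the text; the average gene length is an assumed round number
# typical of bacterial genes.

repair_genes = 6            # minimal DNA-repair gene set (from the text)
expression_genes = 149      # minimal transcription/translation machinery (from the text)
avg_gene_length_bp = 1_000  # assumed average bacterial gene length

total_bp = (repair_genes + expression_genes) * avg_gene_length_bp
print(f"DNA needed to encode repair + expression machinery: ~{total_bp:,} bp")
# -> ~155,000 bp, comfortably over 100,000
```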
Faulty Production of New Machinery
We used to think that the metabolic machinery in a cell always produced perfect products. However, faulty products are unavoidable, resulting in the production of interfering or toxic garbage. All living organisms must, therefore, have machinery that identifies problems and either repairs or recycles the faulty products.
The cell’s central manufacturing machine is the ribosome, a marvel that produces functional proteins from strands of mRNA (with the help of many supporting molecules). Unfortunately, 2-4 percent of mRNA strands get stuck in the ribosome during translation into a protein.8 Not only does this halt production, but it could produce a toxic, half-finished protein.
If the ribosomes could not get “unstuck,” life as we know it would end. In self-replication, a single cell must produce an entire library of proteins, burdening the cell’s ribosomes. However, with a 2-4 percent rate of stuck mRNA strands, the average cell would have each of its ribosomes get stuck at least five times before the cell could replicate.9 Therefore, life could never replicate, and metabolism would cease unless this problem was solved.
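The "at least five times" figure is an expected-value calculation. A sketch, assuming each ribosome must complete on the order of 250 rounds of translation before the cell divides (the stall rate comes from the text; the workload figure is an assumption chosen for illustration):

```python
# Expected-value sketch of the "stuck at least five times" claim. The
# stall probability comes from the text; the number of translation rounds
# per ribosome per division is an illustrative assumption.

stall_probability = 0.02         # low end of the 2-4% range quoted above
translations_per_ribosome = 250  # assumed rounds of translation before division

expected_stalls = stall_probability * translations_per_ribosome
print(f"expected stalls per ribosome before division: {expected_stalls:.0f}")
# -> 5; at the 4% rate the same workload gives 10 stalls

# Probability a given ribosome escapes with zero stalls:
p_clean = (1 - stall_probability) ** translations_per_ribosome
print(f"chance a ribosome never stalls: {p_clean:.4f}")  # ~0.0064
```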
Fortunately, all forms of life, even the simplest,9 are capable of trans-translation, typically involving a three-step process. First, a hybrid molecule combining transfer and messenger RNA (tmRNA), aided by two helper molecules (SmpB and EF-Tu), recognizes that an mRNA is stuck in the ribosome and attaches a label to the half-formed protein. This label, called a degron, is essentially a polyalanine peptide. Next, the condemned protein is recognized, degraded, and recycled by one of the cell’s numerous proteases. Finally, the mRNA must be labeled and recycled to prevent clogging other ribosomes. In some bacteria,10 a pyrophosphohydrolase enzyme modifies the end of the mRNA, labeling it for destruction. An RNase (another enzyme) then recognizes this label, grabs hold of the mRNA, and draws it close to its magnesium ion, which causes cleavage of the RNA. Another RNase finishes the job, breaking the mRNA into single nucleotides, which can be reused.
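The three-step rescue just described is, in effect, a small pipeline: tag the protein, degrade it, then recycle the mRNA. The toy sketch below models only that logic; the molecule names follow the text, and the data structures are purely illustrative.

```python
# Toy model of the three-step trans-translation rescue described above.
# Only the logic of the pipeline is modeled; all data structures here
# are illustrative placeholders.

def recycle(molecule: str) -> str:
    # Stand-in for the proteases/RNases that break molecules into
    # reusable parts (amino acids or single nucleotides).
    return f"recycled({molecule})"

def rescue_stalled_ribosome(ribosome: dict) -> dict:
    if not ribosome["stalled"]:
        return ribosome  # nothing to do

    # Step 1: tmRNA, with SmpB and EF-Tu, tags the half-made protein
    # with a polyalanine degron.
    tagged_protein = ribosome.pop("nascent_protein") + "+degron"

    # Step 2: a protease recognizes the degron and recycles the protein.
    amino_acids = recycle(tagged_protein)

    # Step 3: the stuck mRNA is labeled and cleaved by RNases so its
    # nucleotides can be reused, freeing the ribosome.
    nucleotides = recycle(ribosome.pop("mrna"))

    ribosome["stalled"] = False
    return ribosome

stuck = {"stalled": True, "nascent_protein": "half_protein", "mrna": "stuck_mrna"}
print(rescue_stalled_ribosome(stuck))  # -> {'stalled': False}
```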
The presence of tools that can destroy proteins and RNA also requires that those tools be highly selective. If these tools evolved, one would expect the initial versions to be non-selective, destroying any proteins or RNA within reach, extinguishing life, and blocking the process of evolution.11
Note that the tools for trans-translation and protein and RNA recycling are all stored in DNA, which repair mechanisms must protect. These tools cannot be produced without ribosomes, but the ribosomes cannot be unstuck without the action of trans-translation. Thus, we encounter another case of circular causality.
Damage Incurred During Use
The regular operation of enzymes and metabolites (like coenzymes or cofactors) involves chemical reactions that follow specific paths. Deviations from the desired paths can occur from interferences like radiation, oxidative stress, or encountering the wrong “promiscuous” enzyme. These deviations result in rogue molecules that interfere with metabolism or are toxic to the cell. As a result, even the simplest forms of life require several metabolic repair mechanisms:
“There is little room to doubt that metabolite damage and the systems that counter it are mainstream metabolic processes that cannot be separated from life itself.”
“It is increasingly evident that metabolites suffer various kinds of damage, that such damage happens in all organisms, and that cells have dedicated systems for damage repair and containment.”
As a relatively simple example of a required repair mechanism, even the simplest known cell (JCVI-syn3A) has to deal with a sticky situation involving sulfur. Several metabolic reactions require molecules with a thiol group — sulfur bonded to hydrogen and an organic molecule. The organism needs to maintain its thiol groups. Still, they have an annoying tendency to cross-link (i.e., two thiol groups create a disulfide bond, fusing the two molecules). Constant maintenance is required to break up this undesired linking. Even the simplest known cell requires two proteins (TrxB/JCVISYN3A_0819 and TrxA/JCVISYN3A_0065) to restore thiol groups and maintain metabolism.12 Because the repair proteins are a product of the cell’s metabolism, this creates another path of circular causality: You can’t have prolonged metabolism without the repair mechanisms, but you cannot make the repair mechanisms without metabolism.
Beyond repair mechanisms, all forms of life include damage prevention mechanisms. These mechanisms can destroy rogue molecules, stabilize molecules prone to going rogue, or guide chemical reactions toward less harmful outcomes. For example, when DNA is replicated, available monomers of the four canonical nucleotides (G, C, T, and A) are incorporated into the new strand. Some of the cell’s normal metabolites, like dUTP (deoxyuridine triphosphate), are similar to a canonical nucleotide and can be erroneously incorporated into DNA. Even the simplest cell (once again, JCVI-syn3A) includes an enzyme (deoxyuridine triphosphate pyrophosphatase) to hydrolyze dUTP and prevent the formation of corrupted DNA.6
Summing Up the Evidence
Those who promote unguided abiogenesis brush off these required mechanisms, claiming that life started as simplified “proto-cells” that did not need repair. However, no evidence exists that life could persist or replicate without these repair mechanisms. The presence of the repair mechanisms invokes several examples of circular causality — quite a conundrum for unintelligent, natural processes alone. The belief that simpler “proto-cells” didn’t require repair mechanisms requires blind faith, set against the prevailing scientific evidence.
Weinstein says Stephen Meyer and his “high-quality colleagues” in the ID research community have exposed the problems Darwinists need to work on solving.
Settled Science
About Dr. Meyer, he says:
I encountered people like Stephen Meyer, who were not phony scientists, pretending to do the work. They were very good at what they did. And I believe Stephen Meyer is motivated by a religious motivation, but we rarely ask when somebody takes up science, “What are you really in it for? Are you in it for the fame?” That is not a legitimate challenge to somebody’s work.
And the fact is, Stephen Meyer is very good at what he does. He may be motivated by the thought that at the end of the search, he is going to find Jesus. But in terms of the quality of his arguments, I was very impressed when I met him: his love for biology, his love for creatures, the weirder the better he likes them. So that looked very familiar to me.
Motivation should be irrelevant. The quality of the science is what counts. I would add that none of us are mind readers, and we can never know someone else’s motivation. Weinstein says ID is about science, not religion:
And it also became obvious to me in interacting with Stephen Meyer and many of his high-quality colleagues that they are motivated, for whatever reason, to do the job that we are supposed to be motivated to do inside of biology. They are seeking cracks in the theory. Things that we have not yet explained. And they are seeking those things for their own reasons, but the point is we are supposed to be understanding what parts of the stories we tell ourselves are not true, because that is how we get smarter over time.
Weinstein says Darwinists are shrinking from a fight they wrongly feel they should not have to bother with: If you decide… that your challengers are not entitled to a hearing because they are motivated by the wrong stuff, then you do two things. One, you artificially stunt the growth of your field, and you create a more vibrant realm where your competitors have a better field to play in because you have left numerous holes in the theory ready to be identified, which I think is what is going on. The better intelligent design folks are finding real questions raised by Darwinism, and the Darwinists, instead of answering those questions, [are] deciding it is not worth their time. And that is putting us on a collision course.
“Giving Up Darwin”
Heying cites the 2019 public defection from Darwinism of Yale computer scientist David Gelernter, who pointed to Meyer’s writing as his primary reason for “Giving Up Darwin.” She admits she has not kept up with the challenges from ID but agrees that she should keep up, and that is because challenges like those from ID can make the evolutionary establishment “smarter.” Ignoring the challenges makes the establishment dumber — stagnant and self-satisfied. She says: “I’m unfamiliar with most of the intelligent design movement arguments. It has not felt like it was my obligation to be familiar with them. Perhaps what you are arguing is it is our responsibility.”
Weinstein, unlike Coyne or Dawkins, is up for talking and debating with ID proponents:
I’m open to that battle and I expect that if we pursue that question, what we are going to find is, oh, there is a layer of Darwinism we did not get, and it is going to turn out that the intelligent design folks are going to be wrong. But they will have played a very noble and important role in the process of us getting smarter. And look, I think Stephen Meyer at the end of the day; I do not think he is going to surrender to the idea that there is no God at the end of this process. But if we find a layer of Darwinism that has not been spotted, that answers his question, I think he is going to be delighted with it the same way he is delighted by the prospect of seeing whale sharks.
Again, these are remarkable concessions from a couple of scientists who are not at all looking to help ID but who understand that intelligent design, not Darwinism, is currently at biology’s cutting edge.
Najash rionegrina, from the Late Cretaceous of Patagonia, is one of the oldest fossil snakes known to science. It was found in terrestrial sediments and shows a well-defined sacrum with a pelvis connected to the spine and functional hind legs. Therefore, it was considered to support the origin of snakes from burrowing rather than aquatic ancestors (Gershon 2006). In a previous article, I reported on the highly controversial and hotly debated topic of snake origins (Bechly 2023), where you can find links to all the relevant scientific literature.
But there was another open question concerning the origin of snakes: Did their distinct body plan evolve gradually as predicted by Darwinian evolution, or did snakes appear abruptly on the scene as predicted by intelligent design theory? Earlier this year, researchers from the University of Michigan and Stony Brook University published a seminal new study in the prestigious journal Science (Title et al. 2024). This study brought critical new insights with the mathematical and statistical modeling of the most comprehensive evolutionary tree of snakes and lizards, based on a comparative analysis of the traits of 60,000 museum specimens and the partial sequencing of the genomes of 1,000 species (SBU 2024, Osborne 2024). The study found that all the characteristic traits of the snake body plan, such as the flexible skull with articulated jaws, the loss of limbs, and the elongated body with hundreds of vertebrae, appeared in a short window of time about 100-110 million years ago and became “evolutionary winners” because they evolved “at breakneck pace” (Wilcox 2024), which the senior author of the study explained with the ad hoc hypothesis that “snakes have an evolutionary clock that ticks a lot faster than many other groups of animals, allowing them to diversify and evolve at super quick speeds” (Osborne 2024).
That is not an explanation but just a rephrasing of the problem. How could such a super quick evolution be accommodated within the framework of Darwinian population genetics and thus overcome the waiting time problem? After all, the complex re-engineering of a body plan requires coordinated mutations that need time to occur and spread in an ancestral population. Did anybody bother to do the actual math to check if such a proposed supercharged evolution is even feasible, given the available window of time and reasonable parameters for mutation rates, effective population sizes, and generation turnover rates? Of course not. We have the usual sweeping generalizations and fancy just-so stories.
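As a purely illustrative sketch of the kind of arithmetic being called for here (not the study’s analysis, and with every parameter below an assumption), a deliberately naive neutral model gives a feel for the shape of the waiting-time calculation:

```python
# Illustrative waiting-time arithmetic. This is NOT the study's analysis:
# it is a deliberately naive neutral model, and every parameter below is
# an assumption chosen only to show the shape of the calculation.

mutation_rate = 1e-9        # assumed per-site, per-generation mutation rate
required_mutations = 5      # assumed number of coordinated changes needed
generations_per_year = 2    # assumed generation turnover

# Under neutrality, substitutions at a given site accumulate at roughly
# the mutation rate, so the expected wait for one specific change to arise
# and fix is ~1/mu generations; a naive sequential model adds the waits.
wait_generations = required_mutations / mutation_rate
wait_years = wait_generations / generations_per_year
print(f"naive sequential wait: {wait_years:.1e} years")
# -> 2.5e+09 years, versus a proposed window on the order of 1e7 years

# Real models must also weigh selection, recombination, population size,
# and parallel lineages, any of which can change this estimate sharply.
```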
I predict this will be another good example of the fatal waiting time problem for neo-Darwinism. We can add the origin of snakes to many abrupt appearances in the history of life (Bechly 2024), and I am happy to embrace the name coined by the authors of the new study f
Long before modern technology, biology students compared the workings of life to machines.1 In recent decades, this comparison has become stronger than ever. As a paper in Nature Reviews Molecular Cell Biology states, “Today biology is revealing the importance of ‘molecular machines’ and of other highly organized molecular structures that carry out the complex physicochemical processes on which life is based.”2 Likewise, a paper in Nature Methods observed that “[m]ost cellular functions are executed by protein complexes, acting like molecular machines.”
What are Molecular Machines?
According to an article in the journal Accounts of Chemical Research, a molecular machine is “an assemblage of parts that transmit forces, motion, or energy from one to another in a predetermined manner.”4 A 2004 article in the Annual Review of Biomedical Engineering asserted that “these machines are more efficient than their macroscale counterparts,” further noting that “[c]ountless such machines exist in nature.”5 Indeed, a single research project in 2006 reported the discovery of over 250 new molecular machines in yeast.6
Molecular machines have posed a stark challenge to those who seek to understand them in Darwinian terms as the products of an undirected process. In his 1996 book Darwin’s Black Box: The Biochemical Challenge to Evolution, biochemist Michael Behe explained the surprising discovery that life is based upon machines: Shortly after 1950, science advanced to where it could determine the shapes and properties of a few of the molecules that make up living organisms. Slowly, painstakingly, the structures of more biological molecules were elucidated, and the way they work was inferred from countless experiments. The cumulative results clearly show that life is based on machines made of molecules! Molecular machines haul cargo from one place in the cell to another along “highways” made of other molecules, while still others act as cables, ropes, and pulleys to hold the cell in shape. Machines turn cellular switches on and off, sometimes killing the cell or causing it to grow. Solar-powered machines capture the energy of photons and store it in chemicals. Electrical machines allow the current to flow through nerves. Manufacturing machines build other molecular machines, as well as themselves. Cells swim using machines, copy themselves with machinery, ingest food with machinery. In short, highly sophisticated molecular machines control every cellular process. Thus, the details of life are finely calibrated, and the machinery of life is enormously complex.7
Behe then posed the question, “Can all of life be fit into Darwin’s theory of evolution?,” and answered: “The complexity of life’s foundation has paralyzed science’s attempt to account for it; molecular machines raise an as-yet impenetrable barrier to Darwinism’s universal reach.”
Even those who disagree with Behe’s answer to that question have marveled at the complexity of molecular machines. In 1998, former president of the U.S. National Academy of Sciences Bruce Alberts wrote the introductory article to an issue of Cell, one of the world’s top biology journals, celebrating molecular machines. Alberts praised the “speed,” “elegance,” “sophistication,” and “highly organized activity” of “remarkable” and “marvelous” structures inside the cell. He went on to explain what inspired such words:
The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. . . . Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.9
Likewise, in 2000, Marco Piccolino wrote in Nature Reviews Molecular Cell Biology that “extraordinary biological machines realize the dream of the seventeenth-century scientists … that ‘machines will be eventually found not only unknown to us but also unimaginable by our mind.’” He notes that modern biological machines “surpass the expectations of the early life scientists.”
A few years later, a review article in the journal Biological Chemistry demonstrated the difficulty evolutionary scientists have in understanding molecular machines. Essentially, they must deny their scientific intuitions when grappling with the fact that biological structures appear engineered, as if built from blueprints:
Molecular machines, although it may often seem so, are not made with a blueprint at hand. Yet, biochemists and molecular biologists (and numerous scientists of other disciplines) are used to thinking as an engineer, more precisely, a reverse engineer. But there are no blueprints … ‘Nothing in biology makes sense except in the light of evolution’: we know that Dobzhansky (1973) must be right. But our mind, despite being a product of tinkering itself, strangely wants us to think like engineers.
But do molecular machines make sense in the light of undirected Darwinian evolution? Does it make sense to deny that machines show all signs that they were designed? Michael Behe argues that molecular machines meet the test Darwin posed to falsify his theory and indicate intelligent design.
Darwin knew his theory of gradual evolution by natural selection carried a heavy burden: “If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.”
… What type of biological system could not be formed by “numerous successive slight modifications”? For starters, a system that is irreducibly complex. By irreducibly complex, I mean a single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning.
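The definition just quoted has a simple operational form: a system is irreducibly complex if the intact whole functions but removing any single part abolishes the function. A toy sketch of that test follows; the parts list and function check are hypothetical placeholders, not any real biochemical system.

```python
# Toy operational form of the definition just quoted: a system is
# irreducibly complex (in this sense) if the whole works but every
# single-part deletion fails. Parts and function check are hypothetical.

def system_functions(parts: set[str]) -> bool:
    # Hypothetical function check: this toy "mousetrap" needs every part.
    required = {"base", "spring", "hammer", "catch", "holding_bar"}
    return required <= parts

def irreducibly_complex(parts: set[str]) -> bool:
    """True if the whole works but every single-part knockout fails."""
    if not system_functions(parts):
        return False
    return all(not system_functions(parts - {p}) for p in parts)

print(irreducibly_complex({"base", "spring", "hammer", "catch", "holding_bar"}))
# -> True: the intact system works, and every knockout abolishes function
```

This knockout loop is also the logic behind the genetic knockout experiments mentioned below: delete one gene at a time and test whether the function survives.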
Molecular machines are highly complex, and in many cases we are just beginning to understand their inner workings. As a result, while we know that many complex molecular machines exist, to date only a few have been studied in enough detail for biologists to test directly for irreducible complexity through genetic knockout experiments or mutational sensitivity tests. A non-exhaustive list briefly describing 40 molecular machines identified in the scientific literature follows. The first section will cover molecular machines that scientists have argued show irreducible complexity. The second section will discuss molecular machines that may be irreducibly complex but have not yet been studied in enough detail by biochemists to make a conclusive argument.
I. MOLECULAR MACHINES THAT SCIENTISTS HAVE ARGUED SHOW IRREDUCIBLE COMPLEXITY
1. BACTERIAL FLAGELLUM
The flagellum is a rotary motor in bacteria that drives a propeller to spin, much like an outboard motor, powered by ion flow to drive rotary motion. Capable of spinning up to 100,000 rpm,13 one paper in Trends in Microbiology called the flagellum “an exquisitely engineered chemiosmotic nanomachine; nature’s most powerful rotary motor, harnessing a transmembrane ion-motive force to drive a filamentous propeller.”14 Because of its motor-like structure and internal parts, one molecular biologist wrote in the journal Cell, “[m]ore so than other motors, the flagellum resembles a machine designed by a human.”15 Genetic knockout experiments have shown that the E. coli flagellum is irreducibly complex with respect to its approximately 35 genes.16 Despite the flagellum being one of the best-studied molecular machines, a 2006 review article in Nature Reviews Microbiology admitted that
“the flagellar research community has scarcely considered how these systems have evolved.”
2. EUKARYOTIC CILIUM
The cilium is a hair-like or whip-like structure built upon a system of microtubules, typically with nine outer microtubule pairs and two inner microtubules. The microtubules are connected with nexin arms, and a paddling motion is driven by dynein motors.18 These machines perform numerous functions in eukaryotes, such as allowing sperm to swim or removing foreign particles from the throat. Michael Behe observes that the “paddling” function of the cilium will fail if it is missing any microtubules or connecting arms or lacks sufficient dynein motors, making it irreducibly complex.
3. AMINOACYL-TRNA SYNTHETASES (AARS)
These enzymes charge tRNA with the proper amino acid so it can accurately participate in the translation process. In this function, aaRS enzymes act as an “aminoacylation machine.”
Most cells require twenty different aaRS enzymes, one for each amino acid, without which the transcription/translation machinery could not function properly.21 As one article in Cell Biology International stated: “The nucleotide sequence is also meaningless without a conceptual translative scheme and physical ‘hardware’ capabilities. Ribosomes, tRNA, aminoacyl-tRNA synthetases, and amino acids are all hardware components of the Shannon message ‘receiver’. But the instructions for this machinery is itself coded in DNA and executed by protein ‘workers’ produced by that machinery. The message cannot be received and understood without the machinery and protein workers. And without genetic instruction, the machinery cannot be assembled.”22 Arguably, these components form an irreducibly complex system.23
4. BLOOD CLOTTING CASCADE
The blood coagulation system “is a typical example of a molecular machine, where the assembly of substrates, enzymes, protein cofactors and calcium ions on a phospholipid surface markedly accelerates the coagulation rate.”24 According to a paper in BioEssays, “the molecules interact with cell surface (molecules) and other proteins to assemble reaction complexes that can act as a molecular machine.”25 Michael Behe argues, based upon experimental data, that the blood clotting cascade has an irreducible core with respect to its components after its initiation pathways converge.26
5. RIBOSOME
The ribosome is an “RNA machine”27 that “involves more than 300 proteins and RNAs”28 to form a complex where messenger RNA is translated into protein, thereby playing a crucial role in protein synthesis in the cell. Craig Venter, a leader in genomics and the Human Genome Project, has called the ribosome “a magnificent complex entity” that requires a “minimum for the ribosome [of] about 53 proteins and 3 polynucleotides,” leading some evolutionary biologists to fear that it may be irreducibly complex.
6. ANTIBODIES AND THE ADAPTIVE IMMUNE SYSTEM
Antibodies are “the ‘fingers’ of the blind immune system — they allow it to distinguish a foreign invader from the body itself.”30 But the processes that generate antibodies require a suite of molecular machines.31 Lymphocyte cells in the blood produce antibodies by mixing and matching portions of unique genes to produce over 100,000,000 varieties of antibodies.32 This “adaptive immune system” allows the body to tag and destroy most invaders. Michael Behe argues that this system is irreducibly complex because numerous components must be present to function: “A large repertoire of antibodies won’t do much good if there is no system to kill invaders. A system to kill invaders won’t do much good if there’s no way to identify them. At each step, we are stopped by local system problems and requirements of the integrated system.”
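The “100,000,000 varieties” figure arises from combinatorial shuffling of gene segments. A rough sketch follows; the segment counts are ballpark textbook-style numbers used here as assumptions, not figures from the text:

```python
# Rough combinatorics behind the "mixing and matching" described above.
# Segment counts are ballpark assumptions for the human heavy- and
# light-chain loci, used purely for illustration.

V_heavy, D_heavy, J_heavy = 40, 25, 6  # assumed heavy-chain segment counts
V_light, J_light = 70, 10              # assumed combined light-chain counts

heavy_combinations = V_heavy * D_heavy * J_heavy    # 6,000
light_combinations = V_light * J_light              # 700
pairings = heavy_combinations * light_combinations  # 4,200,000

print(f"combinatorial diversity from pairing alone: {pairings:,}")
# Junctional diversity (imprecise joining at segment boundaries) multiplies
# this by orders of magnitude, pushing the repertoire past 10^8.
```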
II. Additional Molecular Machines
7. SPLICEOSOME
The spliceosome removes introns from RNA transcripts prior to translation. According to a paper in Cell, “To provide both accuracy to the recognition of reactive splice sites in the pre-mRNA and flexibility to the choice of splice sites during alternative splicing, the spliceosome exhibits exceptional compositional and structural dynamics that are exploited during substrate-dependent complex assembly, catalytic activation, and active site remodeling.”34 A 2009 paper in PNAS observed that “[t]he spliceosome is a massive assembly of 5 RNAs and many proteins”35 — another paper suggests “300 distinct proteins and five RNAs.”
“Fine-tuning” refers to various features of the universe that are necessary conditions for the existence of complex life. Such features include the initial conditions and “brute facts” of the universe, the laws of nature or the numerical constants present in those laws (such as the gravitational force constant), and local features of habitable planets (such as a planet’s distance from its host star). These features must fall within a very narrow range of values for chemical-based life to be possible. Some popular examples are subject to dispute. There are also some complicated philosophical debates about how to calculate probabilities.
Nevertheless, many well-established examples of fine-tuning are widely accepted even by scientists who are hostile to theism and design. For instance, Stephen Hawking has admitted: “The remarkable fact is that the values of these numbers [the constants of physics] seem to have been very finely adjusted to make possible the development of life.” (A Brief History of Time, p. 125) Here are the most celebrated and widely accepted examples of fine-tuning for the existence of life:
COSMIC CONSTANTS
Gravitational force constant
Electromagnetic force constant
Strong nuclear force constant
Weak nuclear force constant
Cosmological constant
INITIAL CONDITIONS AND “BRUTE FACTS”
Initial distribution of mass-energy
The ratio of masses for protons and electrons
Velocity of light
Mass excess of neutron over proton
“LOCAL” PLANETARY CONDITIONS
Steady plate tectonics with the right kind of geological interior
The right amount of water in the crust
Large moon with a right rotation period
Proper concentration of sulfur
Right planetary mass
Near inner edge of the circumstellar habitable zone
Low-eccentricity orbit outside spin-orbit and giant planet resonances
A few, large Jupiter-mass planetary neighbors in large circular orbits
The outside spiral arm of the galaxy
Near co-rotation circle of galaxy, in circular orbit around galactic center
Within the galactic habitable zone
During the cosmic habitable age
EFFECTS OF PRIMARY FINE-TUNING PARAMETERS
The polarity of the water molecule
COSMIC CONSTANTS
Gravitational force constant (the large-scale attractive force that holds people on planets, and holds planets, stars, and galaxies together) — too weak, and planets and stars cannot form; too strong, and stars burn up too quickly.
Electromagnetic force constant (the small-scale attractive and repulsive force that holds atoms, electrons, and atomic nuclei together) — if it were much stronger or weaker, we would not have stable chemical bonds.
Strong nuclear force constant (the small-scale attractive force that holds the nuclei of atoms together, which would otherwise repel each other because of the electromagnetic force) — if it were weaker, the universe would have far fewer stable chemical elements, eliminating several essentials to life.
Weak nuclear force constant (governs radioactive decay) — if it were much stronger or weaker, life-essential stars could not form. (These are the four “fundamental forces.”)
Cosmological constant (which controls the universe’s expansion speed) — this refers to the balance of the attractive force of gravity with a hypothesized repulsive force of space observable only at enormous scales. It must be very close to zero; that is, these two forces must be nearly perfectly balanced. The cosmological constant must be fine-tuned to something like 1 part in 10^120 to get the right balance. If it were just slightly more positive, the universe would fly apart; slightly negative, and the universe would collapse.
As with the cosmological constant, the ratios of the other constants must be fine-tuned relative to each other. Since the logical range of strengths of some forces is potentially infinite, to get a handle on the precision of fine-tuning, theorists often think in terms of the range of force strengths, with gravity the weakest and the strong nuclear force the strongest. The strong nuclear force is 10^40 times stronger than gravity — ten thousand billion billion billion billion times the strength of gravity. Think of that range as represented by a ruler stretching across the entire observable universe, about 15 billion light years. If we increased the strength of gravity by just 1 part in 10^34 of the range of force strengths (the equivalent of moving less than one inch on the universe-long ruler), the universe could not have life-sustaining planets.
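The ruler analogy can be checked with simple unit conversion. A sketch using the figures in the text (the meters-per-light-year constant and inch conversion are standard):

```python
# Unit-conversion check of the "universe-long ruler" analogy above.
# The 15-billion-light-year ruler and the 1-part-in-10^34 shift are the
# figures used in the text.

meters_per_light_year = 9.46e15
ruler_length_m = 15e9 * meters_per_light_year  # ~1.4e26 m

shift_fraction = 1e-34                         # 1 part in 10^34 of the range
shift_m = ruler_length_m * shift_fraction

inches = shift_m / 0.0254
print(f"shift on the ruler: {shift_m:.2e} m ({inches:.2e} inches)")
# -> ~1.4e-08 m, about 5.6e-07 inches: far less than one inch, as claimed
```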
INITIAL CONDITIONS AND “BRUTE FACTS”
Initial Conditions. Besides physical constants, there are initial or boundary conditions, which describe the conditions present at the universe’s beginning. Initial conditions are independent of the physical constants. One way of summarizing the initial conditions is to speak of the extremely low entropy (that is, highly ordered) initial state of the universe. This refers to the initial distribution of mass-energy. In The Road to Reality, physicist Roger Penrose estimates that the odds of our universe’s initial low-entropy state occurring by chance alone are on the order of 1 in 10^(10^123). This ratio is vastly beyond our powers of comprehension. Since a life-bearing universe is intrinsically interesting, this ratio should be more than enough to raise the question: Why does such a universe exist? If someone is unmoved by this ratio, they probably will not be persuaded by additional examples of fine-tuning.
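In standard notation, Penrose’s estimate is a double exponential, which is why ordinary comparisons fail to convey its size. Written out (the particle-count comparison is a common illustration, added here as context rather than taken from the text):

```latex
% Penrose's estimate (The Road to Reality) for the odds of the universe's
% low-entropy initial state arising by chance:
P \approx 10^{-10^{123}}
% For scale: the observable universe contains roughly 10^{80} particles,
% so even writing one digit on every particle could not spell out the
% 10^{123} zeros in the denominator.
```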
Besides initial conditions, there are some other well-known features about the universe that are apparently just brute facts. And these too exhibit a high degree of fine-tuning. Among the fine-tuned (apparently) “brute facts” of nature are the following:
The ratio of masses for protons and electrons — If it were slightly different, building blocks for life, such as DNA, could not be formed.
Velocity of light — If it were larger, stars would be too luminous. If it were smaller, stars would not be luminous enough.
Mass excess of neutron over proton — if it were greater, there would be too few heavy elements for life. If it were smaller, stars would quickly collapse as neutron stars or black holes.
“LOCAL” PLANETARY CONDITIONS
But even in a universe fine-tuned at the cosmic level, local conditions can still vary dramatically. Even in this fine-tuned universe, the vast majority of locations in the universe are unsuited for life. In The Privileged Planet, Guillermo Gonzalez and Jay Richards identify 12 broad, widely recognized fine-tuning factors required to build a single, habitable planet. All 12 factors can be found together in the Earth. There are probably many more such factors. Most of these factors could be split out to make sub-factors since each contributes to a planet’s habitability in multiple ways.
Steady plate tectonics with the right kind of geological interior (which allows the carbon cycle and generates a protective magnetic field). If the Earth’s crust were significantly thicker, plate tectonic recycling could not occur.
The right amount of water in the crust (which provides the universal solvent for life).
Large moon with right planetary rotation period (which stabilizes a planet’s tilt and contributes to tides). With the Earth, the gravitational pull of its moon stabilizes the angle of its axis at a nearly constant 23.5 degrees. This ensures relatively temperate seasonal changes and the only climate in the solar system mild enough to sustain complex living organisms.
Proper concentration of sulfur (which is necessary for critical biological processes).
Right planetary mass (which allows a planet to retain the right type and right thickness of atmosphere). If the Earth were smaller, its magnetic field would weaken, allowing the solar wind to strip away our atmosphere, slowly transforming our planet into a dead, barren world like Mars.
Near the inner edge of the circumstellar habitable zone (which allows a planet to maintain the right amount of water on the surface). If the Earth were just 5% closer to the Sun, it would be subject to the same fate as Venus, a runaway greenhouse effect, with temperatures rising to nearly 900 degrees Fahrenheit. Conversely, if the Earth were approximately 20% farther from the Sun, it would experience runaway glaciations that have left Mars sterile.
Low-eccentricity orbit outside spin-orbit and giant planet resonances (which allows a planet to maintain a safe orbit).
A few large Jupiter-mass planetary neighbors in large circular orbits (which protect the habitable zone from too many comet bombardments). If the Earth were not protected by the gravitational pulls of Jupiter and Saturn, it would be far more susceptible to collisions with devastating comets that would cause mass extinctions. As it is, the larger planets in our solar system protect the Earth from the most dangerous comets.
The outside spiral arm of the galaxy (which allows a planet to stay safely away from supernovae).
Near the co-rotation circle of the galaxy, in a circular orbit around the galactic center (which enables a planet to avoid traversing dangerous galaxy parts).
Within the galactic habitable zone (which allows a planet access to heavy elements safely away from the dangerous galactic center).
During the cosmic habitable age (when heavy elements and active stars exist without too high a concentration of dangerous radiation events).
This is an elementary list of “ingredients” for building a single, habitable planet. We currently have only rough probabilities for most of these items. For instance, less than ten percent of stars, even in the Milky Way Galaxy, are within the galactic habitable zone. And the likelihood of getting just the right kind of moon by chance is almost certainly very low, though we have no way of calculating just how low. We can say that most locations in the visible universe, even within otherwise habitable galaxies, are incompatible with life.
It’s important to distinguish this local “fine-tuning” from cosmic fine-tuning. With cosmic fine-tuning, we are comparing the actual universe with other possible but non-actual universes. And though theorists sometimes postulate multiple universes to try to avoid the embarrassment of a fine-tuned universe, we have no direct evidence that other universes exist. However, when dealing with our local planetary environment, we compare it with other known or theoretically possible locations within the universe. That means that, given a large enough universe, perhaps you could get these local conditions at least once just by chance (though it would be “chance” tightly constrained by cosmic fine-tuning).
So does that mean that evidence of local fine-tuning is useless for inferring design? No. Gonzalez and Richards argue that we can still discern a purposeful pattern in local fine-tuning. The same cosmic and local conditions, which allow complex observers to exist, also provide the best setting overall for scientific discovery. So, complex observers will find themselves in the best overall setting for observation. You would expect this if the universe were designed for discovery, but not otherwise. So the fine-tuning of physical constants, cosmic initial conditions, and local conditions for habitability suggests that the universe is designed for complex life and scientific discovery.
EFFECTS OF PRIMARY FINE-TUNING PARAMETERS
Some striking effects of fine-tuning “downstream” from basic physics illustrate just how profoundly fine-tuned our universe is. These “effects” should not be treated as independent parameters (see the discussion below). Nevertheless, they do help illustrate the idea of fine-tuning. For instance:
The polarity of the water molecule makes it uniquely suitable for life. If it were greater or smaller, its heat of fusion and vaporization would make it unsuitable. This results from higher-level physical constants and various features of subatomic particles.
WHAT ABOUT ALL THOSE OTHER PARAMETERS?
One can take either a maximal or a minimal approach when discussing fine-tuned parameters. Those who take the maximal approach seek to create as long a list as possible. For instance, one popular Christian apologist listed thirty-four different parameters in one of his early books and maintained a growing list with ninety parameters. He also attaches exact probabilities to various “local” factors.
While a long (and growing) list sporting exact probabilities has rhetorical force, it also has a serious downside: many of the parameters in these lists are probably derived from other, more fundamental parameters, so they are not independent. For instance, the rate of supernova explosions may be a function of some basic laws of nature and not a separate instance of fine-tuning. If you want to multiply the various parameters legitimately to get a low probability, you must ensure you are not “double booking,” listing the same factor twice under different descriptions. Otherwise, the resulting probability will be inaccurate. Moreover, in numerous cases, we do not know the exact probabilities.
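The “double booking” worry is easy to see numerically. A toy sketch, with invented factor names and probabilities (nothing below comes from a real fine-tuning estimate):

```python
# Toy illustration of the "double booking" worry described above. The
# factors and probabilities are invented purely for illustration.

factors = {
    "right planetary mass": 0.1,
    "retains an atmosphere": 0.1,  # largely a CONSEQUENCE of planetary mass
    "water in the crust": 0.2,
}

naive_product = 1.0
for p in factors.values():
    naive_product *= p
print(f"naive product: {naive_product:.3f}")  # 0.002

# If "retains an atmosphere" is mostly determined by planetary mass, the
# two are not independent, and the honest estimate drops a factor:
corrected = 0.1 * 0.2
print(f"after merging dependent factors: {corrected:.3f}")  # 0.020
# Listing one underlying factor twice under different names makes the
# outcome look 10x rarer than the evidence supports.
```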
Others take a more conservative approach to avoid these problems, focusing mainly on distinct, well-understood, and widely accepted examples of fine-tuning. This is the approach taken here. While there are undoubtedly additional examples of fine-tuning, even this conservative approach provides more than enough cumulative evidence for design. After all, this evidence has motivated materialists to construct many universe scenarios to avoid the implications of fine-tuning.