
In 2006, over 250 new molecular machines were discovered.

  • Life Can’t Exist Without Repair Mechanisms, and That’s a Problem for Origin-of-Life Theories
  • A cell is often described as an extraordinary factory that can run autonomously and reproduce itself. The first cell required a lengthy list of components, layers of organization, and a large quantity of complex specified information, as described in previous episodes of Long Story Short. The latest entry in the series emphasizes yet another requirement for life: an abundance of specific repair mechanisms.
  1. Damage to the “factory” of the cell occurs on two levels: damage to the stored information (either during replication or by natural degradation over time) and damage to the manufacturing machinery (either from faulty production of new machinery or damage incurred during use). Each type of damage requires specific repair mechanisms demonstrating foresight — the expectation that damage will occur and the ability to recognize, repair, and/or recycle only those damaged components. All known life requires these mechanisms.
If ribosomes could not get “unstuck,” life as we know it would end. In self-replication, a single cell must produce an entire library of proteins, burdening the cell’s ribosomes. However, with a 2-4 percent rate of stuck mRNA strands, the average cell would have each of its ribosomes get stuck at least five times before the cell could replicate.9 Therefore, life could never replicate, and metabolism would cease, unless this problem was solved.
  1. Regularly operating enzymes and metabolites, such as co-enzymes and cofactors, participate in chemical reactions that follow specific paths. Deviations from the desired paths can occur from interferences like radiation, oxidative stress, or encountering the wrong “promiscuous” enzyme. These deviations result in rogue molecules that interfere with metabolism or are toxic to the cell. As a result, even the simplest forms of life require several metabolic repair mechanisms:
  1. He says that Stephen Meyer and his “high-quality colleagues” in the ID research community have exposed the problems Darwinists need to work on solving.
About Dr. Meyer, he says:
I encountered people like Stephen Meyer, who were not phony scientists, pretending to do the work. They were very good at what they did. And I believe Stephen Meyer is motivated by a religious motivation, but we rarely ask when somebody takes up science, “What are you really in it for? Are you in it for the fame?” That is not a legitimate challenge to somebody’s work. And the fact is, Stephen Meyer is very good at what he does. He may be motivated by the thought that at the end of the search, he is going to find Jesus. But in terms of the quality of his arguments, I was very impressed when I met him: his love for biology, his love for creatures, the weirder the better, he likes them. So that looked very familiar to me.
  • And it also became obvious to me in interacting with Stephen Meyer and many of his high-quality colleagues that they are motivated, for whatever reason, to do the job that we are supposed to be motivated to do inside of biology. They are seeking cracks in the theory. Things that we have not yet explained. And they are seeking those things for their own reasons, but the point is we are supposed to be understanding what parts of the stories we tell ourselves are not true, because that is how we get smarter over time.
Heying cites the 2019 public defection from Darwinism of Yale computer scientist David Gelernter, who pointed to Meyer’s writing as his primary reason for “Giving Up Darwin.” She admits she has not kept up with the challenges from ID but agrees that she should, and that is because challenges like those from ID can make the evolutionary establishment “smarter.” Ignoring the challenges makes the establishment dumber — stagnant and self-satisfied. In her words: “I’m unfamiliar with most of the intelligent design movement arguments. It has not felt like it was my obligation to be familiar with them. Perhaps what you are arguing is it is our responsibility.”
  • I’m open to that battle and I expect that if we pursue that question, what we are going to find is, oh, there is a layer of Darwinism we did not get, and it is going to turn out that the intelligent design folks are wrong. But they will have played a very noble and important role in the process of us getting smarter. And look, at the end of the day, I do not think Stephen Meyer is going to surrender to the idea that there is no God at the end of this process. But if we find a layer of Darwinism that has not been spotted, that answers his question, I think he is going to be delighted with it the same way he is delighted by the prospect of seeing whale sharks.
Long before modern technology, biology students compared the workings of life to machines.1 In recent decades, this comparison has become stronger than ever. As a paper in Nature Reviews Molecular Cell Biology states, “Today biology is revealing the importance of ‘molecular machines’ and of other highly organized molecular structures that carry out the complex physico-chemical processes on which life is based.”2 Likewise, a paper in Nature Methods observed that “[m]ost cellular functions are executed by protein complexes, acting like molecular machines.”
  1. Molecular machines have posed a stark challenge to those who seek to understand them in Darwinian terms as the products of an undirected process. In his 1996 book Darwin’s Black Box: The Biochemical Challenge to Evolution, biochemist Michael Behe explained the surprising discovery that life is based upon machines: Shortly after 1950, science advanced to where it could determine the shapes and properties of a few of the molecules that make up living organisms. Slowly, painstakingly, the structures of more biological molecules were elucidated, and the way they work was inferred from countless experiments. The cumulative results clearly show that life is based on machines made of molecules! Molecular machines haul cargo from one place in the cell to another along “highways” made of other molecules, while still others act as cables, ropes, and pulleys to hold the cell in shape. Machines turn cellular switches on and off, sometimes killing the cell or causing it to grow. Solar-powered machines capture the energy of photons and store it in chemicals. Electrical machines allow the current to flow through nerves. Manufacturing machines build other molecular machines, as well as themselves. Cells swim using machines, copy themselves with machinery, ingest food with machinery. In short, highly sophisticated molecular machines control every cellular process. Thus, the details of life are finely calibrated, and the machinery of life is enormously complex.7
Behe then posed the question, “Can all of life be fit into Darwin’s theory of evolution?,” and answered: “The complexity of life’s foundation has paralyzed science’s attempt to account for it; molecular machines raise an as-yet impenetrable barrier to Darwinism’s universal reach.”
The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. . . . Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.9
Likewise, in 2000, Marco Piccolino wrote in Nature Reviews Molecular Cell Biology that “extraordinary biological machines realize the dream of the seventeenth-century scientists … that ‘machines will be eventually found not only unknown to us but also unimaginable by our mind.’” He notes that modern biological machines “surpass the expectations of the early life scientists.” A few years later, a review article in the journal Biological Chemistry demonstrated the difficulty evolutionary scientists have in understanding molecular machines. Essentially, they must deny their scientific intuitions when grappling with the fact that biological structures appear to be engineered to the specifications of blueprints:
Molecular machines, although it may often seem so, are not made with a blueprint at hand. Yet, biochemists and molecular biologists (and numerous scientists of other disciplines) are used to thinking as an engineer, more precisely, a reverse engineer. But there are no blueprints … ‘Nothing in biology makes sense except in the light of evolution’: we know that Dobzhansky (1973) must be right. But our mind, despite being a product of tinkering itself, strangely wants us to think like engineers.
But do molecular machines make sense in the light of undirected Darwinian evolution? Does it make sense to deny that machines show all signs that they were designed? Michael Behe argues that molecular machines meet the test Darwin posed to falsify his theory and indicate intelligent design.
Darwin knew his theory of gradual evolution by natural selection carried a heavy burden: “If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” … What type of biological system could not be formed by “numerous successive slight modifications”? For starters, a system that is irreducibly complex. By irreducibly complex, I mean a single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning.
  1. I. MOLECULAR MACHINES THAT SCIENTISTS HAVE ARGUED SHOW IRREDUCIBLE COMPLEXITY
  1. The cilium is a hair-like or whip-like structure built upon a system of microtubules, typically with nine outer microtubule pairs and two inner microtubules. The microtubules are connected with nexin arms, and a paddling-like motion is instigated by dynein motors.18 These machines perform numerous functions in eukaryotes, such as allowing sperm to swim or removing foreign particles from the throat. Michael Behe observes that the “paddling” function of the cilium will fail if it is missing any microtubules or connecting arms or lacks sufficient dynein motors, making it irreducibly complex.
The blood coagulation system “is a typical example of a molecular machine, where the assembly of substrates, enzymes, protein cofactors and calcium ions on a phospholipid surface markedly accelerates the coagulation rate.”24 According to a paper in Blood, “the molecules interact with cell surface (molecules) and other proteins to assemble reaction complexes that can act as a molecular machine.”25 Michael Behe argues, based upon experimental data, that the blood clotting cascade has an irreducible core regarding its components after its initiation pathways converge.26

The ribosome is an “RNA machine”27 that “involves more than 300 proteins and RNAs”28 to form a complex where messenger RNA is translated into protein, thereby playing a crucial role in protein synthesis in the cell. Craig Venter, a leader in genomics and the Human Genome Project, has called the ribosome “a magnificent complex entity” that requires a “minimum for the ribosome about 53 proteins and 3 polynucleotides,” leading some evolutionary biologists to fear that it may be irreducibly complex.
  1. Gravitational force constant
  2. Electromagnetic force constant
  3. Strong nuclear force constant
  4. Weak nuclear force constant
  5. Cosmological constant
  1. Initial distribution of mass-energy
  2. The ratio of masses for protons and electrons
  3. Velocity of light
  4. Mass excess of neutron over proton
  1. Steady plate tectonics with the right kind of geological interior
  2. The right amount of water in the crust
  3. Large moon with a right rotation period
  4. Proper concentration of sulfur
  5. Right planetary mass
  6. Near inner edge of the circumstellar habitable zone
  7. Low-eccentricity orbit outside spin-orbit and giant planet resonances
  8. A few, large Jupiter-mass planetary neighbors in large circular orbits
  9. The outside spiral arm of the galaxy
  10. Near co-rotation circle of galaxy, in circular orbit around galactic center
  11. Within the galactic habitable zone
  12. During the cosmic habitable age
  1. The polarity of the water molecule

  1. Gravitational force constant (the large-scale attractive force that holds people on planets, and holds planets, stars, and galaxies together) — too weak, and planets and stars cannot form; too strong, and stars burn up too quickly.
  2. Electromagnetic force constant (the small-scale attractive and repulsive force that holds atoms, electrons, and atomic nuclei together) — If it were much stronger or weaker, we would not have stable chemical bonds.
  3. Strong nuclear force constant (the small-scale attractive force that holds nuclei of atoms together, which otherwise repel each other because of the electromagnetic force) — if it were weaker, the universe would have far fewer stable chemical elements, eliminating several essential to life.
  4. Weak nuclear force is constant (governs radioactive decay) — if it were much stronger or weaker, life-essential stars could not form. (These are the four “fundamental forces.”)
  1. Cosmological constant (which controls the universe’s expansion speed) refers to the balance of the attractive force of gravity with a hypothesized repulsive force of space observable only at enormous scales. It must be very close to zero; these two forces must be nearly perfectly balanced. The cosmological constant must be fine-tuned to something like 1 part in 10^120 to get the right balance. If it were just slightly more positive, the universe would fly apart; slightly negative, and the universe would collapse.
As with the cosmological constant, the ratios of the other constants must be fine-tuned relative to each other. Since the logical range of strengths of some forces is potentially infinite, to get a handle on the precision of fine-tuning, theorists often think in terms of the range of force strengths, with gravity the weakest and the strong nuclear force the strongest. The strong nuclear force is 10^40 times stronger than gravity: ten thousand billion billion billion billion times the strength of gravity. Think of that range as represented by a ruler stretching across the entire observable universe, about 15 billion light-years. If we increased the strength of gravity by just 1 part in 10^34 of the range of force strengths (the equivalent of moving less than one inch on the universe-long ruler), the universe could not have life-sustaining planets.
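The ruler analogy above can be checked with quick arithmetic. The sketch below is an illustration only, assuming the figures stated in the text (a ruler 15 billion light-years long and a shift of 1 part in 10^34); the constants used for unit conversion are standard values.

```python
# Sanity check of the "universe-long ruler" analogy:
# how far is 1 part in 10^34 along a ruler that spans
# 15 billion light-years, measured in inches?

METERS_PER_LIGHT_YEAR = 9.461e15   # standard conversion
METERS_PER_INCH = 0.0254           # standard conversion
RULER_LIGHT_YEARS = 15e9           # figure assumed from the text

ruler_inches = RULER_LIGHT_YEARS * METERS_PER_LIGHT_YEAR / METERS_PER_INCH
shift_inches = ruler_inches / 1e34  # 1 part in 10^34 of the ruler

print(f"ruler length:     {ruler_inches:.2e} inches")
print(f"1/10^34 of ruler: {shift_inches:.2e} inches")
```

The shift works out to roughly 6 × 10⁻⁷ inches, far less than the "one inch" the analogy allows, so the text's claim is, if anything, understated.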
  1. Initial Conditions. Besides physical constants, there are initial or boundary conditions, which describe the conditions present at the universe’s beginning. Initial conditions are independent of the physical constants. One way of summarizing the initial conditions is to speak of the extremely low-entropy (highly ordered) initial state of the universe. This refers to the initial distribution of mass-energy. In The Road to Reality, physicist Roger Penrose estimates that the odds of our universe’s initial low-entropy state occurring by chance alone are on the order of 1 in 10^(10^123). This ratio is vastly beyond our powers of comprehension. Since a life-bearing universe is intrinsically interesting, this ratio should be more than enough to raise the question: Why does such a universe exist? If someone is unmoved by this ratio, they probably will not be persuaded by additional examples of fine-tuning.
Besides initial conditions, there are some other well-known features about the universe that are apparently just brute facts. And these too exhibit a high degree of fine-tuning. Among the fine-tuned (apparently) “brute facts” of nature are the following:
  1. The ratio of masses for protons and electrons — If it were slightly different, building blocks for life, such as DNA, could not be formed.
  2. Velocity of light — If it were larger, stars would be too luminous. If it were smaller, stars would not be luminous enough.
  3. Mass excess of neutron over proton — if it were greater, there would be too few heavy elements for life. If it were smaller, stars would quickly collapse as neutron stars or black holes.
  1. Steady plate tectonics with the right kind of geological interior (which allows the carbon cycle and generates a protective magnetic field). If the Earth’s crust were significantly thicker, plate tectonic recycling could not occur.
  2. The right amount of water in the crust (which provides the universal solvent for life).
  3. Large moon with right planetary rotation period (which stabilizes a planet’s tilt and contributes to tides). With the Earth, the gravitational pull of its moon stabilizes the angle of its axis at a nearly constant 23.5 degrees. This ensures relatively temperate seasonal changes and the only climate in the solar system mild enough to sustain complex living organisms.
  4. Proper concentration of sulfur (which is necessary for critical biological processes).
  5. Right planetary mass (which allows a planet to retain the right type and right thickness of atmosphere). If the Earth were smaller, its magnetic field would weaken, allowing the solar wind to strip away our atmosphere, slowly transforming our planet into a dead, barren world like Mars.
  6. Near the inner edge of the circumstellar habitable zone (which allows a planet to maintain the right amount of water on the surface). If the Earth were just 5% closer to the Sun, it would be subject to the same fate as Venus, a runaway greenhouse effect, with temperatures rising to nearly 900 degrees Fahrenheit. Conversely, if the Earth were approximately 20% farther from the Sun, it would experience the runaway glaciation that has left Mars sterile.
  7. Low-eccentricity orbit outside spin-orbit and giant planet resonances (which allows a planet to maintain a safe orbit).
  8. A few large Jupiter-mass planetary neighbors in large circular orbits (which protect the habitable zone from too many comet bombardments). If the Earth were not protected by the gravitational pulls of Jupiter and Saturn, it would be far more susceptible to collisions with devastating comets that would cause mass extinctions. As it is, the larger planets in our solar system protect the Earth from the most dangerous comets.
  9. The outside spiral arm of the galaxy (which allows a planet to stay safely away from supernovae).
  10. Near the co-rotation circle of the galaxy, in a circular orbit around the galactic center (which enables a planet to avoid traversing dangerous parts of the galaxy).
  11. Within the galactic habitable zone (which allows a planet access to heavy elements while staying safely away from the dangerous galactic center).
  12. During the cosmic habitable age (when heavy elements and active stars exist without too high a concentration of dangerous radiation events).
  1. The polarity of the water molecule makes it uniquely suitable for life. If it were greater or smaller, its heat of fusion and vaporization would make it unsuitable. This polarity results from higher-level physical constants and various features of subatomic particles.
  2. WHAT ABOUT ALL THOSE OTHER PARAMETERS?
  3. One can take either a maximal or a minimal approach when discussing fine-tuned parameters. Those who take the maximal approach seek to create as long a list as possible. For instance, one popular Christian apologist listed thirty-four different parameters in one of his early books and maintained a growing list with ninety parameters. He also attaches exact probabilities to various “local” factors.
While a long (and growing) list sporting exact probabilities has rhetorical force, it also has a serious downside: many of the parameters in these lists are probably derived from other, more fundamental parameters, so they are not independent. For instance, the rate of supernova explosions may be a function of some basic laws of nature and not a separate instance of fine-tuning. If you multiply the various parameters together to get a low probability, you want to ensure you are not “double booking” — listing the same factor twice under different descriptions. Otherwise, the resulting probability will be inaccurate. Moreover, in numerous cases, we do not know the exact probabilities. Others take a more conservative approach to avoid these problems, focusing mainly on distinct, well-understood, and widely accepted examples of fine-tuning. This is the approach taken here. While there are undoubtedly additional examples of fine-tuning, even this conservative approach provides more than enough cumulative evidence for design. After all, this evidence has motivated materialists to construct many-universe scenarios to avoid the implications of fine-tuning.
Vern Bender
AUTHOR AND HISTORIAN — RETURNING CHRISTIANITY TO WHAT IT ORIGINALLY WAS