Vern Bender

In 2006, over 250 new molecular machines were discovered.

  • Video: Life Can’t Exist Without Repair Mechanisms, and That’s a Problem for Origin-of-Life Theories
  • A cell is often described as a factory — a quite extraordinary factory that can run autonomously and reproduce itself. The first cell required a lengthy list of components, layers of organization, and a large quantity of complex specified information, as described by previous episodes of Long Story Short. The latest entry in the series emphasizes yet another requirement for life: an abundance of specific repair mechanisms.
  1. Damage to the “factory” of the cell occurs on two levels: damage to the stored information (either during replication or by natural degradation over time) and damage to the manufacturing machinery (either from faulty production of new machinery or damage incurred during use). Each type of damage requires specific repair mechanisms that demonstrate foresight — the expectation that damage will occur and the ability to recognize, repair and/or recycle only those components that are damaged. All known life requires these mechanisms.
  2. If the ribosomes could not get “unstuck,” life as we know it would end. In the process of self-replication, a single cell must produce an entire library of proteins, placing a heavy burden on the cell’s ribosomes. But with a 2-4 percent rate of stuck mRNA strands, the average cell would have each of its ribosomes get stuck at least five times before the cell could replicate.9 Therefore, life could never replicate and metabolism would cease unless this problem was solved.
  3. The normal operation of enzymes or metabolites like co-enzymes or cofactors involves chemical reactions that follow specific paths. Deviations from the desired paths can occur from interferences like radiation, oxidative stress, or encountering the wrong “promiscuous” enzyme. These deviations result in rogue molecules that interfere with metabolism or are toxic to the cell. As a result, even the simplest forms of life require several metabolic repair mechanisms.
Engaging with Stephen Meyer and his “high-quality colleagues” in the ID research community, he argues, has exposed the problems that Darwinists need to be working on to solve.
About Dr. Meyer, he says:
I encountered people like Stephen Meyer, who were not phony scientists, pretending to do the work. They were actually very good at what they did. And I believe Stephen Meyer is motivated by a religious motivation, but we rarely ask the question when somebody takes up science, “What are you really in it for? Are you in it for the fame?” That’s not a legitimate challenge to somebody’s work. And the fact is, Stephen Meyer is very good at what he does. He may be motivated by the thought that at the end of the search, he’s going to find Jesus. But in terms of the quality of his arguments, I was very impressed when I met him: his love for biology, his love for creatures, the weirder the better, he likes them, right? So that looked very familiar to me.
  • And it also became obvious to me in interacting with Stephen Meyer and many of his high-quality colleagues that they’re actually motivated, for whatever reason, to do the job that we are supposed to be motivated to do inside of biology. They’re looking for cracks in the theory. Things that we haven’t yet explained. And they’re looking for those things for their own reasons, but the point is we’re supposed to be figuring out what parts of the stories we tell ourselves aren’t true, because that’s how we get smarter over time.
Heying cites the 2019 public defection from Darwinism of Yale computer scientist David Gelernter, who pointed to Meyer’s writing as his primary reason for “Giving Up Darwin.” She admits she hasn’t kept up with the challenges from ID, but agrees that she should, because challenges like those from ID can make the evolutionary establishment “smarter.” Ignoring the challenges makes the establishment dumber — stagnant and self-satisfied.

I’m not familiar with most of the arguments that are coming out of the intelligent design movement. It hasn’t felt like it was my obligation to be familiar with them. Perhaps what you’re arguing is that it is our responsibility.
  • I’m open to that battle and I expect that if we pursue that question, what we’re going to find is, oh, there’s a layer of Darwinism we didn’t get and it’s going to turn out that the intelligent design folks are going to be wrong. But they will have played a very noble and important role in the process of us getting smarter. And look, I think Stephen Meyer at the end of the day, I don’t think he’s going to surrender to the idea that there’s no God at the end of this process. But if we find a layer of Darwinism that hasn’t been spotted, that answers his question, I think he’s going to be delighted with it the same way he’s delighted by the prospect of seeing whale sharks.
Long before modern technology, students of biology compared the workings of life to machines.1 In recent decades, this comparison has become stronger than ever. As a paper in Nature Reviews Molecular Cell Biology states, “Today biology is revealing the importance of ‘molecular machines’ and of other highly organized molecular structures that carry out the complex physicochemical processes on which life is based.”2 Likewise, a paper in Nature Methods observed that “[m]ost cellular functions are executed by protein complexes, acting like molecular machines.”

Molecular machines have posed a stark challenge to those who seek to understand them in Darwinian terms as the products of an undirected process. In his 1996 book Darwin’s Black Box: The Biochemical Challenge to Evolution, biochemist Michael Behe explained the surprising discovery that life is based upon machines:

Shortly after 1950 science advanced to the point where it could determine the shapes and properties of a few of the molecules that make up living organisms. Slowly, painstakingly, the structures of more and more biological molecules were elucidated, and the way they work inferred from countless experiments. The cumulative results show with piercing clarity that life is based on machines — machines made of molecules! Molecular machines haul cargo from one place in the cell to another along “highways” made of other molecules, while still others act as cables, ropes, and pulleys to hold the cell in shape. Machines turn cellular switches on and off, sometimes killing the cell or causing it to grow. Solar-powered machines capture the energy of photons and store it in chemicals. Electrical machines allow current to flow through nerves. Manufacturing machines build other molecular machines, as well as themselves. Cells swim using machines, copy themselves with machinery, ingest food with machinery. In short, highly sophisticated molecular machines control every cellular process.
Thus, the details of life are finely calibrated and the machinery of life is enormously complex.7 Behe then posed the question, “Can all of life be fit into Darwin’s theory of evolution?,” and answered: “The complexity of life’s foundation has paralyzed science’s attempt to account for it; molecular machines raise an as-yet impenetrable barrier to Darwinism’s universal reach.”

Even those who disagree with Behe’s answer to that question have marveled at the complexity of molecular machines. In 1998, former president of the U.S. National Academy of Sciences Bruce Alberts wrote the introductory article to an issue of Cell, one of the world’s top biology journals, celebrating molecular machines. Alberts praised the “speed,” “elegance,” “sophistication,” and “highly organized activity” of “remarkable” and “marvelous” structures inside the cell. He went on to explain what inspired such words:
The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. . . . Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.9
Likewise, in 2000 Marco Piccolino wrote in Nature Reviews Molecular Cell Biology that “extraordinary biological machines realize the dream of the seventeenth-century scientists … that ‘machines will be eventually found not only unknown to us but also unimaginable by our mind.’” He notes that modern biological machines “surpass the expectations of the early life scientists.”

A few years later, a review article in the journal Biological Chemistry demonstrated the difficulty evolutionary scientists have faced when trying to understand molecular machines. Essentially, they must deny their scientific intuitions when grappling with the fact that biological structures appear engineered, as if built from blueprints:
Molecular machines, although it may often seem so, are not made with a blueprint at hand. Yet, biochemists and molecular biologists (and many scientists of other disciplines) are used to thinking as an engineer, more precisely, a reverse engineer. But there are no blueprints … ‘Nothing in biology makes sense except in the light of evolution’: we know that Dobzhansky (1973) must be right. But our mind, despite being a product of tinkering itself, strangely wants us to think like engineers.
But do molecular machines make sense in the light of undirected Darwinian evolution? Does it make sense to deny the fact that machines show all signs that they were designed? Michael Behe argues that, in fact, molecular machines meet the very test that Darwin posed to falsify his theory, and indicate intelligent design.
Darwin knew his theory of gradual evolution by natural selection carried a heavy burden: “If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” … What type of biological system could not be formed by “numerous successive slight modifications”? Well, for starters, a system that is irreducibly complex. By irreducibly complex I mean a single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any of the parts causes the system to effectively cease functioning.
I. MOLECULAR MACHINES THAT SCIENTISTS HAVE ARGUED SHOW IRREDUCIBLE COMPLEXITY
  1. The cilium is a hair-like or whip-like structure built upon a system of microtubules, typically with nine outer microtubule pairs and two inner microtubules. The microtubules are connected by nexin arms, and a paddling-like motion is instigated by dynein motors.18 These machines perform many functions in eukaryotes, such as allowing sperm to swim or removing foreign particles from the throat. Michael Behe observes that the “paddling” function of the cilium will fail if it is missing any microtubules or connecting arms, or lacks sufficient dynein motors, making it irreducibly complex.
The blood coagulation system “is a typical example of a molecular machine, where the assembly of substrates, enzymes, protein cofactors and calcium ions on a phospholipid surface markedly accelerates the rate of coagulation.”24 According to a paper in BioEssays, “the molecules interact with cell surface (molecules) and other proteins to assemble reaction complexes that can act as a molecular machine.”25 Michael Behe argues, based upon experimental data, that the blood clotting cascade has an irreducible core regarding its components after its initiation pathways converge.26

The ribosome is an “RNA machine”27 that “involves more than 300 proteins and RNAs”28 to form a complex where messenger RNA is translated into protein, thereby playing a crucial role in protein synthesis in the cell. Craig Venter, a leader in genomics and the Human Genome Project, has called the ribosome “an incredibly beautiful complex entity” which requires a “minimum for the ribosome about 53 proteins and 3 polynucleotides,” leading some evolutionary biologists to fear that it may be irreducibly complex.
FINE-TUNED PHYSICAL CONSTANTS

  1. Gravitational force constant
  2. Electromagnetic force constant
  3. Strong nuclear force constant
  4. Weak nuclear force constant
  5. Cosmological constant

INITIAL CONDITIONS AND “BRUTE FACTS”

  1. Initial distribution of mass energy
  2. Ratio of masses for protons and electrons
  3. Velocity of light
  4. Mass excess of neutron over proton

“LOCAL” PLANETARY CONDITIONS

  1. Steady plate tectonics with right kind of geological interior
  2. Right amount of water in crust
  3. Large moon with right rotation period
  4. Proper concentration of sulfur
  5. Right planetary mass
  6. Near inner edge of circumstellar habitable zone
  7. Low-eccentricity orbit outside spin-orbit and giant planet resonances
  8. A few large Jupiter-mass planetary neighbors in large circular orbits
  9. Outside spiral arm of galaxy
  10. Near co-rotation circle of galaxy, in circular orbit around galactic center
  11. Within the galactic habitable zone
  12. During the cosmic habitable age
  1. The polarity of the water molecule

  1. Gravitational force constant (large scale attractive force, holds people on planets, and holds planets, stars, and galaxies together) — too weak, and planets and stars cannot form; too strong, and stars burn up too quickly.
  2. Electromagnetic force constant (small scale attractive and repulsive force, holds atoms’ electrons and atomic nuclei together) — if it were much stronger or weaker, we wouldn’t have stable chemical bonds.
  3. Strong nuclear force constant (small-scale attractive force, holds nuclei of atoms together, which otherwise repulse each other because of the electromagnetic force) — if it were weaker, the universe would have far fewer stable chemical elements, eliminating several that are essential to life.
  4. Weak nuclear force constant (governs radioactive decay) — if it were much stronger or weaker, life-essential stars could not form. (These are the four “fundamental forces.”)
  5. Cosmological constant (which controls the expansion speed of the universe) refers to the balance of the attractive force of gravity with a hypothesized repulsive force of space observable only at very large size scales. It must be very close to zero; that is, these two forces must be nearly perfectly balanced. To get the right balance, the cosmological constant must be fine-tuned to something like 1 part in 10^120. If it were just slightly more positive, the universe would fly apart; slightly negative, and the universe would collapse.
As with the cosmological constant, the ratios of the other constants must be fine-tuned relative to each other. Since the logically possible range of strengths of some forces is potentially infinite, to get a handle on the precision of fine-tuning, theorists often think in terms of the range of force strengths, with gravity the weakest and the strong nuclear force the strongest. The strong nuclear force is 10^40 times stronger than gravity, that is, ten thousand billion billion billion billion times the strength of gravity. Think of that range as represented by a ruler stretching across the entire observable universe, about 15 billion light years. If we increased the strength of gravity by just 1 part in 10^34 of the range of force strengths (the equivalent of moving less than one inch on the universe-long ruler), the universe couldn’t have life-sustaining planets.
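The ruler analogy is simple arithmetic and can be checked directly. A minimal sketch in Python (the 15-billion-light-year figure and the 1-in-10^34 shift come from the text above; the unit conversions are standard):

```python
# Check the "universe-long ruler" analogy: a ruler spanning the
# observable universe (~15 billion light-years), with gravity shifted
# by 1 part in 10^34 of the range of force strengths.
LY_IN_METERS = 9.4607e15      # meters in one light-year
INCHES_PER_METER = 39.3701

ruler_inches = 15e9 * LY_IN_METERS * INCHES_PER_METER
shift_inches = ruler_inches / 1e34   # 1 part in 10^34 of the ruler

print(f"ruler length: {ruler_inches:.2e} inches")
print(f"shift:        {shift_inches:.2e} inches")
# The shift comes out around half a millionth of an inch, comfortably
# under the "less than one inch" the text claims.
```

So the analogy understates the case: one part in 10^34 of the ruler is far smaller than an inch.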
  1. Initial Conditions. Besides physical constants, there are initial or boundary conditions, which describe the conditions present at the beginning of the universe. Initial conditions are independent of the physical constants. One way of summarizing the initial conditions is to speak of the extremely low entropy (that is, highly ordered) initial state of the universe. This refers to the initial distribution of mass energy. In The Road to Reality, physicist Roger Penrose estimates that the odds of the initial low entropy state of our universe occurring by chance alone are on the order of 1 in 10^(10^123). This ratio is vastly beyond our powers of comprehension. Since a life-bearing universe is intrinsically interesting, this ratio should be more than enough to raise the question: Why does such a universe exist? If someone is unmoved by this ratio, then they probably won’t be persuaded by additional examples of fine-tuning.
In addition to initial conditions, there are a number of other, well- known features about the universe that are apparently just brute facts. And these too exhibit a high degree of fine-tuning. Among the fine-tuned (apparently) “brute facts” of nature are the following:
  1. Ratio of masses for protons and electrons — If it were slightly different, building blocks for life such as DNA could not be formed.
  2. Velocity of light — If it were larger, stars would be too luminous. If it were smaller, stars would not be luminous enough.
  3. Mass excess of neutron over proton — if it were greater, there would be too few heavy elements for life. If it were smaller, stars would quickly collapse as neutron stars or black holes.
  1. Steady plate tectonics with right kind of geological interior (which allows the carbon cycle and generates a protective magnetic field). If the Earth’s crust were significantly thicker, plate tectonic recycling could not take place.
  2. Right amount of water in crust (which provides the universal solvent for life).
  3. Large moon with right planetary rotation period (which stabilizes a planet’s tilt and contributes to tides). In the case of the Earth, the gravitational pull of its moon stabilizes the angle of its axis at a nearly constant 23.5 degrees. This ensures relatively temperate seasonal changes, and the only climate in the solar system mild enough to sustain complex living organisms.
  4. Proper concentration of sulfur (which is necessary for important biological processes).
  5. Right planetary mass (which allows a planet to retain the right type and right thickness of atmosphere). If the Earth were smaller, its magnetic field would be weaker, allowing the solar wind to strip away our atmosphere, slowly transforming our planet into a dead, barren world much like Mars.
  6. Near inner edge of circumstellar habitable zone (which allows a planet to maintain the right amount of liquid water on the surface). If the Earth were just 5% closer to the Sun, it would be subject to the same fate as Venus, a runaway greenhouse effect, with temperatures rising to nearly 900 degrees Fahrenheit. Conversely, if the Earth were about 20% farther from the Sun, it would experience runaway glaciation of the kind that has left Mars sterile.
  7. Low-eccentricity orbit outside spin-orbit and giant planet resonances (which allows a planet to maintain a safe orbit).
  8. A few large Jupiter-mass planetary neighbors in large circular orbits (which protect the habitable zone from too many comet bombardments). If the Earth were not protected by the gravitational pulls of Jupiter and Saturn, it would be far more susceptible to collisions with devastating comets that would cause mass extinctions. As it is, the larger planets in our solar system provide significant protection to the Earth from the most dangerous comets.
  9. Outside spiral arm of galaxy (which allows a planet to stay safely away from supernovae).
  10. Near co-rotation circle of galaxy, in circular orbit around galactic center (which enables a planet to avoid traversing dangerous parts of the galaxy).
  11. Within the galactic habitable zone (which allows a planet to have access to heavy elements while being safely away from the dangerous galactic center).
  12. During the cosmic habitable age (when heavy elements and active stars exist without too high a concentration of dangerous radiation events).
  1. The polarity of the water molecule makes it uniquely fit for life. If it were greater or smaller, water’s heats of fusion and vaporization would make it unfit for life. This is the result of higher-level physical constants, and also of various features of subatomic particles.
WHAT ABOUT ALL THOSE OTHER PARAMETERS?

In discussing fine-tuned parameters, one can take either a maximal or a minimal approach. Those who take the maximal approach seek to create as long a list as possible. For instance, one popular Christian apologist listed thirty-four different parameters in one of his early books, and maintains a growing list, which currently has ninety parameters. He also attaches exact probabilities to various “local” factors.
While a long (and growing) list sporting exact probabilities has rhetorical force, it also has a serious downside: many of the parameters in these lists are probably derived from other, more fundamental parameters, so they’re not really independent. The rate of supernova explosions, for instance, may simply be a function of some basic laws of nature, and not be a separate instance of fine-tuning. If you’re going to legitimately multiply the various parameters to get a low probability, you want to make sure you’re not “double booking,” that is, listing the same factor twice under different descriptions. Otherwise, the resulting probability will be inaccurate. Moreover, in many cases, we simply don’t know the exact probabilities.

To avoid these problems, others take a more conservative approach and focus mainly on distinct, well-understood, and widely accepted examples of fine-tuning. This is the approach taken here. While there are certainly additional examples of fine-tuning, even this conservative approach provides more than enough cumulative evidence for design. After all, it is this evidence that has motivated materialists to construct many-universe scenarios to avoid the implications of fine-tuning.
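The “double booking” caveat can be made concrete with a toy calculation. The probabilities below are hypothetical placeholders, not estimates from the literature; the point is only that multiplying is legitimate just for independent parameters, and that re-listing a derived parameter under a new name wrongly shrinks the product:

```python
from math import prod

# Hypothetical, made-up fine-tuning probabilities for illustration only.
independent = {
    "constant_a": 1e-3,
    "constant_b": 1e-5,
    "constant_c": 1e-4,
}

# Multiplying is only legitimate if every entry is truly independent.
joint = prod(independent.values())
print(f"joint probability (independent parameters): {joint:.1e}")

# "Double booking": listing a derived parameter again under a new name
# (e.g., a supernova rate that is really a function of constant_b)
# deflates the estimate by an extra, unearned factor.
double_booked = joint * independent["constant_b"]
print(f"with a double-booked parameter:             {double_booked:.1e}")
```

The second figure is a hundred thousand times smaller than the first, yet no new information was added; that is exactly the error the conservative approach is designed to avoid.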