Robert Shapiro is a professor of chemistry at New York University and an expert on DNA research. In his elegant and thoughtful book Origins he writes:
“I decided to revisit one of the first origin-of-life displays that I had seen. The American Museum of Natural History in New York has carried an exhibition on this topic for the past twenty years. Within the cases was a diagram of the Miller-Urey apparatus, an account of the prebiotic soup, and literature references for further reading. I remember the appearance of this display, fresh, bright, and provocative, shortly after its opening in the early 1960s. It occupied the same site decades later. The cases were filled with dust, however, and the lighting was now so dim that the words could barely be made out. The sad fate of this display in a way represents the condition of the field itself.”
Robert Shapiro has no interest in religion and was hopeful that reasonable chemical experiments designed to discover a probable origin for life would succeed. The fact that they have failed has made life difficult for the prebiotic chemist.
The prebiotic chemist operates under self-imposed constraints. He is attempting to simulate reactions that may have occurred on the early earth in order to find a plausible series of steps that could have led to the origin of life. A true prebiotic chemist, as much as possible, has to simulate the random environment of the early earth, with its many different atoms repelling each other at very high speeds. The outer electron shells of atoms carry negative charges, which causes the atoms to repel one another.
Prebiotic simulations of a cell would have to factor in the countless trillions of incorrect, chance sequences of atoms, the trillions of incorrect, chance bonds between atoms, the trillions of failed, chance folding patterns between atoms, and the trillions of failed attempts to obtain function with atoms.
All prebiotic chemists admit that a complete cell with its trillion atoms is beyond chance. So they start small with the various parts of the cell – the proteins, RNA, and DNA molecules. Each of these subunits is unique and contains a vast number of atoms that must act with exquisite perfection to perform their combined catalytic, regulatory, and hereditary functions in the cell.
One small protein with a sequence of 200 amino acids and 4000 atoms is beyond chance. Chance experiments would have to sift through trillions of failed sequences, bonding patterns, and folding patterns for this one small protein. Then chance must throw the dice (atoms) to obtain the other 100,000 unique proteins that are needed to do the work within a cell in order to keep it alive.
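The arithmetic behind this claim can be sketched. A minimal calculation, assuming the standard alphabet of twenty amino acids and the 200-residue length given above:

```python
# Number of distinct 200-residue chains built from the 20 standard
# amino acids: 20 multiplied by itself 200 times.
sequences = 20 ** 200

# Print the order of magnitude rather than the 261-digit number itself.
print(f"about 10^{len(str(sequences)) - 1} possible sequences")
```

For scale, even "trillions" of failed candidates is only 10^12, a vanishingly small sample of a space this size.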
The watchmaker can select the conditions that serve his purpose and can use his intelligence to narrow choices. He can think ahead and keep account of the arrangement of parts that work and dismiss arrangements that don’t work. The watchmaker has an idea of the finished product and works toward a goal.
The random atoms in the primeval earth could not think ahead and were at the mercy of pure dumb luck. They have no notion of the finished product. Therefore, they cannot pick and save partial sequences that might lead to the functional, final product. If some kind of miracle of chance gets the first six amino acids to line up in the correct sequence it is not fit to function until all two hundred amino acids are sequenced, bonded, and folded correctly. Because there are twenty unique amino acids that can be used to build a protein, the odds are very great that the seventh amino acid needed to continue the sequence will not be found. The partial sequence has no protection and would find itself awash in the seas of the pre-biotic earth; its fate would be unkind. It would perish without further issue. For in this random sea, it would encounter only hosts of unrelated chemicals, and not the subunits it needs to reproduce itself.
The rules of evolution are very strict as Darwinists so often tell us. These rules work against random chance. One of the rules often mentioned by Darwinists is that evolution is blind and cannot anticipate future results. Another rule of evolution says that an organism must be functionally fit to survive. A partial protein has zero function and would perish according to the harsh rules of evolution.
Because Darwinists working with laboratory experiments have not come close in their attempt to find the reactions that might have led to life, they have championed computer programming as a way forward. After all, laboratory experiments are very difficult and very time consuming. Working in three-dimensional space with the three-dimensional atoms that might have been around during prebiotic times has gotten them only failure. Switching to the two-dimensional printouts of the computer is a lot less messy and leads to nice, neat answers that show up on the computer screen. Of course, two-dimensional computer printouts might lead to problems, since proteins and the other molecules of life are incredibly complex three-dimensional objects with layer upon layer of amino acids.
Furthermore, if there are thousands of unique proteins in a cell, each one has a different three-dimensional arrangement which is vital to its lock-and-key fit with the molecule it’s catalyzing. Darwinists have put this fact aside since the public reading their books might be impressed with the two-dimensional technology of the computer. At first I thought these many “science” authors were joking. No, they were serious. “Comic authority” is being imposed on innocent readers in the name of science as computers demonstrate “natural selection”.
David Berlinski received his Ph.D. from Princeton University and has taught mathematics at a number of universities in America, France, and Austria. He has written many books and papers on systems analysis, logic, and mathematics. Professor Berlinski wrote, “It is Richard Dawkins’ grand intention in his book, The Blind Watchmaker, to demonstrate, as one reviewer enthusiastically remarked, how natural selection allows biologists to dispense with such notions as purpose and design”.
This is done with the blind stabs of a monkey at a typewriter correctly typing a chosen Shakespearean target sentence: “Methinks it is like a weasel”. Naturally a computer is needed to eliminate the countless wrong “mutations” and save only the “mutations” that approach the target sentence. Dawkins’ target is a six-word sentence containing twenty-eight characters (including the spaces). If there are twenty-six keys, the chance of getting “M” as the first letter is one in twenty-six. The odds against success rise to 1/26 x 1/26, or one in 676, for getting the first two letters “Me” in the correct order, and 1/26 x 1/26 x 1/26, or one in 17,576, to get “Met”. The improbability explodes exponentially with each punch of the typewriter. The target occupies an isolated point in a space of 10,000 million, million, million, million, million, million possibilities. Getting the six numbers in the national lottery on a weekly basis might seem easy compared to the combinatorial inflation leading to Dawkins’ target.
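The figures above can be checked directly. A small sketch: the per-letter odds use the text's twenty-six keys, while the full-phrase total assumes a twenty-seventh key for the space bar, since the target phrase includes spaces.

```python
# Odds of blindly typing the opening letters in order.
assert 26 ** 2 == 676        # "Me": one chance in 676
assert 26 ** 3 == 17_576     # "Met": one chance in 17,576

# The whole 28-character target, with 27 possible keys per stroke
# (26 letters plus the space bar).
total = 27 ** 28
print(f"one chance in about 10^{len(str(total)) - 1}")
```

The printed exponent, 10^40, matches the text's "10,000 million, million, million, million, million, million" (10^4 followed by six factors of 10^6).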
How does Dawkins get around this? Dawkins assumes the position of the “Head Monkey” or the computer which is “programmed” to survey what the monkey has typed in order to choose the result “which however slightly most resembles the target phrase”. The process under way is one in which stray successes are spotted and then saved. Successes are conserved and then conserved again. The estimable “Head Monkey” or computer program conserves certain alphabetical changes because he knows where the experiment is going. This is forbidden knowledge; the Darwinian Mechanism is blind, a point stressed by Darwinian theorists themselves.
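The procedure described here (breed random mutations, let a program that already knows the target keep the closest copy) can be sketched in a few lines. This is a minimal, hypothetical reconstruction of the simulation; the mutation rate and litter size are assumptions, not Dawkins' published values.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus the space

def score(s):
    # Number of positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Copy the string, changing each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def weasel(seed=0):
    random.seed(seed)
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while current != TARGET:
        generation += 1
        # Breed 100 mutated copies and keep the one closest to the target.
        # This comparison against the target is exactly the "Head Monkey's"
        # forbidden knowledge the essay objects to.
        current = max((mutate(current) for _ in range(100)), key=score)
    return current, generation

phrase, gens = weasel()
print(phrase, "reached in", gens, "generations")
```

Note that `score` consults `TARGET` on every generation; delete that one comparison and the program degenerates into the blind monkey of the previous paragraph.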
What Dawkins shows is design. The computer programmer selects the target; the program then looks to the finished product and compares distances. Saving key letters and putting them aside is the kind of “inside information” that could lead to the arrest of a stock trader who knows the results of a transaction ahead of time. The computer program is appealing to information that a biological system cannot possess. Dawkins echoes the thoughts of other Darwinists when he writes, “The universe we observe has precisely the properties we should expect if there is at bottom no design, no purpose, no evil and no good, nothing but pointless indifference”. Of course, Dawkins shows the opposite of “pointless indifference” when he creates his computer program with a specific target in mind.
It was not Dawkins’ purpose to show that intelligent design is the only method that works for the design of a cell. However, he managed to do just that when he became the omniscient originator of his complex computer programs.
When one wakes up in the morning one enters a field of design. Design that is very complex and very specified. Your bed is designed and so is your toothbrush, toothpaste, faucet, light switch, sink, shower, and even the house you live in. Not even a thimble would exist without design.
A cell has all the hallmarks of design: awesome complexity and specificity, along with mesmerizing functional integration. The functions of 100,000 proteins, 200,000 RNA molecules, 20,000 ribosomes, 30,000 genes, and many other components are all exactly coordinated into one living cell. We know of nothing else in the universe that even comes close to this unfathomable sophistication, except for the gathering of cells into organs and whole body plans.
“A cell is more complex than a galaxy, if the galaxy has no life in it.” This was written by Marcel-Paul Schutzenberger, an eminent French mathematician, who also says, “randomness is the enemy of order”. He also writes, “the cell’s cascading interactions, with feedback loops, express an organizational complexity we do not know how to analyze”.
Whenever the biologist looks in a cell, there is specified complexity beyond specified complexity. It is here that the door of doubt begins to swing. Chance and complexity are countervailing forces; they work at cross-purposes.
Oxford University has given Professor Richard Dawkins its trust by making him the holder of the newly endowed Charles Simonyi Chair of Public Understanding of Science. How does he repay this trust? By writing that anyone who denies evolution is either “ignorant, stupid or insane” and by saturating his books with designed computer programs that “dispense with the notion of purpose and design” (Dawkins’ words). Of course, most scientists know that he is using purpose and design to form his programs. Oxford and Dawkins should be embarrassed by such a lack of logic. Mathematician David Berlinski says that if computer simulations demonstrate anything, they subtly demonstrate the need for an intelligent agent to elect some options and exclude others.
Professor of Biochemistry Michael J. Behe writes: “The fact that a distinguished scientist (Dawkins) overlooks simple logical problems that are easily seen by a chemist suggests that a sabbatical visit to a biochemistry laboratory might be in order”. Dawkins’ visit to the laboratory would demonstrate that simulating reactions that may have occurred on the early earth, with its many different atoms opposing each other at high speed, is demanding work. Origin-of-life experiments don’t proceed very far until intelligence is factored in.
Am I being a little bit too hard on Professor Dawkins? Maybe. However, he is the leader of the pack in the rush to use computer simulations to demonstrate evolution. Many have followed his lead. However, science is based on observable evidence. Origin of life experiments have failed in the three-dimensional space of the prebiotic laboratory and are much more pathetic in the created two-dimensional space of the computer. Using created programs to disprove creation would be comical, if these simulations were not imposed on innocent readers.
Molecular biologist Michael Denton writes:
“It is the sheer universality of perfection, the fact that everywhere we look, to whatever depth we look, we find an elegance and ingenuity of an absolutely transcending quality, which so mitigates against the idea of chance. In practically every field of fundamental biological research ever-increasing levels of design and complexity are being revealed at an ever-accelerating rate. The inference to design is a purely scientific induction based on a ruthlessly consistent application of the logic of analogy.”
Lee Kleinschmidt 2013