
KEY ARTICLES

Refuting Evolution 2 - Argument: Common design points to common ancestry
Genetics and creation: demographic events
Evolutionists abandon the idea of 99% DNA similarity between humans and chimps
DNA: marvellous messages or mostly mess?

DO GENE DUPLICATION AND POLYPLOIDY PROVIDE A MECHANISM FOR EVOLUTION?

Do new functions arise by gene duplication?
Does gene duplication provide the engine for evolution?
Dawkins and the origin of genetic information

WHAT ABOUT JUNK DNA?

Junk DNA: evolutionary discards or God's tools?
The slow, painful death of junk DNA
No joy for junkies
Large scale function for endogenous retroviruses
Hox (homeobox) Genes: Evolution's Saviour?
Hox Hype

HOW DOES GENETICS POINT TO DESIGN?

Cell systems: what's really under the hood continues to drop jaws
Meta-information
Splicing and dicing the human genome
Genetics: no friend of evolution
Astonishing DNA complexity uncovered
Astonishing DNA complexity update
Evidence for the design of life: part 1, Genetic redundancy
The design of life: part 3, an introduction to variation-inducing genetic elements
The design of life: part 4, variation-inducing genetic elements and their function

INFORMATION THEORY

Refuting Evolution, Chapter 9: Is the design explanation legitimate?
Scientific laws of information and their implications: part 1
Implications of the scientific laws of information: part 2
Variation, information and the created kind

ARE THERE EVOLUTIONARY PROCESSES THAT LEAD TO INFORMATION INCREASE?

Bears across the world
Was Dawkins Stumped?
The adaptation of bacteria to feeding on nylon waste
New plant colours: is this new information?
Is antibiotic resistance really due to increase in information?

WHAT IS THE DIFFERENCE BETWEEN ORDER AND COMPLEXITY?

The treasures of the snow

HOW DOES INFORMATION THEORY SUPPORT CREATION?

Information, science and biology
The marvellous message molecule
More or less information? / Has a recent experiment proved creation?
Life's irreducible structure, Part 1: autopoiesis

DNA INFORMATION

Information Theory, part 1: overview of key ideas
Information Theory, part 2: weaknesses in current conceptual frameworks
Information Theory, part 3: introduction to Coded Information Systems
Information Theory, part 4: fundamental theorems of Coded Information Systems Theory
Genetic code optimisation: Part 1
Evidence for the design of life: part 1, Genetic redundancy
Evidence for the design of life: part 2, Baranomes
The design of life: part 3, an introduction to variation-inducing genetic elements
The design of life: part 4, variation-inducing genetic elements and their function
And then there was life
Cell systems: what's really under the hood continues to drop jaws
Transposon amplification in rapid intrabaraminic diversification
Myriad mechanisms of gene regulation
More marvellous machinery: DNA scrunching
The genetic puppeteer

MUTATIONS

Can mutations create new information?
Refuting Evolution 2 - Argument: Some mutations are beneficial
Beetle bloopers
The evolution train's a-comin'
Ancon sheep: just another loss mutation
Bacteria evolving in the lab?
A cat with four ears (not nine lives)
Sickle-cell anemia does not prove evolution!
Evolution of a new master race?
The 'werewolf' gene
Evolution in a Petri dish?
Breathtaking new frog surprise

CAN MUTATION BE THE MECHANISM FOR EVOLUTION?

Hox (homeobox) Genes: Evolution's Saviour?
Gain-of-function mutations: at a loss to explain molecules-to-man evolution
Are gain of function mutations really downhill and so not supportive of evolution?

ARE MUTATIONS EVER BENEFICIAL?

CCR5-delta32: a very beneficial mutation
The mutant feather-duster budgie
Lost World of Mutants discovered
New eyes for blind cave fish?
Christopher Hitchens: blind to salamander reality
Can't drink milk? You're normal!
At last, a good mutation?
A-I Milano mutation: evidence for evolution?
Special tools of life
Mutations: evolution's engine becomes evolution's end!
Meiotic recombination: designed for inducing genomic change
Teenage mutant ninja people
Critic ignores reality of Genetic Entropy
Genetic entropy and simple organisms
The diminishing returns of beneficial mutations
Pesticide resistance is not evidence of evolution

KEY ARTICLES
Refuting Evolution 2
A sequel to Refuting Evolution that refutes the latest arguments to support evolution (as presented by PBS and Scientific American).
by Jonathan Sarfati, Ph.D. with Michael Matthews
Argument: Common design points to common ancestry
Evolutionists say, 'Studies have found amazing similarities in DNA and biological systems: solid evidence that life on earth has a common ancestor.'
Common structures = common ancestry?
In most arguments for evolution, the debater assumes that common physical features, such as five fingers on apes and humans, point to a common ancestor in the distant past. Darwin mocked the idea (proposed by Richard Owen in the PBS dramatization of his encounter with Darwin) that common structures (homologies) were due to a common designer rather than a common ancestor. But the 'common Designer' explanation makes much more sense of the findings of modern geneticists, who have discovered just how different the genetic blueprint can be behind many apparent similarities in the anatomical structures that Darwin saw. Genes are inherited, not structures per se. So one would expect the similarities, if they were the result of evolutionary common ancestry, to be produced by a common genetic program (this may or may not be the case for common design). But in many cases, this is clearly not so. Consider the example of the five digits of both frogs and humans: the human embryo develops a ridge at the limb tip, then material between the digits dissolves; in frogs, the digits grow outward from buds (see diagram below). This argues strongly against the common-ancestry evolutionary explanation for the similarity.
Development of human and frog digits
Stylized diagram showing the difference in developmental patterns of frog and human digits.

Left: In humans, programmed cell death (apoptosis) divides the ridge into five regions that then develop into digits (fingers and toes). [From T.W. Sadler, editor, Langman's Medical Embryology, 7th ed. (Baltimore, MD: Williams and Wilkins, 1995), pp. 154–157.]
Right: In frogs, the digits grow outward from buds as cells divide. [From M.J. Tyler, Australian Frogs: A Natural History (Sydney, Australia: Reed New Holland, 1999), p. 80.]

The PBS program and other evolutionary propagandists claim that the DNA code is universal, and proof of a common ancestor. But this is false: there are exceptions, some known since the 1970s, not only in mitochondrial but also in nuclear DNA sequencing. An example is Paramecium, where a few of the 64 codons code for different amino acids. More examples are being found constantly.1 The Discovery Institute has pointed out this clear factual error in the PBS program.2 Also, some organisms code for one or two extra amino acids beyond the main 20 types.3 The reaction by the PBS spokeswoman, Eugenie Scott, showed how the evolutionary establishment is more concerned with promoting evolution than scientific accuracy. Instead of conceding that the PBS show was wrong, she attacked the messengers, citing statements calling their (correct!) claim 'so bizarre as to be almost beyond belief'. Then she even implicitly conceded the truth of the claim by citing this explanation: 'Those exceptions, however, are known to have derived from organisms that had the standard code.' To paraphrase: 'It was wrong to point out that there really are exceptions, even though it's true; and it was right for PBS to imply something that wasn't true, because we can explain why it's not always true.' But assuming the truth of Darwinism as evidence for their explanation is begging the question. There is no experimental evidence, since we lack the DNA code of these alleged ancestors. There is also the theoretical problem that if we change the code, then the wrong proteins would be made, and the organism would die; so once a code is settled on, we're stuck with it. The Discovery Institute also demonstrated the illogic of Scott's claim.4 Certainly most of the code is universal, but this is best explained by common design. Of all the millions of genetic codes possible, ours, or something almost like it, is optimal for protecting against errors.5 But the exceptions thwart evolutionary explanations.
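To see concretely why changing the code mid-stream is lethal, consider this minimal Python sketch (the three-codon 'mini-code' is hypothetical; a real codon table has 64 entries):

# If one codon's meaning changes, every existing gene that uses it
# now builds a different protein. Mini-codes here are hypothetical.
standard = {"GTT": "Val", "CAA": "Gln", "CGC": "Arg"}
variant = dict(standard, CAA="STOP")  # one reassigned codon

gene = ["GTT", "CAA", "CGC"]
print([standard[c] for c in gene])  # ['Val', 'Gln', 'Arg']
print([variant[c] for c in gene])   # ['Val', 'STOP', 'Arg']: truncated protein

Every message already written under the old code is corrupted at once, which is the point made above: once a code is settled on, we're stuck with it.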
DNA comparisons: subject to interpretation
Scientific American repeats the common argument that DNA comparisons help scientists to reconstruct the evolutionary development of organisms: 'Macroevolution studies how taxonomic groups above the level of species change. Its evidence draws frequently from the fossil record and DNA comparisons to reconstruct how various organisms may be related.' [SA 80]
DNA comparisons are just a subset of the homology argument, which makes just as much sense in a young age framework. A common Designer is another interpretation that makes sense of the same data. An architect commonly uses the same building material for different buildings, and a car maker commonly uses the same parts in different cars. So we shouldn't be surprised if a Designer for life used the same biochemistry and structures in many different creatures. Conversely, if all living organisms were totally different, this might look like there were many designers instead of one. Since DNA codes for structures and biochemical molecules, we should expect the most similar creatures to have the most similar DNA. Apes and humans are both mammals, with similar shapes, so both have similar DNA. We should expect humans to have more DNA similarities with another mammal like a pig than with a reptile like a rattlesnake. And this is so. Humans are very different from yeast but they have some biochemistry in common, so we should expect human DNA to differ more from yeast DNA than from ape DNA. So the general pattern of similarities need not be explained by common ancestry (evolution). Furthermore, there are some puzzling anomalies for an evolutionary explanation: similarities between organisms that evolutionists don't believe are closely related. For example, hemoglobin, the complex molecule that carries oxygen in blood and results in its red color, is found in vertebrates. But it is also found in some earthworms, starfish, crustaceans, mollusks, and even in some bacteria. An antigen receptor protein has the same unusual single-chain structure in camels and nurse sharks, but this cannot be explained by a common ancestor of sharks and camels.6 And there are many other examples of similarities that cannot be due to evolution.
Debunking the molecular clock
Scientific American repeats the common canard that DNA gives us a 'molecular clock' that tells us the history of DNA's evolution from the simplest life form to mankind: 'Nevertheless, evolutionists can cite further supportive evidence from molecular biology. All organisms share most of the same genes, but as evolution predicts, the structures of these genes and their products diverge among species, in keeping with their evolutionary relationships. Geneticists speak of the molecular clock that records the passage of time. These molecular data also show how various organisms are transitional within evolution.' [SA 83]
Actually, the molecular clock has many problems for the evolutionist. Not only are there the anomalies and common Designer arguments I mentioned above, but the data actually support a creation of distinct types within ordered groups, not continuous evolution, as non-creationist microbiologist Dr Michael Denton pointed out in Evolution: A Theory in Crisis. For example, when comparing the amino acid sequence of cytochrome C of a bacterium (a prokaryote) with such widely diverse eukaryotes as yeast, wheat, silkmoth, pigeon, and horse, all of these have practically the same percentage difference with the bacterium (64–69%). There is no intermediate cytochrome between prokaryotes and eukaryotes, and no hint that a 'higher' organism such as a horse has diverged more than a 'lower' organism such as the yeast. The same sort of pattern is observed when comparing cytochrome C of the invertebrate silkmoth with the vertebrates lamprey, carp, turtle, pigeon, and horse. All the vertebrates are equally divergent from the silkmoth (27–30%). Yet again, comparing globins of a lamprey (a 'primitive' cyclostome or jawless fish) with a carp, frog, chicken, kangaroo, and human, they are all about equidistant (73–81%). Cytochrome Cs compared between a carp and a bullfrog, turtle, chicken, rabbit, and horse yield a constant difference of 13–14%. There is no trace of any transitional series of cyclostome → fish → amphibian → reptile → mammal or bird. Another problem for evolutionists is how the molecular clock could have ticked so evenly in any given protein in so many different organisms (despite some anomalies discussed earlier which present even more problems). For this to work, there must be a constant mutation rate per unit time over most types of organism. But observations show that there is a constant mutation rate per generation, so it should be much faster for organisms with a fast generation time, such as bacteria, and much slower for elephants. In insects, generation times range from weeks in flies to many years in cicadas, and yet there is no evidence that flies are more diverged than cicadas. So evidence is against the theory that the observed patterns are due to mutations accumulating over time as life evolved.
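To make the generation-time problem concrete, here is a minimal Python sketch (the rate and generation times are illustrative round numbers, not measured values) of how a clock that ticks per generation predicts wildly different per-time divergence rates:

# If mutations accumulate per GENERATION, the per-YEAR rate should
# scale inversely with generation time. All numbers are illustrative.
MUTATION_RATE = 1e-8  # assumed rate per site per generation

generation_time_years = {
    "bacterium": 0.0001,  # roughly an hour per generation
    "fruit fly": 0.05,    # a few weeks
    "elephant": 25.0,     # decades
}

for name, gen_time in generation_time_years.items():
    per_myr = MUTATION_RATE / gen_time * 1e6  # per site per million years
    print(f"{name:9s}: {per_myr:.3g} substitutions/site/Myr")

The predicted per-time rates span about five orders of magnitude, yet the protein comparisons cited above show roughly equal divergence; that is the tension the article describes.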
Genetics and creation: demographic events
by C.W. Nelson
With the relatively recent mapping of the human genome,1 new questions can be raised concerning potential genetic evidence for creation events (specifically demographic events; that is, events affecting population) such as Creation and the global Flood. Evidence for a 'Mitochondrial Eve'2,3 suggests that the historical record of one man and one woman at the beginning might be accurate, and this idea has already been discussed in the context of creation.4 When actual measured mutation rates are used with the mitochondrial DNA data, the time frame for 'Mitochondrial Eve' reduces to fit with the biblical Eve.5,6 Single nucleotide polymorphisms and linkage disequilibrium also provide relevant data concerning past populations, and could serve as quite objective evidence for such demographic events as a global flood, for instance. I outline a number of research findings and ideas here.
Genetic variation and the population bottleneck
By comparing DNA from different humans around the world, it has been found that all humans share roughly 99.9% of their genetic material; they are almost completely identical, genetically.7 This means that there is very little polymorphism, or variation. Much evidence of this genetic continuity has been found. For example, Dorit et al.8 examined a 729-base-pair intron (DNA in the genome that is not 'read' to make proteins) from a worldwide sample of 38 human males and reported no sequence variation: 'This sort of invariance likely results from either a recent selective sweep, a recent origin for modern Homo sapiens, recurrent male population bottlenecks, or historically small effective male population sizes … any value of Q [lowest actual human sequence diversity] > 0.0011 predicts polymorphism in our sample [and yet none was found] … The critical value for this study thus falls below most, but not all, available estimates, thus suggesting that the lack of polymorphism at ZFY [a locus, or location] is not due to chance.'
After citing additional evidence of low variation on the Y chromosome, they note in their last paragraph that their results are not compatible with most multiregional models for the origin of modern humans. Knight et al.9 have had similar research results: 'We obtained over 55 kilobases of sequence from three autosomal loci encompassing Alu repeats for representatives of diverse human populations, as well as orthologous sequences for other hominoid species at one of these loci. Nucleotide diversity was exceedingly low. Most individuals and populations were identical. Only a single nucleotide difference distinguished presumed ancestral alleles from descendants. These results differ from those expected if alleles from divergent archaic populations were maintained through multiregional continuity. The observed virtual lack of sequence polymorphism is the signature of a recent single origin for modern humans, with general replacement of archaic populations.'
These results are quite consistent with a recent human origin and a global flood. Evolutionary models of origins did not predict such low human genetic diversity. Mutations should have produced much more diversity than 0.1% over millions of years. And yet this is exactly what we would expect to find if all humans were closely related and experienced a relatively recent event in which only a few survived. Research is needed to determine what variation should actually be present in the human genome: what would we expect within an evolutionary framework, and how does that compare with what we find? These results could have a great impact on biological evolution and population genetics, and could provide telling results about the age of humankind. It could also affect the so-called molecular clock. Another piece of evidence involves single nucleotide polymorphisms (hereafter SNPs), which are mutations common to the human genome (meaning that many humans share them), being present in the human population at a frequency of roughly 1%.7 These provide great insight into both medical research and population genetics. Many humans share large blocks of SNPs (called haplotypes), suggesting that all humans could have descended from a relatively recent demographic event.
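As a toy illustration of what 'low nucleotide diversity' means quantitatively, the following Python sketch (invented sequences, not real data) averages pairwise differences across a sample:

from itertools import combinations

# Fraction of differing sites, averaged over all pairs of sequences.
sample = [
    "ATGGCTACGT",
    "ATGGCTACGT",
    "ATGGCTACGA",  # one variant site
    "ATGGCTACGT",
]

def diff_fraction(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

pairs = list(combinations(sample, 2))
pi = sum(diff_fraction(a, b) for a, b in pairs) / len(pairs)
print(f"average pairwise diversity: {pi:.4f}")  # 0.05 for this toy sample

Real human samples give figures around 0.001 (0.1%): about one differing site per thousand, which is what 'almost completely identical' means here.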
Linkage disequilibrium (or LD) supports the same conclusion. Genes are located on chromosomes in cells, and these genes may be either far from or close to each other. All of the genes that are located on one chromosome are said to be linked. When cells divide through meiosis, crossing over (or genetic recombination) often occurs. This involves two chromosomes aligning and swapping segments of DNA, resulting in genes getting shuffled around. The closer two genes are together, the more likely they will be inherited together; because they are close, it is unlikely that they will be separated during crossing over. When this holds true, genes are said to be in linkage disequilibrium: a state where they are not thoroughly mixed, but tend to be inherited together.10 Likewise, if genes are thoroughly mixed, they are in equilibrium. LD has provided much evidence for a population bottleneck, because humans contain long-range LD, or LD that extends quite far in the genome, meaning that many genes tend to be inherited together. This type of evidence has been found in Northern Europe, for example. In fact, data gathered by Reich et al.11 suggests that in general, blocks of LD are large in humans, because many genes are closely associated. The explanation of this can have significant implications:
[Figure: crossing over (or genetic recombination) during meiosis results in shuffled genes.]
'Why does LD extend so far? LD around an allele [or variant form of a gene] arises because of selection or population history: a small population size, genetic drift or population mixture; and decays owing to recombination [crossing over], which breaks down ancestral haplotypes [blocks of SNPs]. The extent of LD decreases in proportion to the number of generations since the LD-generating event. The simplest explanation for the observed long-range LD [such as what we find in humans] is that the population under study experienced an extreme founder effect or bottleneck: a period when the population was so small that a few ancestral haplotypes gave rise to most of the haplotypes that exist today.'11
This study concluded with the possibility that 50 individuals may have founded the entire population of Europe. This evidence is also quite consistent with a historical global flood. Research is needed on the implications of this data for the flood. Certainly, humankind has undergone a relatively recent (tens of thousands of years at most, within an evolutionary time frame) population bottleneck. However, it must be further investigated as to the proportionality of evolutionary dates to the creation model, and as to how the molecular clock can be adequately explained in such a context. Data aiding this understanding has already been published.5 We should also seek to understand genetic evidence in the context of the Tower of Babel event. Evidence exists that, after the bottleneck, the [human] 'population rebounded in a series of separate, rapid expansions on different continents.'12
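For readers unfamiliar with how linkage disequilibrium is quantified, here is a minimal Python sketch of the classic D statistic. The formula D = p(AB) − p(A)·p(B) is standard population genetics; the haplotype counts are hypothetical:

# Two loci with alleles A/a and B/b; counts of observed haplotypes.
counts = {"AB": 60, "Ab": 5, "aB": 5, "ab": 30}  # hypothetical data
n = sum(counts.values())

p_ab = counts["AB"] / n                  # haplotype frequency of AB
p_a = (counts["AB"] + counts["Ab"]) / n  # allele A frequency
p_b = (counts["AB"] + counts["aB"]) / n  # allele B frequency

D = p_ab - p_a * p_b  # zero would mean 'thoroughly mixed' (equilibrium)

# Normalized D' lies in [-1, 1]
if D >= 0:
    d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
else:
    d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
print(f"D = {D:.4f}, D' = {D / d_max:.3f}")  # D = 0.1775, D' = 0.780

Large D' across long stretches of chromosome ('long-range LD') is the signal that, per Reich et al., points to a bottleneck or founder effect.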

Evolutionists abandon the idea of 99% DNA similarity between humans and chimps
by Daniel Anderson
In a recent Science article, several evolutionary scientists openly admitted that the claim of 99% DNA similarity between humans and chimpanzees is a myth.1 Since 1975, this misleading statistic has been touted (e.g., see box) as clear-cut evidence that humans and chimps are closely related on the evolutionary tree of life.2 However, more and more genetic research has revealed that the percentage of DNA similarity has been vastly overstated.
Revealing quotes
Author Jon Cohen wrote, 'But truth be told, the 1% difference wasn't the whole story.' Cohen also wrote about recent studies raising the question of whether 'the 1% truism should be retired'. UCSD zoologist Pascal Gagneux said, 'For many, many years, the 1% difference served us well because it was underappreciated how similar we were. Now it's totally clear that it's more a hindrance for understanding than a help.' Svante Pääbo, after admitting he didn't think there was any way to actually calculate a precise percentage difference, said, 'In the end, it's a political and social and cultural thing about how we see our differences.' In other words, as creationists have long stated, scientific interpretations are often driven by philosophical presuppositions.
More recent studies highlight greater genetic differences
Last year, a study of gene copy numbers revealed a 6.4% difference.3 In 2005, scientists discovered that the chimpanzee genome was 12% larger than the human genome. In 2003, scientists calculated a 13.3% difference in sections of our immune systems.4 One study has even revealed a 17.4% difference in gene expression in the cerebral cortex.5 Creation geneticist Dr Rob Carter recently stated on the USA-nationally syndicated Janet Parshall Show that our genomes are at least 8–12% different.
Another icon falls by the wayside
Just this year, we have witnessed the downfall of two icons of evolution. Not only has the idea of 99% genetic similarity between humans and chimps been abandoned, but also the myth of so-called 'junk DNA' has been debunked; see Astonishing DNA complexity uncovered and Astonishing DNA complexity update. As has so often been observed since Darwin published his Origin of Species, evolutionary icons eventually collapse under the weight of the empirical data.
DNA: marvellous messages or mostly mess?
by Jonathan Sarfati
2003 is the 50th anniversary of the discovery of the double helix structure of DNA. Its discoverers, James Watson, Francis Crick and Maurice Wilkins, won the Nobel Prize for Physiology and Medicine in 1962 for their discovery. [2011 update: this online version has been updated with animations and links to further amazing discoveries about the multiple codes in DNA.] The amazing design and complexity of living things provides strong evidence for a Designer.
Information technology
One aspect of this sustenance is that the recipe for all these structures is programmed on the famous double-helix molecule, DNA.1 This recipe has an enormous information content, which is transmitted from one generation to the next, so that living things reproduce 'after their kinds'. Leading atheistic evolutionist Richard Dawkins admits: '[T]here is enough information capacity in a single human cell to store the Encyclopaedia Britannica, all 30 volumes of it, three or four times over.'2 Just as the Britannica had intelligent writers to produce its information, so it is reasonable and even scientific to believe that the information in the living world likewise had an original compositor/sender.3 There is no known non-intelligent cause that has ever been observed to generate even a small portion of the literally encyclopedic information required for life.4
The genetic code (see 'The programs of life' below) is not an outcome of raw chemistry, but of elaborate decoding machinery in the ribosome. Remarkably, this decoding machinery is itself encoded in the DNA, and the noted philosopher of science Sir Karl Popper pointed out: 'Thus the code can not be translated except by using certain products of its translation. This constitutes a baffling circle; a really vicious circle, it seems, for any attempt to form a model or theory of the genesis of the genetic code.'5,6 So, such a system must be fully in place before it could work at all, a property called irreducible complexity. This means that it is impossible to be built by natural selection working on small changes.

The unity of life
Many evolutionists claim that the DNA code is universal, and that this is proof of a common ancestor. But this is false: there are exceptions, some known since the 1970s. An example is Paramecium, where a few of the 64 (4³ = 4×4×4) possible codons code for different amino acids. More examples are being found constantly.1 Also, some organisms code for one or two extra amino acids beyond the main 20 types.2 But if one organism evolved into another with a different code, all the messages already encoded would be scrambled, just as written messages would be jumbled if typewriter keys were switched. This is a huge problem for the evolution of one code into another. Also, in our cells we have 'power plants' called mitochondria, with their own genes. It turns out that they have a slightly different genetic code, too. Certainly most of the code is universal, but this is best explained by common design: one designer. Of all the millions of genetic codes possible, ours, or something almost like it, is optimal for protecting against errors.3 But the created exceptions thwart attempts to explain the organisms by common-ancestry evolution.

DNA is by far the most compact information storage system in the universe. Even the simplest known living organism has 482 protein-coding genes. This is a total of 580,000 'letters';7 humans have three billion in every nucleus. (See 'The programs of life' below for an explanation of the DNA 'letters'.) The amount of information that could be stored in a pinhead's volume of DNA is equivalent to a pile of paperback books 500 times as high as the distance from Earth to the moon, each with a different, yet specific content.8 Putting it another way, while we think that our new 40 gigabyte hard drives are advanced technology, a pinhead of DNA could hold 100 million times more information. The letters of DNA have another vital property due to their structure, which allows information to be transmitted: A pairs only with T, and C only with G, due to the chemical structures of the bases; the pair is like a rung or step on a spiral staircase. This means that the two strands of the double helix can be separated, and new strands can be formed that copy the information exactly. The new strand carries the same information as the old one, but instead of being like a photocopy, it is in a sense like a photographic negative. The copying is far more
precise than pure chemistry could manage: only about 1 mistake in 10 billion copyings, because there is editing (proofreading and error-checking) machinery, again encoded in the DNA. But how would the information for editing machinery be transmitted accurately before the machinery was in place? Lest it be argued that the accuracy could be achieved stepwise through selection, note that a high degree of accuracy is needed to prevent 'error catastrophe': the accumulation of 'noise' in the form of junk proteins. Again there is a vicious circle (more irreducible complexity). Also, even the choice of the letters A, T, G and C now seems to be based on minimizing error. Evolutionists usually suppose that these letters happened to be the ones in the alleged primordial soup, but research shows that C (cytosine) is extremely unlikely to have been present in any such soup.9 Rather, Dónall Mac Dónaill of Trinity College Dublin suggests that the letter choice is like the advanced error-checking systems that are incorporated into ISBNs on books, credit card numbers, bank accounts and airline tickets. Any alternatives would suffer 'error catastrophe'.10
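Mac Dónaill's ISBN comparison can be made concrete. Here is a minimal Python sketch of the public ISBN-10 check-digit scheme (the sample number is arbitrary):

# ISBN-10: weights 10..2 on the first nine digits; the check digit
# makes the weighted sum divisible by 11, so any single-digit error
# is detectable.
def isbn10_check_digit(first9: str) -> str:
    total = sum(int(d) * w for d, w in zip(first9, range(10, 1, -1)))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def isbn10_valid(isbn: str) -> bool:
    return isbn10_check_digit(isbn[:9]) == isbn[9]

stem = "030640615"
isbn = stem + isbn10_check_digit(stem)
print(isbn, isbn10_valid(isbn))            # 0306406152 True

corrupted = "1" + isbn[1:]                 # one wrong digit
print(corrupted, isbn10_valid(corrupted))  # detected: False

The analogy in the text is that the chemical alphabet itself behaves like such a scheme: the letter set is chosen so that copying errors tend to be detectable rather than silently catastrophic.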
Introns
DNA is not read directly; first the cell makes a 'negative' copy in a very similar molecule called RNA,11 a process called transcription. But in all organisms other than most bacteria, there is more to transcription. This RNA, reflecting the DNA, contains regions called exons that code for proteins, and non-coding regions called introns. So the introns are removed and the exons are spliced together to form the mRNA (messenger RNA) that is finally decoded to form the protein. This also requires elaborate machinery called a spliceosome. This is assembled on the intron, chops it out at the right place and joins the exons together (see also this animation of the spliceosome machinery). This must be in the right direction and place, because, as shown above, it makes a huge difference if the exon is joined even one letter off. Thus, partly formed splicing machinery would be harmful, so natural selection would work against it. Richard Roberts and Phillip Sharp won the 1993 Nobel Prize in Physiology and Medicine for discovering introns in 1977. It turns out that 97–98% of the genome may be introns and other non-coding sequences, but this raises the question of why introns exist at all. [Update, 2011: now we know there is a splicing code; see related articles below.]
Junk DNA?
Dawkins and others have claimed that this non-coding DNA is 'junk', or 'selfish' DNA. Supposedly, no intelligent designer would use such an inefficient system, therefore it must have evolved, they argue. This parallels the 19th-century claim that about a hundred 'vestigial' organs exist in the human body,12 i.e. allegedly useless remnants of our evolutionary history.13 But more enlightened evolutionists such as Scadding pointed out that the argument is logically invalid, because it is impossible in principle to prove that an organ has no function; rather, it could have a function we don't know about. Scadding also reminds us that as our knowledge has increased, the list of vestigial structures has decreased.14,15,16 While Dawkins has often claimed that belief in a designer is a cop-out, it's claims of 'vestigial' or 'junk' status that are actually cop-outs. Such claims hindered research into the vital function of allegedly vestigial organs, and they do the same with non-coding DNA. Actually, even if evolution were true, the notion that the introns are useless is absurd. Why would more complex organisms evolve such elaborate machinery to splice them? Rather, natural selection would favour organisms that did not have to waste resources processing a genome filled with 98% junk. And there have been many uses discovered for so-called junk DNA, such as the overall genome structure and regulation of genes. Some creationists believe that this DNA has a role in rapid post-Flood diversification of the kinds.17
Some non-coding RNAs called microRNAs (miRNAs) seem to regulate the production of proteins coded in other genes, and seem to be almost identical in humans, mice and zebrafish. The recent sequencing of the mouse genome18 surprised researchers and led to headlines such as 'Junk DNA Contains Essential Information'.19 They found that 5% of the genome was basically identical but only 2% of that was actual genes. So they reasoned that the other 3% must also be identical for a reason. The researchers believe the 3% probably has a crucial role in determining the behaviour of the actual genes, e.g. the order in which they are switched on.20 Also, damage to introns can be disastrous: in one example, deleting four letters in the centre of an intron prevented the spliceosome from binding to it, resulting in the intron being included.21 Mutations in introns also interfere with imprinting, the process by which only certain genes from the mother or father are expressed, not both. Expression of both genes results in a variety of diseases and cancers.22 Another intriguing discovery is that DNA can conduct electrical signals as far as 60 letters, enough to code for 20 amino acids. This is a typical length for molecular switches that turn on adjoining genes. Theoretically, the electrical signals could travel indefinitely. However, single or multiple pairings between A and T stop the signals; that is, they are insulators or 'electronic hinges' in a circuit. So, although these particular regions don't code for proteins, they may protect essential genes from electrical damage from free radicals attacking a distant part of the DNA.23 So times have changed. Alexander Hüttenhofer of the University of Münster, Germany, says:
'Five or six years ago, people said we were wasting our time. Today, no one regards people studying non-coding RNA as time-wasters.'24
Advanced operating system?
Dr John Mattick of the University of Queensland in Brisbane, Australia, has published a number of papers arguing that the non-coding DNA regions, or rather their non-coding RNA 'negatives', are important components of a complicated genetic network.25,26 These interact with each other, the DNA, mRNA and the proteins. Mattick proposes that the introns function as nodes, linking points in the network. The introns provide many extra connections, enabling what in computer terminology would be called multi-tasking and parallel processing. In organisms, this network could control the order in which genes are switched on and off. This means that a tremendous variety of multicellular life could be produced by 'rewiring' the network. In contrast, early computers were like simple organisms, very cleverly designed27 [sic], but programmed for one task at a time. The older computers were very inflexible, requiring a complete redesign of the network to change anything. Likewise, single-celled organisms such as bacteria can also afford to be inflexible, because they don't have to develop as many-celled creatures do.

More than just a super hard drive
Actually, DNA is far more complicated than simply coding for proteins, as we are discovering all the time.1 For example, because the DNA letters are read in groups of three, it makes a huge difference which letter we start from. E.g. the sequence GTTCAACGCTGAA can be read from the first letter, GTT CAA CGC TGA A; but a totally different protein will result from starting from the second letter, TTC AAC GCT GAA. This means that DNA can be an even more compact information storage system. This partly explains the surprising finding of the Human Genome Project that there are only about 35,000 genes, when humans can manufacture over 100,000 proteins.
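The box's reading-frame point can be shown in a few lines of Python, using its own example sequence (the codon meanings below are from the standard genetic code):

# Translating the same DNA in two reading frames gives different codons
# and therefore completely different protein sequences.
CODON_TABLE = {  # small excerpt of the standard genetic code
    "GTT": "Val", "CAA": "Gln", "CGC": "Arg", "TGA": "STOP",
    "TTC": "Phe", "AAC": "Asn", "GCT": "Ala", "GAA": "Glu",
}

def codons(seq: str, frame: int):
    seq = seq[frame:]
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

seq = "GTTCAACGCTGAA"
for frame in (0, 1):
    cs = codons(seq, frame)
    print(f"frame {frame}: {cs} -> {[CODON_TABLE[c] for c in cs]}")
# frame 0: GTT CAA CGC TGA -> Val, Gln, Arg, STOP
# frame 1: TTC AAC GCT GAA -> Phe, Asn, Ala, Glu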
Evolutionary interpretation
Mattick suggests that this new system somehow evolved (despite the irreducible complexity) and in turn enabled the evolution of many complex living things from simple organisms. However, the same evidence is better interpreted from a young age framework. This system can indeed enable multicellular organisms to develop from a simple cell, but this is the fertilized egg. This makes more sense; the fertilized egg has all the programming in place for all the information for a complex life-form to develop from an embryo. It is also an example of good design economy pointing to a single designer as opposed to many. In contrast, the first simple cell to allegedly evolve the complex splicing machinery would have no introns needing splicing. But Mattick may be partly right about diversification of life. Creationists also believe that life diversified after the Flood. However, this diversification involved no new information. Some creationists have proposed that certain parts of currently non-coding DNA could have enabled faster diversification,28 and Mattick's theory could provide still another mechanism.
The circle of life
All living things have encyclopedic information content, a recipe for all their complex machinery and structures. This is stored and transmitted to the next generation as a message on DNA 'letters', but the message is in the arrangement, not the letters themselves. The message requires decoding and transmission machinery, which itself is part of the stored message. The choices of the code and even the letters are optimal. Therefore, the genetic coding system is an example of irreducible complexity.

Hindering science
A severe critic of Mattick's theory, Jean-Michel Claverie of CNRS, the national research institute in Marseilles, France, said something very revealing:
'I don't think much of this work. In general, all these global ideas don't travel very far because they fail to take into account the most basic principle of biology: things arose by the additive evolution of tiny subsystems, not by global design. It is perfectly possible that one intron in one given gene might have evolved, by chance, some regulatory property. It is utterly improbable that all genes might have acquired introns for the future property of regulating expression.'
Two points to note:
This agrees that if the intron system really is an advanced operating system, it really would be irreducibly complex, because evolution could not build it stepwise.
It illustrates the role of materialistic assumptions behind evolution. Usually, atheists such as Dawkins use evolution as proof for their faith; in reality, evolution is deduced from their assumption of materialism! E.g. Richard Lewontin wrote, 'we have a prior commitment, a commitment to materialism. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.'29 Scott Todd said, 'Even if all the data point to an intelligent designer, such an hypothesis is excluded from science because it is not naturalistic.'30
Similarly, while many use 'junk' DNA as proof of evolution, Claverie is using the assumption of evolution as proof of its 'junkiness'! This is again a parallel with vestigial organs. In reality, evolution was used as a proof of their vestigiality, and hindered research into their function. Claverie's attitude could likewise hinder research into the networking capacity of non-coding DNA.
Summary
'Junk' DNA (or, rather, DNA that doesn't directly code for proteins) is not evidence for evolution. Rather, its alleged 'junkiness' is a deduction from the false assumption of evolution.
Just because no function is known, it doesn't mean there is no function.
Many uses have been found for this non-coding DNA.
There is good evidence that it has an essential role as part of an elaborate genetic network. This could have a crucial role in the development of many-celled creatures from a single fertilized egg, and also in the post-Flood diversification (e.g. a canine kind giving rise to dingoes, wolves, coyotes etc.).

The programs of life
Information is a measure of the complexity of the arrangement of parts of a storage medium, and doesn't depend on what parts are arranged. For instance, the printed page stores information via the 26 letters of the alphabet, which are arrangements of ink molecules on paper. But the information is not contained in the letters themselves. Even a translation into another language, even those with a different alphabet, need not change the information, but simply the way it is presented. However, a computer hard drive stores information in a totally different way: an array of magnetic 'on or off' patterns in a ferrimagnetic disk, and again the information is in the patterns, the arrangement, not the magnetic substance. Totally different media can carry exactly the same information. An example is this article you're reading: the information is exactly the same as that on my computer's hard drive, but my hard drive looks vastly different from this page. In DNA, the information is stored as sequences of four types of DNA bases, A, C, G and T. In one sense, these could be called chemical 'letters' because they store information in an analogous way to printed letters.1 There are huge problems for evolutionists explaining how the letters alone could come from a primordial soup.2 But even if this was solved, it would be as meaningless as getting a bowl of alphabet soup. The letters must then link together, in the face of chemistry trying to break them apart.3 Most importantly, the letters must be arranged correctly to have any meaning for life. A group (codon) of 3 DNA letters codes for one protein 'letter' called an amino acid, and the conversion is called translation. Since even one mistake in a protein can be catastrophic, it's important to decode correctly. Think again about a written language: it is only useful if the reader is familiar with the language. For example, a reader must know that the letter sequence c-a-t codes for a furry pet with retractable claws. But consider the sequence g-i-f-t: in English, it means a present; but in German, it means poison. Understandably, during the post-September-11 anthrax scare, some German postal workers were very reluctant to handle packages marked 'Gift'.
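As a back-of-envelope supplement (my own arithmetic, not from the article): with four equiprobable letters, each DNA base can carry at most two bits, which puts a simple upper bound on raw storage capacity:

import math

letters = 4                              # A, C, G, T
bits_per_letter = math.log2(letters)     # 2.0 bits maximum per base
codon_count = letters ** 3               # 64 possible 3-letter codons
print(bits_per_letter, codon_count)

genome_letters = 3e9                     # ~3 billion bases per nucleus
capacity_mb = genome_letters * bits_per_letter / 8 / 1e6
print(f"~{capacity_mb:.0f} MB raw capacity")  # ~750 MB

This counts only the primary sequence; overlapping reading frames and the other codes discussed in the article would push the effective content higher.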

DO GENE DUPLICATION AND POLYPLOIDY PROVIDE A MECHANISM FOR EVOLUTION?

Do new functions arise by gene duplication?
by Yingguang Liu and Dan Moran
Evolution requires a simple form of life to have morphed into increasingly complex organisms. Since the basis for biological complexity is genetic complexity, some biologists propose that the complicated genomes in modern organisms arose from one or a few genes in a common ancestor through duplication, with subsequent neofunctionalization through mutation and natural selection. Here we examine the known mechanisms of gene duplication in the light of genomic complexity and post-duplication events, and argue that: (1) gene duplications are aberrations of cell division processes and are more likely to cause malformation or disease than to confer selective advantage; (2) duplicated genes are usually silenced and subjected to degenerative mutations; (3) regulation of supposedly duplicated gene clusters and gene families is irreducibly complex, and demands simultaneous development of fully functional multiple genes and switching networks, contrary to Darwinian gradualism.
Figure 1. Equal (a) and unequal (b) crossing-over. Black and white colours represent homologous chromosomes. Only one sister chromatid of each chromosome is shown. After unequal crossing-over, one chromosome gains an extra repetition of ABC genes while the other chromosome loses DNA and becomes shorter.
'Natural selection merely modified, while redundancy created.'1 'It might be said that all of the new genes arose from redundant copies of the pre-existed [sic] genes.'2 Regardless of how the first gene came into being, it is taught in textbooks that gene duplication is the major force driving evolution.3,4 Gene duplications do indeed add extra material to the genome, for example, by aberrations in the division of chromosomes during mitosis or meiosis, or by erroneous DNA replication. Evolutionists argue that with subsequent mutation and natural selection, one or all copies of a duplicated gene eventually encode new proteins (a process called neofunctionalization). Over millions of years, small simple genomes are thus believed to have evolved into large, complex ones, giving rise to the multiplicity of life forms both living and extinct. One frequently cited evidence for gene duplication comes from gene sequence analyses. Sequence comparisons have revealed that some genes in modern organisms are more similar to each other than to other genes, and so they are classified into families. Gene families are especially abundant in large genomes. Family members within a genome, the paralogs, are believed to be products of gene duplications that have occurred in the past. Furthermore, functional domains of many proteins encoded by apparently unrelated genes also bear structural and functional similarities. All of these are used as evidence that the thousands of genes discovered so far (and those yet to be discovered) have evolved from a few (maybe one) ancestral gene(s).5 In this article we examine the major mechanisms proposed for gene duplication and evaluate their likely contribution to the history of life in the light of recent evidence on post-duplication events and gene regulation mechanisms.
Mechanisms of gene duplication
Polyploidy
Polyploidy refers to an increase in the number of sets of chromosomes per cell. Normally, most eukaryotic cells are diploid (with two sets of chromosomes, 2n, one from the male parent and one from the female parent) while the sex cells are haploid (with one set of chromosomes, 1n). A cell with 3n or more is polyploid. Polyploidy may arise naturally when a cell fails to divide after DNA replication. If the cell with the doubled genome is involved in the generation of sex cells (meiosis), polyploid organisms may be subsequently produced upon fertilization. Alternatively, polyploidy can be artificially induced by treating cells with chemicals such as colchicine. Since all genes are duplicated simultaneously in a polyploid cell, the stoichiometric relationships between genetic products are preserved. For this reason, polyploidy is the least detrimental and therefore the best-surviving duplication mutation.6 Polyploidy is seen in ferns, flowering plants and some lower animals.7,8 It is usually associated with hermaphroditism, parthenogenesis (mother producing young asexually), or species without disparate sex chromosomes.8 In most dioecious (possessing either male or female organs) animals and humans, however, polyploid embryos typically suffer generalized malformation and die during development.8 It is not only sex determination per se (as was proposed by Muller9), but more importantly the delicate balancing between homologous genes, that is disrupted in polyploid individuals of higher animals. For instance, parental imprinting (differences in the expression of maternal and paternal genes) by DNA methylation may be disrupted as the cell endeavours to silence extra chromosomes by extensive methylation (see below under 'After duplication'). Autopolyploidy (all chromosome sets are from the same species) can result in useful variation of quantitative traits such as biomass, organ size, flowering time, drought tolerance, etc. But crucially, polyploid organisms have an intrinsic mechanism to maintain genetic stability by silencing extra copies of genes (inhibiting their expression).10 Silencing of homeologs (genes duplicated by polyploidy) is nonrandom, genetically programmed, and organ-specific. It is a universal phenomenon seen in both plants and animals.7,11 Silencing of inferior alleles may account for the advantageous phenotypes of some polyploid species. Alternatively, superior alleles may take dominance even though inferior ones are expressed simultaneously. In other words, there are no new genetic products, but old genes with altered expression levels under the control of pre-existing programs.
Figure 2. (a) Xenopus globin gene clusters.50,51 Grey: tadpole; Dark: adult. (b) Human globin gene clusters.53 Light grey: embryonic; Dark grey: fetal; Dark: fetal/adult (α) or adult only (δ and β); White: pseudogenes. Intergenic spacer sequences are omitted.
Allopolyploidy results when the sets of chromosomes are derived from two or more distinct, though related, species. Unlike allodiploid hybrids such as the mule, allopolyploid organisms may be fertile and give rise to new species. However, the hybrid species display merely a new combination of pre-existing parental traits encoded by pre-existing genes. For example, some strains of Triticale, a synthetic allopolyploid from wheat and rye, combine the high yield of wheat and the adaptability of rye. Another artificial hybrid species, between the tall fescue grass (Festuca arundinacea) and the short Italian ryegrass (Lolium multiflorum), shows quantitative traits (e.g. height) that are intermediate between the parental species.12 The historical Raphanobrassica, a hybrid between cabbage and radish, has the roots of a cabbage and leaves resembling those of a radish. In allopolyploids there may be interactions between genes from different parents.13 Disharmonious interactions between homeologous genes are thought to be the reason for most cases of hybrid sterility in allodiploid animals.14 In plants, neoallopolyploid genomes are often unstable, displaying sterility, lethality, and phenotypic instability.15
Trisomy
In contrast to polyploidy, aneuploid cells (having a chromosome number that is not a multiple of the haploid) with one extra chromosome (trisomy) have a severely imbalanced genome. Consequently, the organism will manifest defective phenotypes. Aneuploidy is the result of failure to segregate a pair of homologous chromosomes during meiosis I, or failure to segregate sister chromatids during meiosis II (meiotic nondisjunction). When a sex cell with one extra chromosome unites with a normal haploid sex cell, the zygote will be trisomic for that particular chromosome. Much knowledge about trisomy has been accumulated clinically in humans. Autosomal trisomies have more dramatic effects than sex chromosome trisomies. From the familiar Down syndrome (trisomy 21) to the less common Edwards syndrome (trisomy 18) and Patau syndrome (trisomy 13), autosomal trisomies always hinder the development of the central nervous system and manifest mental retardation in live births. Developmental defects of other organs are also common. Trisomies involving other autosomes are rare, and are seen only in spontaneous abortions and in vitro fertilizations.16 Triplo-X females (karyotype XXX) have only mild symptoms (tallness and menstrual irregularities). While men with Klinefelter syndrome (karyotype XXY) show symptoms varying from infertility to severe structural deformation, XYY males are generally normal except for tallness and acne.17 The reason that sex chromosome trisomies show less severe symptoms than autosomal trisomies may lie in the fact that the X chromosome has a well-established intrinsic inactivation mechanism to silence one homolog in the normal woman, while the Y chromosome is small with few genes.
Unequal crossing-over
Crossing-over refers to the exchange of fragments between homologous chromosomes during the initial stages of meiosis. Normally the exchange is equal, as the genes line up based on sequence homology (synapsis). However, because of the numerous sequence repetitions in eukaryotic chromosomes, the lining up may be inaccurate, causing deletion in one chromosome and duplication in the other (figure 1). This mechanism is believed to be the major cause of deletions of red or green pigment genes in the X chromosome resulting in colour blindness, and of deletions of globin genes causing various forms of thalassemias.18,19 Repeated duplications have been associated with cancer.20 Duplication of a large segment of chromosome 15 in human beings can cause mental retardation and other symptoms, while smaller duplications are asymptomatic or cause minor disorders such as panic attacks. Presumably, small segmental duplications are successfully managed by the cell's silencing programs. However, segmental duplications within protein-coding sequences may interrupt gene structure, causing frame-shift mutations.21
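A toy Python model of figure 1's outcome may help: a misaligned breakpoint during crossing-over duplicates genes on one product and deletes them from the other (gene names and offsets are invented):

# Crossing-over between two chromosomes carrying the same gene array.
chrom1 = ["A", "B", "C", "D", "E"]
chrom2 = ["A", "B", "C", "D", "E"]

def crossover(c1, c2, break1, break2):
    # Equal crossing-over: break1 == break2. Unequal: misaligned breaks.
    return c1[:break1] + c2[break2:], c2[:break2] + c1[break1:]

print(crossover(chrom1, chrom2, 3, 3))  # equal: both products keep A..E
print(crossover(chrom1, chrom2, 3, 1))  # unequal:
# one product gains a repeat: ['A', 'B', 'C', 'B', 'C', 'D', 'E']
# the other loses genes:      ['A', 'D', 'E']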
Figure 3. Viral genes are expressed sequentially in a highly regulated hierarchy. Each set of viral genes encodes transcription factors that turn on the next set of genes by interacting with their corresponding promoter/enhancer sequences.
Unequal crossing-over may have been the major mechanism in altering the number of genes in repetitive clusters. Gene clusters such as the human green pigment genes and the human immunoglobulin heavy chain genes, which vary in numbers within the population, certainly manifest recent duplications.22,23 Clusters of identical rRNA and histone genes also vary in number within the species, presumably via unequal crossing-over.24–28 Recently, it has been found that copy-number polymorphisms of this kind are more abundant than previously realized.29,30 However, it is unlikely that gene clusters originated through unequal crossing-over, because: (1) unequal crossing-over depends on pre-existing clustering. Although it may change the number of repetitions within clusters, unequal crossing-over is not the ultimate cause of their being; (2) multiplicity of identical genes in the clusters is often required for the cell to function properly. For instance, to meet the need of the cell to produce large numbers of ribosomes in a short time, all cells contain multiple copies of rRNA genes in tandem arrays. In the large oocyte (egg) of amphibians, the rRNA genes have to be further amplified approximately 2000-fold, resulting in about a million copies per cell, to maintain the number of ribosomes at about 10¹².31 Likewise, multiple histone genes are required for the cell to synthesize histones rapidly during the S phase of the cell cycle. But diversification and neofunctionalisation of these identical copies is actually prevented, not promoted, by as yet unknown mechanisms.32
Transposition
Transposons are mobile genetic elements that can change their positions within the genome (the process is known as transposition). While some transpositions occur by a 'cut and paste' mechanism, others go by a 'copy and paste' mechanism, resulting in duplications. Unlike unequal crossing-over, which produces tandem gene arrays, transpositions cause duplications dispersed randomly throughout the genome. Transposons that duplicate via an RNA intermediate, known as retrotransposons, are abundant in eukaryotic cells. Despite the abundance of transposons and retrotransposons in complex genomes (e.g. 45% of the human genome), their function remains elusive. Traditionally, they have been considered 'selfish DNA', because random insertion of transposons disrupts other genes, causing deleterious mutations. A classical example is the Drosophila retrotransposon, the P element, which induces chromosomal breaks and causes sterility.33 Consequently, it seems to be beneficial to the organism for transposition events to be suppressed. Indeed, transposition is rare in the human cell. (Therefore, the vast majority of the human transposable elements must have been present in the genome since ancient times.) However, in mice, Drosophila (fruit fly), and Arabidopsis (plant), transposition is still responsible for many mutations.34 Recently, Peaston and associates discovered that retrotransposons are actively transcribed in mouse oocytes and early embryos, providing alternative promoters and first exons to a subset of host genes.35 This report suggests that transposons function as regulatory elements during early development. From this point of view, transposition-induced mutation may be a side effect, instead of the intended function, of these repetitive genetic elements.
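The 'cut and paste' versus 'copy and paste' distinction can be sketched on a gene list (positions and names invented):

# Only copy-and-paste transposition increases the element count,
# producing duplications dispersed through the genome.
genome = ["g1", "TE", "g2", "g3"]  # 'TE' = transposable element

def cut_and_paste(g, src, dst):
    g = g.copy()
    te = g.pop(src)
    g.insert(dst, te)
    return g  # same length: the element merely moved

def copy_and_paste(g, src, dst):
    g = g.copy()
    g.insert(dst, g[src])
    return g  # one element longer: a duplication

print(cut_and_paste(genome, 1, 3))   # ['g1', 'g2', 'g3', 'TE']
print(copy_and_paste(genome, 1, 3))  # ['g1', 'TE', 'g2', 'TE', 'g3']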
After duplication

Figure 4. The major immediate-early gene (mIE) of the human cytomegalovirus is regulated by a network of viral and cellular factors. IE1 and IE2 are products of the gene through alternative splicing. IE1 acts as a positive feedback signal to accelerate initial transcription, while IE2 provides a negative feedback mechanism by binding to a cis-repression signal (crs) later in infection. Viral proteins pp71 and ppUL35 interact with each other. pp71 also binds to a host cell protein, hDaxx. IE1, IE2, the enhancer, pp71 and ppUL35 are all critical for effective viral replication.

In order for evolution to harness gene duplications to produce complex genomes, it was originally proposed that one or more copies of the duplicated gene will acquire advantageous mutations (neofunctionalization).5,36,37 This was thought to be the only mechanism to generate new genes from existing ones.38 However, biologists are now becoming more and more convinced, theoretically and empirically, that most duplicated gene copies undergo degenerative, rather than constructive, mutations, ending up in nonfunctionalization.

As stated above, the first event awaiting a duplicated gene is silencing. The best studied mechanism of silencing is through methylation of cytosine bases in CG islands around promoters.39 Subsequently, methylated cytosines tend to be spontaneously deaminated and substituted with thymine bases.39,40 The phenomenon is known as CG depletion. Duplicated genes are especially prone to CG depletion.39–41 Without selective constraint, silenced duplicates may also undergo other mutations. Indeed, extensive genomic change can be detected within a few generations after synthetic polyploidy.42 Using silent mutations (mutations that do not affect translated protein structures) to reflect time, Lynch and Conery calculated that duplicated genes are lost exponentially with time and are nonfunctionalized by the time silent sites have diverged by only a few percent.6
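To make 'lost exponentially with time' concrete, here is a minimal Python sketch. The roughly four-million-year half-life is an assumed, illustrative figure in the spirit of Lynch and Conery's estimates, not a number taken from their paper:

import math

HALF_LIFE_MY = 4.0                  # assumed half-life of a duplicate, in million years
tau = HALF_LIFE_MY / math.log(2)    # mean lifetime implied by that half-life

def surviving_fraction(t_my):
    """Fraction of duplicated genes still functional after t_my million years."""
    return math.exp(-t_my / tau)    # exponential survival curve S(t) = e^(-t/tau)

for t in (1, 4, 10, 50):
    print(f"after {t:>2} My: {surviving_fraction(t):.2%} of duplicates remain")

On these assumptions, only about 0.02% of duplicates would remain functional after 50 million years, which is why nonfunctionalization, not neofunctionalization, is the expected fate.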
On the other hand, mutations in functioning gene family members are limited by purifying selection. In paralogous genes that evolutionists believe were created by ancient duplication events, only about 5% of amino acid-changing mutations are able to rise to fixation.6 There is a recent report that mutation rates in gene family members are actually lower than in singletons (genes without paralogs).43 In contrast, differences in amino acid sequences between modern paralogous genes are generally large, e.g. 58% between human α- and β-globins, 28% between human β- and γ-globins, 75% between human β-globin and myoglobin.

Faced with this dilemma, some evolutionists theorized that mutations leading to neofunctionalization must have happened within a brief period of time immediately after duplication (in spite of the fact that the frequent mutations observed in recent duplicates are mostly degenerative).43 Realizing the impossibility of neofunctionalization, Lynch and Conery argued that gene duplication only passively contributes to the generation of biodiversity by building up reproductive barriers as duplicates are silenced stochastically.6 In other words, gene duplication does not produce new genes, because silencing and subsequent degradation of duplicated genes cannot provide new information.

Meanwhile, several other models have been proposed concerning the fate of duplicated genes. One theory states that both the original and duplicated gene copies each lose only part of their function through degenerative mutations (subfunctionalization). If each gene copy retains a different fraction of its original function, the duplicates may complement each other and function together as one gene. If the regulatory elements of duplicated genes subfunctionalize (while the protein-coding regions are somehow spared from degeneration), they may be expressed at different stages/tissues. The theory is known as the duplication-degeneration-complementation (DDC) model.44–46 The DDC model may allow partial preservation of duplicated genes, but it fails to explain the evolution of new genes or new regulatory elements (let alone the complicated mechanisms of tissue/organ-specific regulation; see below under Gene regulation).
Figure 5. Proposed initial coagulation network (a) and proposed intermediate coagulation network after gene duplication (b).75 Line arrows: activation; block arrows: conversion.
Recently, another model, called epigenetic complementation (EC), has been proposed by Rodin and colleagues.47,48 The theory states that if a gene is copied into a different position within the genome, it may be put under the control of a different regulatory environment and therefore expressed in a different tissue or stage of life. Epigenetic silencing mechanisms (such as cytosine methylation) work in such a way that one copy is silenced whenever or wherever the other copy is expressed. According to this model, there is no need for mutation to alter the regulatory elements of the duplicates in order to achieve complementation.

The EC model does not explain the existence of clustered gene families with diverged functions for each member. For example, the linked tadpole and adult globin genes in Xenopus laevis are expressed at different stages of life (figure 2).49–51 But their temporal regulation is difficult to explain with differing epigenetic environments, since the adult genes are sandwiched between tadpole genes. Rather, it can be better accounted for by differences in their regulatory sequences that respond to stage-specific transcription factors.52,53 Similarly, members of the clustered human α-globin gene family are expressed in two stages (embryonic and adult) and the clustered β-globin gene family in three stages (embryonic, fetal, and adult) (figure 2). Again, temporal regulation (especially silencing) is accomplished genetically, rather than epigenetically, via distinct regulatory elements associated with the genes.54–56 Furthermore, there was no change in regulation of the globin genes after the supposed separation of the α and β genes onto different chromosomes in mammals and birds. Both the ζ gene of the α family and the ε gene of the β family are expressed during the embryonic stage in human development, to form the ζ2ε2 tetramer, even though they are on different chromosomes; while the α and β genes are expressed simultaneously in adults.
Like the DDC model, the EC model still depends on mutation and natural selection for neofunctionalization.

Genome complexity
If the evolution-by-gene-duplication theory is correct, then DNA content and gene number should increase proportionately with organism complexity. However, this is not the case (Table 1). For example, the unicellular alga Euglena has a bigger genome than some vertebrate animals such as zebrafish and chicken. Amphibians may have genomes larger than some birds and mammals. The plant Zea mays (corn) has more genomic DNA than does the human species. This phenomenon, known as the C-value paradox, demonstrates that the amount of genomic DNA is certainly not a good index for biological complexity.

Table 1. Genome characteristics of selected species.57–59

Table 1 also shows that the number of genes within a genome does not increase in proportion to the amount of genomic DNA. As a general rule, larger genomes have sparser genes. Prokaryotic genomes are much more compact than eukaryotic genomes, e.g. 89% of the Haemophilus genome consists of protein-coding genes as compared to about 1–1.5% in the human genome. Consequently, the number of genes is an even poorer indicator of genome complexity than haploid DNA content. For example, human beings, with 10^14 cells, have a total gene number comparable to that of Caenorhabditis elegans, which has only 959 somatic cells. Likewise, Drosophila, with its 50,000 cells, has only twice as many genes as the single-celled baker's yeast. In other words, simpler organisms already have DNA content and gene numbers comparable to those of advanced species. Further gene duplication (and mutation) will not help them climb up Darwin's tree of life.
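The compactness contrast is easy to quantify. A small sketch, using rounded genome sizes (my own commonly quoted approximations, not figures from Table 1) and the coding fractions given in the text:

GENOMES = {
    # name: (approximate genome size in base pairs, protein-coding fraction)
    "Haemophilus influenzae": (1.8e6, 0.89),   # ~89% coding (from the text)
    "human": (3.2e9, 0.015),                   # ~1-1.5% coding (from the text)
}

for name, (size_bp, coding) in GENOMES.items():
    print(f"{name}: {size_bp / 1e6:8,.0f} Mb total, "
          f"~{size_bp * coding / 1e6:6,.1f} Mb protein-coding")

On these rounded figures the human genome is roughly 1,800 times larger, yet carries only about 30 times as much protein-coding sequence, illustrating how gene density falls as genomes grow.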
Gene regulation
Of course, it is not only the number of cells, but also the types of cells in an organism, that indicates complexity. On the genetic level, differentiation into various cell types is a result of the spatial and temporal regulation of genes. Therefore, the genes for transcription factors, which act as molecular switches in the genome, have much to do with genetic complexity. Prokaryotic genes are generally regulated as a group (polycistronic, i.e. several genes are controlled by one transcription factor) while eukaryotic genes are regulated individually (monocistronic).

Szathmary and associates proposed a mathematical formula to calculate genome complexity in terms of the interactions between genes (usually through their encoded protein products, including transcription factors).60 He borrowed a parameter, connectivity (C), from ecology, which uses the term to describe trophic interactions in food webs:

C = 2L / [N(N − 1)]

L refers to the number of interactions among genes (it originally meant trophic links in ecology), while N refers to the number of genes in a genome (originally the number of species in an ecosystem). C is thus the fraction of all possible pairwise interactions that are actually realized.
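As a quick illustration of the formula (my own sketch, not Szathmary's code), consider a hypothetical five-gene network wired as a simple transcription-factor cascade:

def connectivity(L, N):
    """C = 2L / [N(N - 1)]: fraction of possible pairwise interactions realized."""
    return 2 * L / (N * (N - 1))

# Hypothetical cascade A -> B -> C -> D -> E: five genes, four interactions.
print(connectivity(L=4, N=5))   # 0.4, i.e. 40% of possible pairings are used

Adding a sixth gene with a single link raises N faster than L and so lowers C, whereas adding regulatory links between existing genes raises C; this is why regulation hierarchies, not raw gene counts, dominate the measure.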
In Szathmary's equation, the most important determinant of the value of C is the number of levels constituting a regulation hierarchy. In ecosystems, adding trophic levels generates more connectivity than increasing the number of species. Like a food chain, a gene regulation pathway can have multiple levels of interactions, whereby upstream transcription factors regulate downstream transcription factors.

The concept of irreducible complexity61 applies to gene regulation systems. An irreducibly complex system is one in which all the essential parts must be present at the same time, and thus could not have been built up slowly over millions of years in a step-wise Darwinian fashion. In order for a gene regulation unit to function, many genetic elements, including trans-acting elements that encode the transcription factors, cis-acting elements that respond to the transcription factors, and the structural genes, have to be present simultaneously. Although there are examples of functional overlaps between pathways, multiple unique elements are usually required for each pathway. Knocking out any of the elements will frequently result in dysfunction, even loss of life.

In the simplest case, many viruses have three sets of genes regulated as a cascade (figure 3).
The immediate-early (α) genes have promoter elements (binding sites for RNA polymerase or some transcription factors) similar to those of the host cell and are transcribed by a host cell RNA polymerase. The products of the immediate-early genes are mostly transcription factors that interact with the cis-acting regulatory elements (promoter/enhancer) of the early (β) genes. The early gene products, in their turn, activate the late (γ) genes by interacting with their cis-acting elements. The early genes also encode enzymes to replicate the viral DNA, so that the late genes are multiplied before their expression, allowing for rapid accumulation of late gene products toward the end of infection. This scenario enables the virus to divert the resources of the host cell to the production of new viruses effectively.
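The all-or-nothing character of such a cascade can be seen in a toy dependency model (my own sketch, not from the article): remove any one component and late-gene expression fails.

def late_genes_expressed(parts):
    """Toy three-stage cascade: each stage requires the one before it."""
    immediate_early = parts["host_polymerase"] and parts["ie_promoter"]
    early = immediate_early and parts["early_cis_elements"]   # needs IE products
    late = early and parts["late_cis_elements"]               # needs early products
    return late

parts = {"host_polymerase": True, "ie_promoter": True,
         "early_cis_elements": True, "late_cis_elements": True}

print(late_genes_expressed(parts))        # True: the intact cascade works
for name in parts:                        # knock out each component in turn
    knockout = dict(parts, **{name: False})
    print(f"without {name}: {late_genes_expressed(knockout)}")   # always False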
A specific example of a regulation network is the major immediate-early gene (mIE) of the human cytomegalovirus (HCMV), which encodes two major products, IE1 and IE2, by alternative splicing (figure 4). The two proteins act synergistically to activate the early genes. Adjacent to the gene is a 1.1-kb cis-regulatory sequence called the major immediate-early enhancer-promoter (MIEP), which contains concentrated binding sites for multiple cellular transcription factors. One of the products of the mIE gene, IE1, functions as an autoregulatory trans-activator that recruits a cellular protein, NF-κB, which binds to the enhancer and activates transcription. The IE2 product of the gene, on the other hand, represses the mIE gene by binding to a cis-repression sequence (crs, see figure 4).62 The virus also carries several other viral proteins into the host cell for effective transcription of mIE. Among these are ppUL35 and pp71, which interact with each other in the infected cell.63,64 Meanwhile, pp71 interacts with a cellular protein, hDaxx, which is required for mIE transcription.65

Because the viral genome is relatively small and easy to manipulate, HCMV provides a good model in which to study the effects of knocking out a gene from the genome. Deletion of the sequences that encode IE2, or the proximal portion of the enhancer, from the HCMV genome completely inactivates the virus.66,67 Deletion of any of the genes that encode IE1, pp71, or ppUL35 renders the virus incapable of replication in vitro at low multiplicity of infection (MOI), which resembles natural human infection.68–70 All these regulatory factors have to be present and functional at the same time for HCMV to survive (if it cannot replicate it becomes extinct). Virus genomes are far simpler in the complexity of their regulation than those of prokaryotes and eukaryotes; if even these simple viral regulatory systems are irreducibly complex, the far more elaborate regulatory networks of cellular organisms must be all the more so.

For evolution to have occurred via gene duplication, both the gene and its cis-regulatory elements have to be duplicated simultaneously. Furthermore, since gene family members often have distinctly different expression patterns, both the gene and the cis-regulatory elements have to mutate concertedly in order to confer a selective advantage to the organism. For example, the ζ and ε globins have to acquire higher oxygen affinity than the α and β globins in order for the embryonic hemoglobin tetramer ζ2ε2 to extract oxygen from the maternal α2β2 tetramers. Meanwhile, the regulatory elements of the embryonic and adult globins have to develop binding affinity for the transcription factors expressed during their respective developmental stages. Most importantly, a delicate globin switching mechanism, known to involve numerous trans-acting factors and multiple levels of regulation, has to be developed. In the case of the human β-like globin switching, which is the best understood, some of these factors are universal, while others are erythroid-specific.54–56,71 Deletion of the regulatory elements or of a member of the gene family will result in thalassemia.

Another example of clustered gene families whose expression follows a temporal pattern is the immunoglobulin heavy chain family produced by B lymphocytes. There are five classes, and each has properties that cannot be replaced by the others. All B lymphocytes start by secreting IgM and switch to IgG, IgE, or IgA within a few days via a complex switching mechanism.72–74 The most important aspect of class switching is the targeting of DNA recombination enzymes to specific sites. Gene duplication theory would require coordinated mutations in the structural genes and the cis-regulatory elements, and a unique recombination mechanism different from the known mechanisms.

Michael Behe used the blood clotting factors to illustrate irreducible complexity.61 Dozens of proteins activate or inhibit each other in the blood coagulation and subsequent clot-dissolving pathways. Accidental deletion of factors leads to diseases such as hemophilia. Since many factors share similar functional domains, they are thought to have evolved by ancient gene duplication events, including polyploidy during the Cambrian explosion.75–77 However, these duplications have to be followed by coordinated mutations that work just right. A proposed functional intermediate blood clotting pathway75 in figure 5 shows how much coordinated change is required.
Conclusion
The majority of gene duplications are meiotic or mitotic aberrations, resulting in malformations or diseases. Plants can tolerate duplications, especially polyploidy, better than animals due to differences in their styles of reproduction. To maintain genomic stability, all cells have built-in mechanisms to silence duplicated genes, after which they become subject to degenerative mutations. Clusters of identical genes need complicated mechanisms to prevent diversification in order for them to work in unison. Likewise, gene families whose members perform distinct functions are maintained by purifying selection. While duplication may alter the number of members in gene families, it is not their ultimate origin. Current models explaining the preservation and neofunctionalization of duplicated genes encounter obstacles one way or the other. Evolution by gene duplication predicts a proportional increase in genome size with organism complexity, but this is contradicted by the evidence. It is not genome size but intergenic regulatory sequences and gene regulation hierarchies that determine complexity. Gene regulation networks are irreducibly complex and constitute an insurmountable barrier for the theory.
Does gene duplication provide the engine for evolution?
by Jerry Bergman
Proponents of the gene-duplication hypothesis of evolution argue that a mutation can cause the duplication of a gene, allowing one copy to mutate and evolve to perform a novel function while the other copy continues to perform the original gene's function. Gene duplication is now widely believed by Darwinists to be the main source of all new genes. A review of the evidence shows that there are numerous problems and contradictions in this theory, and the empirical evidence indicates that gene duplication has a role in variation within kinds but not in evolution. Darwinists therefore have nothing more to go on than to depend heavily upon extrapolations from gene similarities: a circular argument founded upon the assumption of evolution, and yet another example of evolutionary story telling.
The adverse effects of gene duplication, such as Down's syndrome, are well known. Although the methodology is available, evidence of functionally useful genes as a result of duplication is yet to be documented.
One of biology's greatest mysteries is how an organism as simple as a one-celled bacterium could give rise to something as complicated as a human.1 How life evolved from a few primordial genes to the tens of thousands of genes in higher organisms is still a major issue in Darwinism. The current primary hypothesis is that it occurred via gene duplication.2–6 Shanks concluded that duplication is the way in which organisms acquire new genes: 'They do not appear by magic; they appear as the result of duplication.'7 Ernst Mayr, one of the most respected Darwinists of the 20th century, agrees, saying:

'Such a new gene is called a paralogous gene. At first, it will have the same function as its sister gene. However, it will usually evolve by having its own mutations and in due time it may acquire functions that differ from those of its sister gene. The original gene, however, will also evolve, and such direct descendants of the original gene are called orthologous genes.'8

Ohno goes further, concluding that gene duplication is the only means by which a new gene can arise (emphasis mine), a view that Li concludes is largely valid.9 Furthermore, Ohno argues that not just genes but whole genomes have been duplicated in the past, causing great leaps in evolution, such as the transition from invertebrates to vertebrates, '[which] could occur only if whole genomes were duplicated'. Kellis et al. agree that whole-genome duplication followed by massive gene loss and specialization has long been postulated as a powerful mechanism of evolutionary innovation.10,11 Evolution by gene duplication is a form of exaptation.12–14 Exaptation is the putative evolutionary process by which a structure that evolved for some other purpose is reassigned to its current role.
Evidence for gene duplication
Gene duplication does occur. For example, chromosomal recombination can result in the loss of a gene on one chromosome and the gain of an extra copy on the sister chromosome. Gene duplication can involve not only whole genes, but also parts of genes, several genes, parts of a chromosome, or even entire chromosomes. All of these conditions are well known because they are important causes of disease (including cancer) and can even cause death. Eakin and Behringer conclude:

'Spontaneous duplication of the mammalian genome occurs in approximately 1% of fertilizations. Although one or more whole genome duplications are believed to have influenced vertebrate evolution, polyploidy of contemporary mammals is generally incompatible with normal development and function of all but a few tissues. Most often, divergence of ploidy from the diploid (2n) norm results in a disease state.'15

Li has noted that polyploidy (having more chromosomes than the usual diploid number) is likely to cause a severe imbalance in gene product, and the chance of such duplications being incorporated into the population is small.16 He concludes that, for both vertebrates and invertebrates, only when single genes, or a few genes, are duplicated is the possibility to evolve new genes created.

The gene-duplication idea has been researched for more than 30 years. Although first discussed by Haldane in 1932 and Muller in 1935, it was not discussed in detail until 1970, in Susumu Ohno's book, Evolution by Gene Duplication.17 When Ohno proposed the idea, many of his colleagues considered his proposal outrageous.10 Gene duplication could not be evaluated experimentally, though, until the development of molecular biology techniques. Even now, the primary evidence for gene duplication having a role in evolution must be inferred from gene similarity (i.e. an argument from homology). In the words of Hurles:

'The primary evidence that duplication has played a vital role in the evolution of new gene functions is the widespread existence of gene families. Members of a gene family that share a common ancestor as a result of a duplication event are denoted as being paralogous, distinguishing them from orthologous genes in different genomes, which share a common ancestor as a result of a speciation event. Paralogous genes can often be found clustered within a genome, although dispersed paralogues, often with more diverse functions, are also common.'18

Because two genes are similar, though, does not prove that one was produced as a result of duplication. The ideal method to prove the origin of functionally useful genes as a result of gene duplication would be to use the same techniques that have been used to prove the adverse effects of gene duplication. A child with an abnormality such as Down's syndrome (trisomy 21) is studied for genetic differences compared to the population as a whole and, especially, compared to his or her parents. If neither parent has a trisomy 21, and the cause, an extra chromosome 21, is determined to be a result of non-disjunction, it can be concluded that gene duplication has caused the abnormality. In the opposite case, if a child with an exceptional ability is determined to have a gene not found in his parents, and genetic studies of the family's genetic history lend evidence of gene duplication and mutations in the child's genetic inheritance, this is powerful evidence for gene duplication having produced the advantageous trait. This method can be used to trace the process for several generations so as to determine cases that involve more than one mutation. So far, however, no one seems to have done this research, or, if they have, the results have not supported the gene duplication theory and were not published.
Chromosome doubling in plants
Chromosome abnormalities, such as triploidy, are usually harmful in most animals, especially higher animals. Conversely, polyploidy in plants is very common and can, in many circumstances, benefit the plant, although few researchers argue that it plays a significant role in large scale evolution.19 Some evidence exists that polyploidy is a mechanism that produces variety within created kinds, similar to the effects of crossing over that occurs during meiosis. The specific effects of polyploidy depend on the environment and the plant. Polyploidy increases cell size, causing a reduction of the surface-to-volume ratio that can reduce the rate of some cell functions, including metabolism and growth. Conversely, some polyploids are more tolerant to drought and nutrient-deficient soils. In addition, some polyploids have greater resistance to pests and pathogens.20 However, in all of these cases, a fitness cost exists, meaning that in many environments polyploidy is a disadvantage.

Much more research is needed for a proper understanding of plant polyploidy in order to determine under what specific conditions it is harmful and, conversely, under what specific conditions it is beneficial. As its biological function seems to be primarily to produce variety, it is not normally lethal (or even regularly lethal), as are most examples of animal polyploidy.

Some invertebrates can tolerate polyploidy. Male bees, for example, have a haploid number of chromosomes and females a diploid number. This does not cause the females to evolve faster, however, as the gene duplication theory might predict. In the rare cases of polyploidy in vertebrates, most examples involve unusual species that demonstrate a parthenogenetic mode of reproduction, lack heteromorphic sex chromosomes, or have an environmentally induced sex-determining system.21

Artificial genome duplication for experimental purposes has been developed in mice, but it has not provided any evidence for evolution because it is lethal:

'The production of tetraploid (4n) embryos has become a common experimental manipulation in the mouse. Although development of tetraploid mice has generally not been observed beyond mid-gestation [i.e. it is fatal], tetraploid:diploid (4n:2n) chimeras are widely used as a method for rescuing extra-embryonic defects [i.e. a genetic defect that is normally fatal can be artificially made to survive in the chimera].'22
Problems with the gene-duplication theory
The statistical challenge
Statistical evaluation of the predictions of the gene duplication theory does not appear to be favourable to it. For example, the theory predicts a positive correlation between organismal complexity and gene number, genome size and/or chromosome number. All of these predictions are contradicted by the evidence.

In regard to gene number, humans have about 25,000 genes,23 while rice has 50,000.24 In terms of genome size, the largest known genome does not occur in man, but rather in a bacterium! Epulopiscium fishelsoni carries 25 times as much DNA as a human cell, and one of its genes has been duplicated 85,000 times, yet it is still a bacterium.25 In terms of chromosome number, the descending rank order of diploid numbers for a selection of animals is as follows: Cambarus clarkii (a crayfish) 200, dog 78, chicken 78, human 46, Xenopus laevis (South African clawed frog) 36, Drosophila melanogaster (fruit fly) 8, Myrmecia pilosula (an ant) 2. These results do not fit the predictions of the gene duplication theory; perhaps they imply that flying on your own wings or in airplanes (fruit fly and human, respectively) needs less chromosomal input than lying around in swamps (frog and crayfish, respectively).

Another statistical challenge has been noted by evolutionist genetics professor Steve Jones, who concluded that an inverse relationship exists between the amount of DNA on one hand and, on the other, both lethargic lifestyles and the speed at which organisms can evolve: the more DNA, the slower it is able to evolve. It takes a great deal of energy and resources to duplicate DNA, and the less of it an organism has, the faster it can reproduce (and the more efficient it is). Jones notes that all weeds have small genomes, while more established plants are packed with DNA and can take a month to make a single egg cell.26 Another example Jones cites is lungfish, which are stuffed with DNA (most of it with no apparent function) and whose evolution has stalled altogether … bacteria are speedy and have no excess genetic material, while salamanders, torpid as they are, are filled with DNA.26 In his view, natural selection selects against gene duplication.
The evo-devo challenge
Male bees have a haploid number of chromosomes, whereas female bees are diploid. This, however, does not cause females to evolve faster, as predicted by gene duplication theory.

An important alternative to the Darwinists' exclusive focus on genes is emerging in evo-devo (evolutionary development theory). Its proponents claim (with a great deal of experimental evidence behind them) that the content of the genome is not the primary determinant of identity; it is the epigenetic control system that decides how the genes are used. A surprisingly small number of genes, the 'tool kit' genes, are the primary components for building all animals, 'and these genes emerged before the Cambrian explosion' [emphasis added].27 That means the essential genes have not changed significantly over time, contradicting the central claim of neo-Darwinism. The function of these genes can be compared to keys on a piano keyboard. The kind of music that is played (i.e. whether an embryo turns into a man or a mouse) is determined, not so much by the keys themselves, but by the player who strikes the keys and by the musical score that the player follows. If this is true, then arguments about gene duplication are irrelevant, because evolution occurs somewhere else (i.e. in the playing and in the musical score).
The functional challenge
Because whole genome duplication in animals is usually lethal, Ohno originally concluded that only two whole genome duplications had occurred throughout history; later he argued that a total of three had occurred.28 But Darwinists have admitted that even the process of single gene duplication is poorly understood. Lynch and Conery note that, although gene duplication has generally been viewed as a necessary source of material for the origin of evolutionary novelties, the rates of origin, loss, and preservation of gene duplicates are not well understood.29

Behe and Snoke have pointed out that evolutionists must assume that multiple mutation events are required to produce a new functional gene, and each of the mutations must not be deleted until the gene has evolved to the degree that positive selection occurs.30 Meanwhile, however, a duplicated gene may produce defective proteins that can be toxic or fatal, or, at the least, will tax the cell's resources and waste amino acids and energy. Because of this, natural selection acts on gene duplications, most often by deleting them from the gene pool or by degrading them into non-functional pseudogenes. This is because fully functional duplicated genes, in combination with the corresponding parent gene, produce abnormally abundant quantities of transcripts. This over-expression often alters the fragile molecular balance of gene products on a cellular level, ultimately resulting in deleterious phenotypic consequences.31

Zhang, in a study of gene duplication, concluded that many duplicated genes become degenerate, nonfunctional pseudogenes and, in only rare cases, a new function may evolve, as is believed to have occurred in the douc langur monkey.32 These langurs have two copies of an RNA-degrading enzyme gene, while other monkeys have only one copy. The extra copy aids the langur in digesting its specialized diet of leaves. Pseudogenes are considered by some to be damaged genes, and by others a source of new genes,33 and recent work suggests that they may be functional.10

Yet another functional problem, noted by geneticist Manfred Schartl, is that it would be very difficult for the first tetraploid fish (those with four rather than the usual two copies of each chromosome) to engage in sexual reproduction.28
Although the globin gene family is the most commonly cited example of evolution by gene duplication, there is no evidence to support this. Moreover, it is known that the various globin variants of hemoglobin are designed to meet the differing demands for oxygen metabolism during the various stages of embryological, fetal and neonatal (and later) development.

Another putative mechanism is partial duplication, which results in a gene mosaic. This condition, called a patchwork gene, often consists of several different regions that are similar to other genes. Because of this similarity, it is assumed that the gene segments haphazardly combined until a rare combination occurred that was beneficial, so that this gene was selected. The most common hypothetical example is the LDL (Low-Density Lipoprotein) receptor. This relationship is hypothesized because part of the LDL receptor is similar to the epidermal growth factor hormone. Some theorize that this part of the gene evolved from a partial duplication of the epidermal growth factor gene. But how was the function of the LDL receptor maintained until this gene evolved? Without functional LDL receptors, a cell cannot effectively take in lipids, causing not only a supply deficiency in the cell, but also excess LDL in the blood, resulting in vascular problems from stroke, to embolisms, to heart disease. An example is hypercholesterolemia, a disease caused by defective lipid receptors. The victims often have strokes and heart attacks before their teens, even if on a low-fat diet.
Gene Families?
A group of genes that is closely related and theorized to have evolved by successive duplication is called a gene family, and an even larger group of genes that has structural similarities is titled a gene superfamily. No evidence of ancient genes exists to empirically document the theorized evolution of any gene family or superfamily. Instead, a gene family is determined merely by making comparisons among existing genes, noting those that are similar. But any arbitrary collection of items (words, ideas, or physical objects) can be grouped together to form families and superfamilies, and no exception exists for genes. An automobile and a lawnmower, for example, both belong to the four-wheeled machine family, but this does not necessarily imply common ancestry. We are therefore not compelled to believe that, because some genes have similar components, they evolved from a common ancestor.

The first genes speculated to have evolved as a result of gene duplication were the alpha and beta hemoglobin chains used to carry oxygen in erythrocytes.9 The globin gene family is now the most commonly cited example of evolution by gene duplication. Myoglobin, a monomeric protein found mainly in muscle tissue where it serves as an intracellular storage site for oxygen, is hypothesized to have evolved into the tetrameric hemoglobin. Hemoglobin consists of two dimers, each one containing an alpha globin and a non-alpha globin. The ancestral non-alpha globin, called beta globin, supposedly gave rise to the modern gamma, delta, and epsilon globin genes, while duplication of the alpha globin is said to have produced the zeta globin gene. These globin variants are all used during different stages of embryological, fetal and neonatal (and later) development. The alpha, zeta and epsilon globin chains are produced in the early embryo and, during about the third month, the latter chains are replaced by the gamma chain and then later by the adult beta or delta chains at birth.

But all of this supposed evolution is based on nothing more than speculation. In real life, the multiple uses of globin molecules in oxygen metabolism are no more an indicator of blind replication than is the multiple use of cogwheels in a clockwork mechanism. Just as each cogwheel is specifically structured and located to do a particular job, is functionally integrated with its fellows to optimally do that job, and is precisely regulated to do it at the right time, so are the globin molecules designed to meet the differing demands for oxygen metabolism during the development of the organism. The site of hemoglobin synthesis also changes from yolk sac to liver to bone marrow during development, so differing environments and transport systems are also involved. Disruption to hemoglobin synthesis leads to a wide range of diseases, and neo-Darwinists have been unable to explain how development could have proceeded successfully before the complex system was all in place.

Another example of duplication is believed to be the evolution of the human Major Histocompatibility Complex (MHC). But further study has likewise disputed some of these claims: 'Regions that are paralogous to the MHC on chromosomes 1, 9, and 19 have been proposed to result from ancient chromosomal duplications, although this has been disputed based on phylogenetic analysis.'34
The gene duplication rate problem

Is gene duplication common enough to provide an adequate source for evolution? The proportion of duplicated genes can be as high as 17% in some bacteria and 65% in the plant Arabidopsis, but these are extreme examples.32 One empirical study, by Lynch and Conery, used steady-state demographic techniques to estimate the number of duplicate genes. This study evaluated seven completely sequenced genomes. From their research, they estimated the average rate of duplication of a eukaryotic gene to be on the order of 0.01 per gene per million years, which is of the same order of magnitude as the mutation rate per nucleotide site. The researchers concluded from their study that the origin of a new function appears to be a very rare fate for a duplicate gene (emphasis mine).35
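The quoted rate implies, on a back-of-envelope basis (my own arithmetic, with an assumed gene count), that duplicates themselves are not scarce; the bottleneck the authors identify is what happens to them afterwards:

DUPLICATION_RATE = 0.01        # per gene per million years (from the text)
GENE_COUNT = 20_000            # assumed gene count for a human-sized genome

duplicates_per_My = DUPLICATION_RATE * GENE_COUNT
print(f"expected new duplicates per genome: ~{duplicates_per_My:.0f} per million years")
# Plentiful raw duplicates, yet 'the origin of a new function appears to be
# a very rare fate for a duplicate gene'.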
Another study, by Behe and Snoke,30 evaluated gene duplication by using mathematical modeling and published gene-duplication data. Their model assumes the simplest route to produce a new gene function: a duplicated gene that is free from purifying selection and subject to point mutation, and the minimum number of biologically relevant modifications required to create a novel function. Because the minimum number of changes necessary for most new gene functions is greater than one altered amino acid, and the number of changes needed in DNA for each altered amino acid varies between one and three, definitive estimates are difficult to obtain. Nonetheless, a reasonable estimate can be made in order to evaluate the validity of the duplication-mutation model. Behe and Snoke concluded that, even given liberal estimates, fixation of features requiring changes in multiple residues requires both population sizes and numbers of generations so large that they seem prohibitive. They concluded that gene duplication, coupled with point mutations, does not appear to be a promising mechanism for producing new proteins that require more than a single point mutation.
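The scaling behind that conclusion can be sketched crudely (this is my simplification, not Behe and Snoke's actual model): if a new function needs lambda_ specific nucleotide changes in a neutral duplicate, the waiting time grows roughly like the inverse of the mutation rate raised to the power lambda_.

MU = 1e-8            # assumed point mutation rate per site per generation
for lambda_ in (1, 2, 3):
    # Order-of-magnitude scale of (population size x generations) needed for
    # all lambda_ specific changes to co-occur in one gene copy.
    scale = 1 / MU**lambda_
    print(f"lambda = {lambda_}: ~{scale:.0e} organism-generations")

Each extra required residue multiplies the cost by about 10^8, which is why multi-residue features 'seem prohibitive' for realistic populations.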
Standish concludes that the Behe-Snoke paper does not exclude the possibility that 'more complex mechanisms involving larger mutations and/or selection of intermediate states acting on duplicated genes may serve as engines of new gene production. The problem is that these other mechanisms appear to be even more complex and thus less probable than the conceptually simple duplication-point mutation model Behe and Snoke examined. While their paper suggests that other potential mechanisms should be rigorously examined before discarding gene duplication and modification as a potential mechanism of evolution, it clearly demonstrates that even the most superficially reasonable sounding Darwinian mechanisms should be carefully evaluated before they are accepted as truly reasonable' [emphases added].36

This study (and others) indicates that gene duplication does not appear to provide Darwinists with a significant source of new genes. Although many, if not most, genes are assumed to have arisen by gene duplication, a clear lack of evidence exists for gene duplication as the source of specific genes.12 Another major problem is distinguishing adaptations from exaptations. In other words, how do we know a gene resulted from duplication, and not by some other means such as independent evolution?37
The indefinite regress problem
Gene duplication is a supposed method of exaptation: the takeover of an existing function to serve another purpose. Gould believed exaptation was so important that 'the defining notion of quirky functional shift [i.e. exaptation] might almost be equated with evolutionary change itself … in textbook parlance, the origin of evolutionary novelties'.38 But this kind of argument is fundamentally flawed. If all evolutionary novelties arise from something else that was itself exapted from something else, then an indefinite regress results. The problem with an indefinite regress is that explanation A depends on an earlier explanation B that you have not given, and explanation B itself depends upon an earlier explanation C that you likewise have not given. While you may appear to be explaining something, there is no actual explanatory content; it is no explanation at all.
The conservation problem
Multiple information conservation mechanisms are at work in all living
organisms, ranging from natural selection eliminating the unfit, through
various reproductive and chromosomal controls, to error correction
routines and DNA repair mechanisms, including (it appears) restoration
from non-DNA sources. As a result, many, if not most, genes are
evolutionarily conserved, meaning that they are very similar in many
unrelated organisms, both simple and complex, modern and ancient.
Many genes in the assumed earliest forms of life are very similar to
those in the most advanced forms. These facts argue strongly against
gene duplication as a mechanism of evolution, because they indicate
that most genes were optimally functional from the beginning.
Conclusions
The proposition that large scale evolution has occurred via gene duplication is contradicted by numerous lines of evidence. Little evidence currently exists to support the belief that gene duplication is a significant source of new genes, supporting one University of South Carolina molecular evolutionist's conclusion that scientists 'can not prove that [genome duplication] didn't happen, but [if it did], it didn't have a major impact. For me, it's a dead issue.'10 It is also clear that the evidence for gene duplication at present is totally inferential, not empirical or experimental. Chromosome duplication can produce useable variety (but only within what are most likely created kinds) in plants and invertebrates, and single gene duplication appears to do likewise in rare cases in vertebrates, but otherwise gene duplication generally causes disease and deformity. The existing experimental evidence does not support gene duplication as a source of new genes, at least for populations of fewer than one billion.30 According to Hughes, 'Everything we've looked at [fails to] support the hypothesis.'39 Darwinists promote gene duplication as an important means of evolution, not because of the evidence, but because they see no other viable mechanism to produce the required large number of new functional genes to turn a microbe into a microbiologist. In other words, evolution by gene duplication is yet another example of just-so story-telling.
Dawkins and the origin of genetic information
Is it legitimate to demand of evolutionists an explanation for the origin of genetic information?
Some amoebae have a huge amount of DNA in each cell, much more than humans. Does that mean they are more biologically complex? Hardly. It just shows we have a lot to learn yet.

The leading anti-Christian and eugenicist Clinton R. Dawkins is not without his defenders. One questions us on the issue of genetic information, a question Dawkins had immense difficulties with. Don Batten responds with instructive points about the latest discoveries about information and meta-information (information about information), as well as pointing out the confusion between the amount of DNA and the amount of information it holds.
I would like to respond to the Skeptics choke on frog article regarding, among others, Richard Dawkins.
The idea that biological complexity equates to genetic complexity is completely wrong. Charging evolutionists to describe a mutation which would add information to an organism's genome is an irrelevant question. In fact, there ARE actually such mutations, which will increase the volume of a genome and even add genes (they are due to the activity of some viruses and of translocons, and to chromosomal recombination).

However, the evolution of organisms from simple to complex has nothing to do with how many genes an organism has or how large its genome is. In science, we even have a name for the fact that the number of a species' genes has no relation to the relative complexity of that organism: it is called the C-value paradox. As an example, humans have approximately 20,000 to 25,000 genes. Rice has somewhere around 37,000 genes. If an organism's evolutionary complexity actually had anything to do with how large its genome is, or even with how many protein-coding genes it contains, then rice would be considered the paragon of evolutionary AND creationary mechanisms.
Nicole
USA
Dear Nicole,
Thanks for your query, which does afford us the opportunity to correct a misconception. You are correct: the complexity of an organism is not to be measured simply by counting the number of protein-coding genes. Life is far more complex than that. I don't think we have ever suggested that the status of an organism is to be measured by the number of such genes and that therefore humans would necessarily have the most.

However, the evolution of a microbe into a complex organism such as rice or a human does require the addition of new genes. For example, the simplest single-celled organism has about 500 protein-coding genes and humans have over 20,000. So, if we began as microbes in some primordial soup, as evolutionary theory posits, then a lot of new genes had to be added by mutations, the only game in town for the evolutionist. There have to be a lot of mutations that add such new genes, not just twiddle with the existing ones. For example, the genes that make nerves and all the enzymes that enable nerves to operate are absent from microbes. They have to be created de novo if we evolved from them. There are many gene families in humans that are completely missing from microbes, so there has to be a viable mechanism for adding this genetic information if evolution is to be feasible. And mutations (accidental changes) of one form or other are the only mechanism for Darwinism.

So the question to Richard Dawkins was a legitimate one. Indeed, Dawkins himself says that it is the information in living things that evolution has to explain. He candidly admits this in the rest of the interview that is included on the documentary. In The Blind Watchmaker Dawkins clearly outlines the problem of information in living things. Of course, being a true believer in evolution by necessity of his atheism, he has to believe that mutations and natural selection can do the job, and he spends the rest of the book with various story-telling ploys to make a case for the adequacy of evolution to create the required information.

Is rice more complex than a human because it has more protein-coding genes? It might be, because it is an autotroph, meaning that it is capable of creating all its own energy-rich biochemical building blocks using the energy from sunlight (in photosynthesis). In contrast, humans are heterotrophs, ultimately depending on plants to live. We are incapable of making many of the complex biochemicals needed for life; we get them from plants. There are many genes involved in photosynthesis and the biosynthesis of the essential amino acids, for example, that we do not have (the origin of photosynthesis is another conundrum for evolutionists; see Shining light on the evolution of photosynthesis and Green power: God's solar power plants amaze chemists). But we also have many genes that rice does not have: ones for making muscle fibres, nerves, hemoglobin, etc. So rice and humans are not really comparable; it's like comparing apples with jellyfish.

Comparison of rice with humans is a red herring. No one has proposed that humans evolved from rice, or vice versa. Evolutionists (such as Dawkins) readily admit that the evolution of humans (and rice) involved the addition of a lot of new genetic information to simpler organisms that supposedly made themselves in the beginning (another unanswerable problem for evolutionists; see Origin of Life Q&A). If someone wants to argue that all the information needed to make a human was there in the beginning, it just makes the origin of life even more immensely difficult to explain!
Life is more than genes
But life is more than genes. The very concept of a gene as the basic unit of heredity that controls everything is being seriously questioned. The ENCODE project in particular has blown away the idea that life is just about protein-coding genes, although there were prior indications that this was incorrect (see No joy for junkies). In short, the rest of the DNA of humans and other complex organisms is not junk, but incredibly important. Basically, it controls how the genes work; for example, why it is that hemoglobin is only produced in red blood cells when all cells have the genes for hemoglobin protein. And it controls the incredible sequencing of genes so that orderly embryo development occurs. See also Meta-information: An impossible conundrum for evolution.

So life is much more than genes. When we take into account the total DNA, rice has 466 million base pairs and humans have 3 billion (over six times as much), which might be better for our egos than the comparison of the number of genes. However, even the number of base pairs, or the picograms of DNA per nucleus, is no adequate measure of genetic complexity. The smallest flowering plant genome is only about 0.1 picograms (flowering plant range 0.101–127.0 pg), whereas the largest alga is 19.6 picograms (algal range 0.01–19.6 pg).1 Clearly, flowering plants are much more complex than algae, so there is more to complexity than a simple comparison of genome sizes.

Protozoa (for example amoebae) range from tiny to huge in their nuclear DNA amounts, some of them greatly exceeding the human number of base pairs.2 It is not really understood why this is so. It could have something to do with cell size, where organisms with large cells have a form of endoreduplication, where the DNA multiplies up to be able to provide enough mRNA transcripts to supply the large cell's protein requirements. Specialized, enlarged plant cells do this (I have measured the relative amounts of DNA in the nuclei of such cells using microfluorimetry). Actual genome decoding does not suggest that protozoan genomes are large in terms of numbers of different genes, although at present the largest amoeba genomes have not been sequenced. Typical sequenced genomes of protozoans seem to be of the order of about 25 million base pairs.3 I expect that the large protozoan genomes, when they are sequenced, will reveal large-scale duplication of genes, such that the total number of different genes will be of the same order as in other protozoans. In support of this, Amoeba dubia, the one with the largest reported amount of DNA, is the largest sized amoeba cell known, being visible to the naked eye, up to a millimetre in length. This compares with 0.009 mm diameter for a human red blood cell. Considering that the volume of a cell scales with more than the square of its linear size, the volume of Amoeba dubia cells is huge (~10,000 times) compared to human cells. This almost certainly has something to do with the huge amount of DNA it contains.
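The arithmetic behind these comparisons is simple enough to check (my own sketch; the conversion 1 pg ≈ 0.978 × 10^9 bp is a standard approximation, not from the article):

PG_TO_BP = 0.978e9           # standard approximation: 1 pg of DNA ≈ 0.978 Gbp
print(f"19.6 pg (largest algal genome) ≈ {19.6 * PG_TO_BP / 1e9:.1f} Gbp")

rice_bp, human_bp = 466e6, 3e9
print(f"human/rice total DNA: {human_bp / rice_bp:.1f}x")

amoeba_length_mm = 1.0       # Amoeba dubia, visible to the naked eye
rbc_diameter_mm = 0.009      # human red blood cell
ratio = amoeba_length_mm / rbc_diameter_mm
print(f"linear size ratio: ~{ratio:.0f}x")
print(f"squared (the text's conservative scaling): ~{ratio**2:,.0f}x volume")

The squared scaling already gives the ~10,000-fold volume difference quoted above; a strict cube law for a spherical cell would make it larger still.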
Your statement,
The idea that biological complexity equates to genetic complexity is completely wrong,
begs the question: what do you mean by 'biological complexity' and what do you mean by 'genetic complexity'? As we have seen, genetic complexity is far more than just counting the number of protein-coding genes. Much of it is only just beginning to be discovered.
Adding information?
In fact, there ARE actually such mutations, which will increase the volume of a genome and even add genes (they are due
to the activity of some viruses and of translocons, and to chromosomal recombination).
I think you meant to say transposons, not translocons, which are quite different (and a huge problem in themselves for evolution to explain, but that's another story). Actually, movement of DNA by transposons or viruses does not create any new information; it only transfers it around, as we have explained before; this does not explain the origin of the genetic information. But there is now strong evidence that transposons are not parasitic DNA or endogenous retroviruses at all, willy-nilly shifting chunks of DNA around at random, but are involved in the regulation of gene activity during embryo development, for example (see the No joy for junkies article).

Recombination during meiosis also does not create new information; it just selects from the existing alleles, giving different combinations in the offspring. Darwin made the mistake of thinking that variety in offspring meant new features arising spontaneously, whereas we now know, following the pioneering work of the famous creationist scientist Gregor Mendel, that the variety is due to the recombination of existing genes, not the creation of new ones. See Genetics: no friend of evolution. If Darwin had known what we know about genes and mutations, he might not have become a Darwinist.

Evolutionists also claim that genes can be duplicated and that this is an increase in information. But if you write an essay of 5,000 words and it needs to be 10,000 words, you won't get any credit for photocopying (duplicating) the 5,000 to get the 10,000. That's what evolutionists are claiming when they say that virus transfer or duplication increases information. See also Does gene duplication provide the engine for evolution?

The question to Professor Dawkins was quite legitimate, as he himself readily admits in his voluminous works. Evolution has to explain the origin of the enormous quantities of information in living things (but it can't).
I hope this helps answer your questions.
Sincerely,
WHAT ABOUT JUNK DNA
Junk DNA: evolutionary discards or God's tools?
by Linda K. Walkup
Summary
Junk DNA is thought by evolutionists to be useless DNA left over from past evolutionary permutations. According to the selfish or parasitic DNA theory, this DNA persists only because of its ability to replicate itself, or perhaps because it has randomly mutated into a form advantageous to the cell. The types of junk DNA include introns, pseudogenes, and mobile and repetitive DNAs. But now many of the DNA sequences formerly relegated to the junk pile have begun to obtain new respect for their role in genome structure and function, gene regulation and rapid speciation. On the other hand, there are examples of what seem to be true junk DNAs: sequences that have lost their functions, either to mutational inactivation that could have occurred post-Fall, or by time limits set on their functions. Criteria are presented by which to identify legitimate junk DNA, and to try to decipher the genetic clues of how genomes function now and in the past, when rates of change of genomes may have been very different. The rapid, catastrophic changes in the earth caused by the Flood may also have been mirrored in genomes, as each species had to adapt to post-Flood conditions. A new creationist theory may explain how this rapid diversification came about through the changes caused by repetitive and mobile DNA sequences. The so-called junk DNAs that have perplexed creationists and evolutionary scientists alike may be the very elements that can explain the mechanisms.

The last decade of the 20th century has seen an explosion in research into the structure and function of the DNA in genomes of a wide range of organisms. As of April 2000, the whole genomes, or full DNA complements, of over 600 organisms have been sequenced or mapped.1 The sequence of the fruit fly genome, just completed, has over 130 million base pairs (bp) and is the largest genome sequenced so far.2 The first complete human chromosome has been sequenced,3 and the Human Genome Project expects to complete its work sometime in 2003, as does the Mouse Genome Project. Researchers in the new field of genomics (the comparison of the structures, functions and hypothetical evolutionary relationships of the world's life-forms) are working furiously to deal with the huge inflow of data. Now more than ever, scientists can see at the most basic level the similarities and differences of organisms, and are seeking to understand how the blueprints of cells are decoded and regulated.

A major goal of genomic studies is to understand the role, if any, of the various classes of so-called junk DNA. Junk or selfish DNA is believed to be largely parasitic in nature, persisting in the genomes of higher organisms as evolutionary remnants by its ability to reproduce and spread itself, or perhaps because it has supposedly mutated into a function the cell can use.
Origin of the junk DNA hypothesis
The idea that a large portion of the genomes of eukaryotes*4 is made up of useless evolutionary remnants comes from the
problem known as the c-value paradox, c meaning the haploid* chromosomal DNA content. There is an extraordinary
degree of variation in genome size between different eukaryotes, which does not correlate with organismal complexity or the
numbers of genes that code for proteins. For instance, the newt Triturus cristatus has around six times as much DNA as
humans, who have about 7.5 times as much as the pufferfish Fugu rubripes.5 The c-value between different frog species
can differ by as much as 100-fold.6 Early DNA-RNA hybridisation* studies and recent genome sequencing results have
confirmed that >90% of the DNA of vertebrates does not code for a product. Much of this variation is due to non-coding (i.e.
not producing an RNA or protein product), often very simple,
repeated sequences. With the discovery that many of these
sequences seemed to have arisen from mobile DNAs which are
able to reproduce themselves, the selfish or parasitic DNA
hypothesis was born.7,8 This said that these sequences served no
function in the host organism, but were simply carried on the
genome by their ability to replicate or spread copies of themselves
within and even between genomes.

Plasterk stated it this way when he wrote about transposons*, one of the junk DNA types: 'This ability to replicate is a
sufficient raison d'être for transposons; they have the same reason for living as, say, the readership of Cell: none. They
exist not because they are good, pretty, or intelligent, but because they survive.'9
Just as Plasterk was wrong about our reason for living, he is wrong
about the purposes of these DNA sequences. Recent research has
begun to show that many of these useless-looking sequences do
have a function, and that they may have played a role in
intrabaraminic10 (within-kind) diversification.
Types of junk DNA
There are four major kinds of junk DNA:
- introns, internal segments in genes that are removed at the RNA level;
- pseudogenes, genes inactivated by an insertion or deletion;
- satellite sequences, tandem arrays of short repeats; and
- interspersed repeats, which are longer repetitive sequences mostly derived from mobile DNA elements.

Figure 1. Only portions of a eukaryotic gene code for a protein product.
Introns
After most eukaryotic genes and a very few prokaryotic* (bacterial) genes are transcribed, or copied into RNA, there are
segments that are cut out of the messenger RNA (mRNA) before it is used as a template to make a protein (Figure 1).
Introns in fact form the majority of the sequence of most genes, as was seen when human chromosome 22 was sequenced
(Table 1). Why are these RNA pieces present if they are only to be discarded? Evolutionary theory tries to explain these as
vestigial sequences, or that they are useful only as sites at which recombination can safely take place to reshuffle exons
(coding or protein-making segments) into new proteins or new forms of these proteins. Their ubiquity in eukaryotes argues
that they are not post-Fall aberrations, but designed features.

What then, could these throwaway segments be doing? There
are several possibilities emerging from recent research. One general regulatory role may be to slow down the rate of
translation*, as the splicing* process does take time. Alternative splicing allows greater diversity, as certain exons can be
skipped and spliced out to allow a different protein to be made from the same mRNA, as is seen in some viruses and in the
generation of diversity in antibodies. Another example is the CD6 gene, which is involved in T cell stimulation. Variable
splicing of exons gives rise to at least five different forms of the protein, which allows regulation of its activity.11 Another
observed mechanism by which introns can regulate gene activity is through the binding of the snipped-out intron RNA to
DNA or RNA. There are now a few examples of the role of introns in regulating the genes they are in, as well as other
genes. One interesting example is the lin-4 gene intron from the nematode Caenorhabditis elegans. A developmental control
gene was found to reside in the intron of another gene (Figure 2).12,13 The small RNA encoded by lin-4 binds to the mRNA of
another developmental gene, lin-14, blocking its ability to make protein. The binding site in lin-14 was in another supposedly
useless stretch of RNA, the 3' untranslated region (3' UTR*) found after the last coding region. It was later found that lin-4
RNA also binds to the 3' UTR in another gene in the developmental pathway, lin-28.14 In fact, more and more cases of
3' UTRs performing gene regulatory activities have been observed.15,16 There are examples of protein-encoding genes within
introns of other genes that have been recently discovered. For example, on human chromosome 22, the 61-kilobase (kb)
TIMP3 gene, which is involved in macular degeneration, lies within a 268-kb intron of the large SYN3 gene, and the 8.5-kb
HCF2 gene lies within a 27.5-kb intron of the PIK4CA gene.3
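Conceptually, splicing is just interval selection on the transcript. The toy Python sketch below (invented sequences and coordinates, not the real CD6 or TIMP3 loci) shows how skipping an exon yields a second message, and hence a second protein form, from the same gene:

```python
# Toy model of splicing: introns are cut out of the pre-mRNA, and
# alternative splicing (skipping exon 2) yields a different message.
# Sequences and coordinates are invented for illustration.

def splice(pre_mrna, exons):
    """Join the chosen exon intervals; everything between them (the introns) is discarded."""
    return "".join(pre_mrna[start:end] for start, end in exons)

#           exon 1     intron 1    exon 2     intron 2    exon 3
pre_mrna = "AUGGCU" + "GUAAGU" + "CCAGAU" + "GUAAGU" + "UGGUAA"
exon1, exon2, exon3 = (0, 6), (12, 18), (24, 30)

print(splice(pre_mrna, [exon1, exon2, exon3]))  # AUGGCUCCAGAUUGGUAA (all exons kept)
print(splice(pre_mrna, [exon1, exon3]))         # AUGGCUUGGUAA (exon 2 skipped)
```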
Some introns also play a role in mRNA editing, a process where the A (adenine) residues in the mRNA are changed to G
(guanine).17 Self-complementary* or exon-complementary intron sequences can bind to each other to form a hairpin loop
structure, allowing the sequence of the RNA to be changed after transcription* from the DNA. Thus introns can cause new
messages to arise from a gene without altering its DNA coding sequences.

The most general function of introns may be to stabilize closed chromatin* structures in, and around, genes and their
associated regulatory DNA elements.18,19 An isochore* is an approximately 300-kb segment of DNA whose base pair
composition is uniform above a 3-kb level, for example 67% A-T bp.20 The general ability of an isochore to be transcribed
is dependent on the accessibility of its DNA, i.e. how tightly histones* and other DNA-binding proteins wrap up the DNA.
This is seen as being at least partially dependent on the A-T or G-C bp content of a segment of DNA. Though this content
can be skewed somewhat by the choice of triplet codons* used in the coding DNA (since the code is redundant21), exons
are still constrained in their ability to vary the bp content. The presence of introns throughout genes allows the proper
levels to be maintained, and indeed introns reflect the general isochore type much more closely than the coding regions.
The presence of introns may well be a condition for at least some forms of sectorial repression like superrepression,
where large sections of chromatin are altered
to turn off groups of cell-type-specific genes or developmental genes. It was shown, for example, that the gene for rat
growth hormone, when deprived of its introns, was no longer able to form its normal more condensed structure when
reinserted back into cells.22

Table 1. Types and amounts of DNA sequence classes of the sequenced euchromatin* of human chromosome 22 (after
Dunham et al.3).
1. All other types of interspersed repeats seen, not detailed here.
2. Tandem repeats from 2 to 5 bp in length (microsatellite DNA). It is estimated that most of the remaining DNA not
sequenced is satellite DNA, as tandem repeats are mostly located in the heterochromatin, which was not sequenced.
3. Includes all tandem and interspersed repeat types.

It is important to know whether the specific sequence of an intron is required for its function when constructing
phylogenetic or family trees, or when determining baraminic* placement of an organism. In evolutionary studies, DNA
sequence comparisons are used to try to build phylogenetic trees to trace ancestors to descendants. Since introns are
generally believed to be free from the constraints of functionality when mutations cause changes in their sequence,
introns in a particular gene are often compared between organisms, with the bp differences seen between their sequences
supposedly indicating the degree and time of divergence since they last shared a common ancestor. In some instances, the
assumption that an intron is likely to have mutated freely and extensively during the presumed millions of years of
evolutionary history has proved wrong. Koop and Hood found that the DNA of the T cell receptor complex, a crucial immune
system protein, is 71% identical between humans and mice over a stretch of 98-kb of DNA. This was an unexpected finding,
as only 6% of the region encodes protein, while the rest consists of introns and non-coding regions around the gene. 23 Does
it follow then that we have a recent common ancestor with mice? Since this does not fit in with evolutionary theory, the
authors conclude instead that the region must have specific functions that place constraints on the fixation of mutations. This
illustrates that DNA sequence comparisons to establish evolutionary relationships are not the independent tests that they
are claimed to be. If the data do not support the desired evolutionary theory, ad hoc explanations of altered rates of
mutation, functional constraints, etc., can be brought in to explain away discrepancies.24
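For readers unfamiliar with such comparisons, a figure like '71% identical' is simply the fraction of matching bases between aligned sequences. Here is a minimal sketch of that statistic, using made-up 20-bp sequences rather than the actual T cell receptor data:

```python
# Percent identity between two aligned sequences: the statistic behind
# claims such as '71% identical over 98 kb'. Toy sequences, invented.

def percent_identity(seq_a, seq_b):
    """Percentage of aligned positions carrying the same base."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

human_like = "AGCTTAGCCGATAGCTTACG"
mouse_like = "AGATTAGCCGTTAGCTAACG"
print(f"{percent_identity(human_like, mouse_like):.0f}% identical")  # 85% in this toy case
```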
Another example of selective interpretation of DNA sequence comparison data using introns is the study of an intron in an
important sperm maturation gene on the Y chromosome of humans.25,26 It was hoped that the ancestry of modern humans
could be traced by sequencing this 729-bp intron from 38 different men from different ethnic groups. Surprisingly, all 38
men had exactly the same sequence, which was then interpreted as a recent common ancestor (27,000–270,000 years ago)
for the whole human race, or possibly that the intron had functional constraints on its mutability. This latter premise was
rejected by the authors because the sequence of the same intron in chimp, gorilla and orangutan was progressively more
different. These data would strongly support the young age view that there was a severe bottleneck in the human
population when the Flood reduced the varieties of Y chromosomes to the one shared by the survivors. Apes would not be
expected to have exactly the same sequence as humans, as they are from separate created kind(s). The fact that they do
have a similar intron argues for a function for this sequence, and the intron may have been originally created slightly
different for proper function in an ape versus a human.

Figure 2. Interaction of two junk RNAs regulates a developmental gene.

Thus evidence is mounting to support the important role of introns in gene regulation and chromosome structure, which
would remove 8–15%27 of the junk DNA of the human genome from the trash heap.
Pseudogenes*
Occasionally, located near functional genes or gene families, there are sequences that very closely resemble other functional
genes, but have been inactivated in some way. Some have a mobile element inserted in their open reading frames (ORFs*),
others seem to be processed genes, i.e. they look as though the RNA from another gene has been reverse transcribed
(RNA used as a template to make DNA) and reinserted back into the DNA (Figure 3). A processed pseudogene* thus
precisely lacks the introns, possesses 3'-terminal poly-(A) tracts*, and lacks the upstream promoter* sequence required for
transcription of the corresponding parent gene. Pseudogenes are common in mammals, but virtually absent in
Drosophila.28 Nineteen percent of the coding sequences identified in human chromosome 22 were designated as
pseudogenes, because they had significant similarity to known genes or proteins but had disrupted protein coding reading
frames. 82% appeared to be processed pseudogenes.3 Many pseudogenes have additional mutations in them, presumably
because there is no functional constraint on their mutation. For example, the human beta-tubulin gene family consists of
15–20 members, of which five have these pseudogene hallmarks.29 Some pseudogenes affect gene activity by binding
transcriptional factors that activate the normal gene. Whether this is intentional design or something the organism has
simply adjusted to is difficult to say. Many pseudogenes do seem to fit the profile of true junk DNAs.
Repetitive DNA sequences, including mobile DNA sequences

Repetitive DNA sequences form a substantial fraction of the genomes of many eukaryotes (Table 1, Table 2).30,31 This
class includes satellite DNA (very highly repetitive, tandemly repeated sequences), minisatellite and microsatellite
sequences (moderately repetitive, tandemly repeated sequences), the new megasatellites (moderately repetitive, tandem
repeats of larger size) and transposable or mobile elements (moderately repetitive, dispersed sequences that can move
from site to site; see Table 2). When first discovered, they did not seem to confer any benefit to the host organism, as
their ability to move about the genome and/or cause recombination between different homologous copies has often
resulted in deleterious mutation and disease. We now know that at least some of these sequences carry out important
functions.

Figure 3. Comparison of integrated mobile DNA sequence structures.

Satellite sequences

The functionality of a sequence of 2 or 3 bp repeated a thousand or so times is not immediately apparent. In addition, the
lengths and compositions of these repetitions often vary wildly between species, between organisms of the same species,
or even between cells of the same organism. But greater understanding has come as scientists realize how DNA acts not only
as the information source for the cell, but also as the library in which it is housed.32 It is beginning to be seen that the
dispensability of sequences is not an indicator of their non-functionality, and that in many cases, repetitive sequences tend
to fill functions collectively rather than individually.

Satellite sequences vary in their repeat size and in their array size (Table 2). Microsatellites are the smallest, at a repeat
size of as little as 2 bp, and the newly discovered megasatellite sequences, which actually can contain ORFs, are 4–10 kb
long.33 The actual sequence repeated differs from species to species, and repeats can differ slightly from one another. The
number in an array can vary between individuals, which is why forensic DNA fingerprinting techniques use mini- and
microsatellite differences to identify individuals.
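As an illustration of why repeat number can distinguish individuals, here is a minimal sketch (a hypothetical CA-repeat locus with invented flanking sequence) that measures the longest run of a microsatellite motif:

```python
# Toy DNA fingerprinting idea: individuals differ in how many times a
# microsatellite motif is repeated at a locus. Locus and counts invented.
import re

def repeat_count(sequence, motif="CA"):
    """Number of motif copies in the longest uninterrupted run of the motif."""
    runs = re.findall(f"(?:{motif})+", sequence)
    return max((len(run) // len(motif) for run in runs), default=0)

person_a = "GGTT" + "CA" * 12 + "TTGA"   # 12 CA repeats at this locus
person_b = "GGTT" + "CA" * 17 + "TTGA"   # 17 CA repeats at the same locus
print(repeat_count(person_a), repeat_count(person_b))  # 12 17
```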
Table 2. Types of eukaryotic repetitive DNA sequences.

Functions of satellite sequences
The first recognised function of these types of sequences was in organising the centromeres, the constricted sites on each
chromosome where the chromosomes attach to cellular tethers and are pulled apart during meiosis and mitosis. These
sequences help condense the DNA region they are in into heterochromatin*.

One hypothesis of the collective functionality of
repeat sequences is that long stretches of noncoding sequences act as tethers, permitting placement of groups of genes
into different zones in the cell nucleus.19 Transcriptionally inactive heterochromatin and the heterochromatin-like telomeric
sequences (sequences at the end of chromosomes), may associate their
respective chromatin segments much of the time with the nuclear
periphery. Very long runs of gene-poor, AT-rich isochores*, would be the
tethers that permit the gene-rich, GC-rich isochores to distribute
themselves into the appropriate nuclear zones for transcription and RNA
processing. The importance of the sequences of satellite DNA is reflected when these sequences are mutated. A mutation
in a minisatellite just after the end of the Harvey ras gene (which encodes a growth regulatory protein) may contribute to
as many as 10% of all cases of breast, colorectal and bladder cancer, and acute leukemia. The mutant minisatellites bind a
transcriptional regulatory factor,34 which causes an abnormal increase in transcription of the Harvey ras gene (Figure 4).

Figure 4. Mutations in minisatellite DNA can result in cancer. Mutated satellite DNA near the Harvey ras gene (a major
regulator of cell growth) can bind a protein that increases ras activity.

Retroviruses* and retroelements*

These class I mobile elements reproduce themselves through an RNA
intermediate which, in a reversal of the usual DNA to RNA transcription,
is reverse transcribed to DNA by the reverse transcriptase* enzyme encoded on intact elements. One of the remarkable
findings of the human genome project is that a high percentage (35–40%) of human nuclear DNA consists of dispersed
retroelements (Table 1).35 Short and long interspersed elements, SINEs and LINEs, make up the majority of this class of
DNA, with Alu and LINE-1 (L1), respectively, being most abundant in humans.36 L1 elements encode their own reverse
transcriptase, which probably is also responsible for the spread of SINEs, which lack this enzyme. HIV-1, the AIDS virus,
human endogenous retroviruses* (HERVs), and solitary long terminal repeats (LTRs*) apparently derived from HERVs, are
also part of this class of retroelements (Figure 3, Table 2).

Most eukaryotic retrotransposons* move only sporadically in the
genome. An exception is the hybrid dysgenesis seen in Drosophila, where if flies containing a retrotransposon are mated to
flies not containing the particular retrotransposon, the element transposes with a high frequency, resulting in death or
mutation of many of the progeny. Host factors, many not well characterized as yet, seem to keep the transposition rate in
check (see below).
Functions of retroelements
Do these abundant elements have functions, or have hapless eukaryotic genomes been parasitized by selfish DNA? There
are more and more examples of these elements performing important functions. One example is the Alu family. This 300-bp
sequence (named for the enzyme used to identify it) occurs almost a million times in the human genome, up to 3.5% of the
total DNA (Table 2). It is estimated, and has been seen in many cloned genes, that there are 4 or 5 Alu elements in every
gene. Despite their number, they have been generally considered parasitic DNA, with occasional deleterious effects on the
genome when they exercise their ability to retrotranspose to sites in and near genes, or recombine with each other
abnormally. Such disruptions have caused neurofibromatosis, or elephant man's disease.16 Mutations in the Alu sequence
also have been associated with cancer. Alu sequences have been found to affect the functions of at least 8 different genes
(Table 2).37,38 Though Alu sequences do have internal promoters for RNA polymerase III (an enzyme which transcribes
genes encoding RNAs needed for translation of mRNA into protein), normally very little RNA is produced from all
these Alu sequences. However, under certain stressful conditions such as a viral infection, these transcripts increase
dramatically and affect protein synthesis levels to help the cell deal with the stress.39 Thus, though individual Alu elements
have a very weak effect, hundreds of thousands of them together can affect protein synthesis.

Epigenetic control mechanisms, or modifications of gene activity that are due to modifications of the DNA itself and not its
sequence (see below), are associated with repeats. A repeat-induced process involving L1 retroelements has been
hypothesised for X-chromosome inactivation, which is necessary to maintain proper gene dosage in females, who have two
X chromosomes (Table 2).40

Endogenous retroviruses (that is, those that are obtained from inheritance rather than infection) can also affect gene
expression. The LTRs of two such viruses provide the sequence signal for the polyadenylation* of the mRNA of two
newly discovered human genes.41 An L1 repeat was found to provide the polyadenylation signal for the mouse thymidylate
synthase gene. Retrotransposons were also seen to help in repairing chromosomal breaks in yeast. Retroelements
modulate expression of many more genes.42
DNA transposons*
DNA transposons, or class II transposable elements, move from place to place by replicative transposition (that increases
the copy number) or by a simple cut-and-paste mechanism. Though in general not as common or in as high a copy number
as retroelements, they are still found in most organisms. Examples are the Drosophila P elements, bacterial transposons
such as Tn10 and Tn7, the Mu phage, and the ubiquitous mariner/Tc1 superfamily of transposons. The mariner/Tc1 family is
the most widespread, being found in most insects, flatworms, nematodes, arthropods, ciliated protozoa, fungi and many
vertebrates, including zebra fish, trout and humans.43 Copy number varies from two copies in Drosophila sechellia, to 17,000
in the horn fly Haematobia irritans, accounting for 1% of the genome. The vast majority of them appear to have been
inactivated by multiple mutations. The close homology between mariner/Tc1 elements found in species thought to have
diverged 200 million years ago has fuelled the hypothesis that these elements can transfer horizontally (that is, not by
normal inheritance) between different species, or even different phyla (see below). Again, the evolutionist gets to pick and
choose from his smorgasbord of explanations when the data do not fit the evolutionary tree.
Miniature inverted-repeat transposable elements (MITEs)
A recently discovered third class of mobile elements is the miniature inverted-repeat transposable elements (MITEs).44–46
They are very small (125–500 bp), and have short terminal inverted repeats. They were first found in plants, but have also
been found in nematodes, humans, mosquitoes and zebrafish.47–50 They are found in the thousands and tens of thousands
per genome, and have been given colourful names (e.g. Tourist, Stowaway, Alien and Bigfoot) to reflect their apparent ability
to move about in the genome. Their mechanism of transposition is still unknown, but they appear to be DNA elements that
cannot move about on their own (non-autonomous). Though none seem to be presently active, they are believed to have
been mobile in the recent past because of the high levels of sequence similarity between elements in a particular family, and
the differences in insertion sites seen in closely related species. 51 MITEs are particularly interesting in terms of generating
genetic variation in that they are preferentially associated with genes
(see below).46,52
Effects of mobile and repetitive elements on gene expression
Mobile elements and repetitive elements can alter the structure and
regulate expression of the genome in several different ways. As
described earlier, transposition can disrupt genes by direct insertional
mutagenesis and can adversely affect transcription. Many
retrotransposons have strong constitutive (always on) promoters that can
cause inappropriate expression of downstream genes. If the promoter is
in the opposite direction of the gene, RNA complementary to the mRNA
of the gene can be made that can act as antisense RNA* that binds up
the mRNA, affecting translation.
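To make the antisense idea concrete, the sketch below (a toy RNA sequence) computes the reverse complement that transcription from the opposite strand would produce, which can then base-pair with the mRNA:

```python
# Why an oppositely-oriented promoter yields antisense RNA: transcribing the
# other DNA strand reads out the reverse complement of the mRNA, which can
# base-pair with it and block translation. Toy sequence, RNA alphabet.

COMPLEMENT = str.maketrans("AUGC", "UACG")

def antisense(mrna):
    """Reverse complement of an RNA string."""
    return mrna.translate(COMPLEMENT)[::-1]

mrna = "AUGGCUUACGGA"
print(antisense(mrna))  # UCCGUAAGCCAU, pairs base-for-base with the mRNA
```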
Recombination between similar DNA strands is a necessary process for repair of DNA breaks and allele* shuffling between
homologous chromosomes. But the presence of mobile and repetitive elements in inappropriate positions can result in
recombination products that are deleterious, such as translocations*, inversions*, and other chromosomal rearrangements
(Figure 5). For example, it was shown that a widespread chromosomal inversion commonly seen in Drosophila buzzatii is
caused by the recombination between two copies of a transposable element in opposite orientations.53 There can even be
an exchange of DNA between non-homologous chromosomes, such as was seen in maize, in this case mediated by the
recombination of one complete and one partial copy of the Ac (Activator) transposable element.54

Figure 5. Recombination between direct repeats causes the loss of the DNA between them.
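The deletion mechanism in Figure 5 can be mimicked with strings: when two direct copies of a repeat mispair and recombine, one copy and everything between the copies is lost. A toy sketch (invented sequence labels) follows:

```python
# Toy version of Figure 5: recombination between two direct repeats deletes
# the DNA between them, leaving a single repeat copy. Labels are invented.

REPEAT = "ALUALU"
chromosome = "GENE1" + REPEAT + "GENE2" + REPEAT + "GENE3"

left = chromosome.index(REPEAT)    # first repeat copy
right = chromosome.rindex(REPEAT)  # second repeat copy
recombined = chromosome[:left] + chromosome[right:]

print(recombined)  # GENE1ALUALUGENE3: GENE2 and one repeat copy are gone
```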
Target site selection in mobile DNA
Many of the retrotransposons and DNA transposons seem to have very little site-specificity in where they
integrate.55 Integration sites for most mammalian and Drosophila retroelements appear to be distributed more or less
randomly in the genome. Vertebrate retroviruses do have a general preference for insertion into regions with an open
chromatin configuration.56
However, there are some specific ones that do show target selectivity.51 R2 is a non-LTR retrotransposon that inserts
preferentially in the 28S ribosomal RNA genes of various insect species. Group II introns, present in some yeast
mitochondrial genes (genes carried in the energy-producing organelles in the cell), are mobile elements very similar to
poly(A)-type retrotransposons. After copying themselves, they can reinsert precisely back into their spots between two exons.
Their ability to move argues for their spread into various genes at some point in time. The yeast retrotransposons Ty1 and
Ty3 integrate preferentially upstream of genes transcribed by RNA polymerase III, which transcribes genes needed for
protein synthesis. Very recently, evidence has been found that certain P elements* containing regulatory sequences from
developmental genes showed a high frequency of reinserting at the parent gene (homing) and preferential insertion at
another site containing regulatory genes.57

The first example known of a host using the movement of a retrotransposon to its advantage was found in the telomere
maintenance of Drosophila. The telomeres, or chromosomal ends, of Drosophila are maintained differently from those of
any other known organism. Two retroposons, HeT-A and TART, are present in multiple copies on the telomeres, and will
retropose specifically to the end of the telomere and heal a frayed chromosome.58
Observed regulation of mobile DNA
Epigenetic mechanisms, or reversible but heritable changes in chromatin structure, are seen to play a role in regulating
genes. Methylation of cytosine residues, modification of the DNA-binding histones, and production of antisense RNAs, are
some of the mechanisms by which gene expression can be modified without permanent genetic change to the gene
regulated.59 Methylation of the cytosine residues of DNA is used by the cell to turn off genes not currently needed. Cytosine
methylation inactivates the promoters of most viruses and transposons (including retroviruses and Alu elements). In fact,
transposons are so abundant, rich in CpG dinucleotides and heavily methylated, that we now know that the large majority of
5'-methylcytosine in the genome actually lies within these elements. 60 This prevents the movement of the elements under
normal circumstances. Thus transposable elements that integrate into promoters of genes can alter gene expression
patterns by attracting methylation or chromatin modifications to regulate the modified promoter.53

Drosophila, in general, are very vulnerable to mutation by mobile element activity. From 50–85% of all spontaneous
mutations seen in the fruit fly are due to transposon insertions.53 But Drosophila does have one type of host control in the
recently identified gene named flamenco. Flamenco normally acts to keep the gypsy retrotransposon in check.
When flamenco is mutated, gypsy transposes at a high frequency in germ line (reproductive) cells.50
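Returning to the methylation mechanism above: a trivial way to see why transposons attract most of the genome's methylation is to count CpG (CG) steps, the cytosines the cell targets. The sketch below uses invented sequences:

```python
# Counting CpG dinucleotides, the cytosines the cell methylates to silence
# transposon and retrovirus promoters. Both sequences are invented.

def cpg_count(dna):
    """Number of CG dinucleotide steps (methylation targets) on one strand."""
    return dna.count("CG")

transposon_like = "GGCGCGGTCGCCCGGGCGAT"   # CpG-rich, heavily methylated
typical_stretch = "GATTACAGGATTTACAGTAA"   # CpG-poor
print(cpg_count(transposon_like), cpg_count(typical_stretch))  # 5 0
```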
Criteria for identifying junk DNA
There are several possible scenarios for the presence and function of the putative junk DNA sequences described above:

1. They all perform designed functions in present day organisms in their present form and location, though current
research has not revealed what those are as yet. This is unlikely, as it seems clear that in some individuals and species,
the placement or particular sequence of one of a family of non-coding DNAs can lead to deleterious effects such as cancer
and genetic disease. This would contradict the young age model of original perfect creation.

2. All non-coding sequences could have been created with functions, but some have lost their functions due to purposeful
limitations, and/or accumulation of mutations post-Fall. This would fit in with our observation of the rest of creation,
where, though the perfection of design can be seen, it has become obscured by consequences of the Fall, allowing death
and suffering to enter the world.

3. There is the possibility that some of the elements, such as the mobile elements in particular, have never had designed
functions. Rather, they are pieces of degenerate DNA affected by the Fall that randomly move about and mutate genomes,
causing only deleterious effects.

The ability of DNA sequences to rearrange and/or to move about in the genome, or even between genomes, was originally
a heretical idea for both evolutionist and creationist, but now is one that is strongly supported as being an integral part of
gene regulation. Many systems utilizing similar recombination and rearrangement mechanisms are necessary for important
cellular functions, such as the process of DNA repair, rearrangement of DNA segments to form the genes for the
thousands of different antibodies, the yeast mating type switching system, the flagellar switching system of Salmonella,
and the antigen switching system of the malaria parasite. Therefore, the second scenario seems the most likely.

A working list of criteria needs to be developed to attempt to identify DNA sequences that may actually fit the category
of junk DNA. The presence of some junk DNA would be expected due to the fallen state of genomes. True junk DNA may
have one or more of the following characteristics:
- The DNA element is present within another gene, insertionally inactivating it.
- The DNA element is not found at that location in other members within the same species.
- The effects of the presence of the element, if known, are deleterious, e.g. lead to cancer, genetic disease, etc.
- The element can be deleted without any observed ill effects on the organism or many generations of its descendants.
- The sequence of the element closely matches that of a mobile element, or contains a mobile element sequence.
For example, pseudogenes have many of these junk DNA characteristics, though their transformation into junk DNA may in
some cases have been intentionally arranged by the designer for the purpose of rapid diversification of created kinds.
The AGEing theory and diversification
There are, as described above, instances of functions for transposable DNAs, but until recently there has not been a
particular purpose ascribed to repetitive and mobile elements as a group. A new hypothesis formulated by genomicist and
creationist Wood addresses the past and present functions of mobile and repetitive DNA.61 Since these elements are
capable of rapid change of the genome, and can even be transmitted horizontally between species, he proposes that they
were designed to move about or recombine in the genomes of organisms to allow the rapid intrabaraminic diversification
seen in the 500 years or so after the Flood. He sees their role as being designed to act for a limited period of time, after
which they would be inactivated by mutation or repression by other regulatory elements. He proposes that such elements
should be renamed Altruistic Genetic Elements (AGEs) to emphasize that their purpose is different from that proposed for
selfish DNA.

The AGEs are hypothesised to work by activating dormant genes or inactivating active genes, or by horizontally
transferring genetic information between species, or possibly baramins, with AGEs in the form of mobile elements. The
phenotypic changes would be primarily cosmetic, such as variations in size or coloration, or would involve activation of a
complex of genes needed to utilize a new environmental niche, like the Arctic fox's adaptation to cold. There is a need for
creationists to explain how a holobaramin such as the cat family62 could diversify into the many species of cats that were
present even in Job's time, in just a few thousand years or possibly a few hundred years. Currently observed genetic
mechanisms and natural selection are far too slow to explain this rapid speciation. A limited time period of AGE activity
could explain how this rapid diversification could occur.

If, for example, the proposed AGEs were at work in the diversification of the equines, we have the testable prediction that
differences in size, morphology and coloration could be traced back at the genetic level to mobile or repetitive DNA
elements located near genes controlling coloration.
Pseudogenes and relic retroviral sequences could then be the result of the action of an AGE gone wrong after its designed
activity began to fail. The AGEing theory could also solve the founding pair problem: that is, when a rare macromutation
occurs in an individual such that it cannot successfully hybridise with its parental species, this mutation is lost unless it can
mate with another animal with the same mutation.

For this proposed AGEing process to work, at least three things must be observed in putative AGEs:

1. They must show site specificity in where they insert, or evidence that they had such specificity in the past.
2. Transmission of AGEs between organisms horizontally and into germline DNA is required.
3. We should see AGEs associated with genes affecting size, morphology, coloration, and specialised environmental
adaptation rather than housekeeping genes.

As for the
first requirement, though many mobile elements are not specific in their target sites, there are examples of those that are, as
discussed above. Since AGE movement is supposed to have occurred largely in the past, we might expect to see only a few
with the intact capability.

As for the second requirement, horizontal transmission*, the evidence for that occurring has become very strong,63 and in
the case of the P and gypsy elements in Drosophila, such transmission has actually been observed occurring between
species. Originally, no wild-caught D. melanogaster contained the P element, and laboratory stocks collected 60 years ago
reflected this. Then gradually, more and more wild-caught flies contained the element originally found in D. willistoni, until
now all wild flies, even in remote locations, contain this element.64 Recently, it was also shown that the copia
retrotransposon from D. melanogaster was transferred to D. willistoni (probably via a parasitic mite).65,66 There was also a
report that gypsy-free fruit flies permissive for transposition of the gypsy retroposon could incorporate gypsy into their
germline DNA when larvae were fed on extract of infected pupae.67 There is no obvious evidence pointing to a functional
change mediated by these horizontal transfers, but the principle is there.

As for the third
requirement, are there any examples known now of mobile or repetitive elements that can cause these types of phenotypic
changes? In bacteria, there are many examples of transfers of antibiotic resistance mediated by transposons, 68 and the
horizontal transfer of genes, though in general prokaryotes have comparatively little junk DNA. Some evolutionary
researchers now propose that mobile elements may be involved in speciation. Mobility of a retroelement was activated in a
cross between two wallaby species, though the hybridisation resulted in only sterile males. 69 In maize, the original studies of
Nobel Prize winner Barbara McClintock demonstrated that the activity of the transposons in different corn kernel cells could
be followed by their effects on corn kernel coloration. In plants, there is additional strong evidence that movement of mobile
elements in the past has altered gene expression. Although retrotransposon sequences, for example, are seldom found
near genes in animals, recent analyses of plant mobile element insertion sites have revealed the presence of degenerate
retrotransposon insertions adjacent to many normal plant genes that act as regulatory elements. 70 In addition to
retrotransposons, MITEs are also found adjacent to many plant genes, where they also often provide regulatory sequences
necessary for transcription.71 Plants, as well as animals, would have had to adjust to the drastically altered post-Flood world.
Other, more dramatic examples may exist, and further research will hopefully reveal them.
Why debunk junk DNA?
What is the relevance to creation science, and to people in general, of a better understanding of the function of these DNA
elements? Because of the publicity surrounding the Human Genome Project, there is increasing general interest in how our
genomes work, and what exactly they look like. There is more and more emphasis being placed on discovering our
evolutionary history through DNA, not fossils. The fact that functions are being found for junk DNAs fits in well with creation
science, but was not predicted by evolutionary theory, though of course the theory is being adjusted again to accommodate
the data. The intricate flexibility and specificity of these junk DNA sequences are a strong testimony to a designer who
plans and provides for the future of his creation.
Glossary
Allele: one of several alternate forms of a gene occupying a given locus on a chromosome.

Antisense RNA: RNA made by copying the other DNA strand in a coding segment in the opposite direction; this RNA will
bind to the mRNA made from the coding or sense strand.

Baramin: the creationist term for an original created kind; not synonymous with species. Organisms within the same
baramin may be of different species but can cross-hybridise, like the horse and the donkey.

Complementary: two strands of DNA or RNA are said to be complementary when they can form base pairs (A-T, G-C)
with each other, e.g. AATTCC and TTAAGG.

Chromatin: the complex of DNA and protein in the nucleus of the interphase cell.

Euchromatin: the less condensed chromatin in the nucleus that is more transcriptionally active than the heterochromatin.

Eukaryote: an organism with an organized nucleus.

Haploid: half the set of the chromosome pairs; contains one copy of each chromosome pair and one of the sex
chromosomes; characteristic of gametes (sperm and egg cells).

Heterochromatin: regions of the genome that are in a highly condensed state and are not usually transcribed. Constitutive
heterochromatin is always in this condensed, inactive state, contains no genes, and is usually found at the centromeres
and telomeres. Facultative heterochromatin is condensed only in certain cell types, or at certain developmental stages
when the genes contained in it need to be turned off.

Histones: a family of basic proteins found tightly associated with DNA in all eukaryotic nuclei; their binding forms a bead
structure called a nucleosome.

Horizontal transmission: when mobile elements or viruses are transferred between individuals by infection rather than by
inheritance (vertical transmission).

Human endogenous retroviruses (HERVs): retroviruses that have become part of the human genome in the past by
insertion into the germline cells.

Hybridisation: the pairing of single-stranded complementary RNA and/or DNA strands to give an RNA-DNA or DNA-DNA
hybrid.

Inversion: occurs when recombination between DNA segments causes the DNA between them to be flipped into the
opposite orientation at the same chromosomal locus.

Isochore: an approximately 300 kb segment of DNA whose bp composition is uniform above a 3 kb level, for example 67%
A-T bp. This is believed to enable a certain level of co-regulation of all the DNA in the isochore.

LTR: long terminal repeat; the longer, more complex repeated sequences at the ends of some mobile elements, which are
required for them to transpose.

ORF: open reading frame; a stretch of DNA or RNA that contains a series of triplet codons coding for amino acids, without
any protein termination codons, that is potentially translatable into protein.

P elements: DNA transposons found in fruit fly species that often have a high level of mobility.

Promoter: a region of DNA involved in binding of RNA polymerase to initiate transcription.

Poly-(A) tail: a sequence of adenine residues added to the 3' end of an mRNA after transcription in the process called
polyadenylation; believed to help stabilize mRNAs from being degraded.

Pseudogene: a gene that has been inactivated in the past by an insertion or deletion of DNA.

Prokaryote: an organism that lacks an organized nucleus, and has its DNA mostly in a single molecule; a bacterium.

Processed pseudogene: a gene that has been apparently reverse-transcribed from its mRNA back into DNA and reinserted
into a chromosome. It thus lacks its introns, has a poly-A tail, and often is bounded by the characteristic direct repeats
associated with transposition.

Retroelement: any sequence that transposes through an RNA intermediate.

Retrotransposons: mobile elements that encode reverse transcriptase and transpose through an RNA intermediate.
Classed into LTR-containing and poly(A)-containing: LTR-containing elements are similar to the proviral form of vertebrate
retroviruses and usually have 2 ORFs, gag and pol (protease, integrase, reverse transcriptase, RNase H), e.g. Gypsy
and tom. Poly(A)-containing retroelements, or retroposons, lack LTRs and have a 3' A-rich region; they also have 2 ORFs,
gag and pol. Some elements, such as L1 and the I Factor of Drosophila, contain a reverse transcriptase. L1 is found in
yeast and humans.

Retrovirus: a virus using RNA as its information storage system rather than DNA; it integrates into host DNA as part of its
lifecycle in a way very similar to retrotransposons, but also has additional genes that code for its packaging into virus
particles for infection of other hosts.

Reverse transcriptase: an enzyme found in retroelements that will make a complementary DNA strand from an RNA
template.

Splicing: two exons, or coding regions on a messenger RNA, are joined together when the intron (non-coding segment)
between them is removed.

Translation: the synthesis of protein on the messenger RNA template.

Translocation: of a chromosome, describes a rearrangement in which part of a chromosome is detached by breakage and
then becomes attached to some other chromosome.

Transcription: synthesis of RNA on the DNA template.

Transposase: the enzyme that cuts the target DNA and splices in the transposing sequence; called the integrase in
retroelements.

Transposon: any DNA sequence that can move about the genome, either by replicating itself, or by a cut-and-paste
mechanism. In its simplest form, it is a transposase gene (see above) surrounded by a sequence on either side repeated
directly or in inverse form, e.g. ATTGCGC and CGCGTTA are inverted repeats.

Triplet codon: three nucleotides in an RNA or DNA that signal the insertion of a particular amino acid or a termination
signal; e.g. AUG would be the code word for methionine.

UTR: untranslated region; the parts of a messenger RNA before the first exon (5' UTR) and after the last exon (3' UTR)
that are not translated into protein (non-coding).
Acknowledgements
The author wishes to thank Dr. Todd C. Wood for providing unpublished information on his AGEing theory and his rice
genome research. Thanks also to the editor for helpful discussions, information and patience with revisions.
The slow, painful death of junk DNA
by Robert W. Carter
So-called junk DNA has fallen on hard times. Once the poster child
of evolutionary theory, its status has been increasingly challenged
over the past several years. Functions for junk DNA have been cited at other places on this website1 and in the Journal
of Creation.2
In The Great Dothan Creation Evolution Debate,3 my opponent's main argument, to which he returned again and again,
rested on
junk DNA. I warned that this was an argument from silence, that
form follows function, and that this was akin to the old vestigial
organ argument (and thus is easily falsifiable once functions are
found). We did not have to wait long, however, because a new study
has brought the notion of junk DNA closer to the dustbin of
discarded evolutionary speculations. Faulkner et al. (2009)4 have put
junk DNA on the run by claiming that retrotransposons (supposedly
the remains of ancient viruses that inserted themselves into the
genomes of humans and other species) are highly functional after
all.
Background
Based on the work of J.B.S. Haldane (Haldane 1957) 5 and others,
who showed that natural selection cannot possibly select for millions
of new mutations over the course of human evolution, Kimura
(1968)6 developed the idea of Neutral Evolution. If Haldane's
Dilemma7 was correct, the majority of DNA must be non-functional.
It should be free to mutate over time without needing to be shaped
by natural selection. In this way, natural selection could act on the
important bits and neutral evolution could act randomly on the rest. Since natural selection will not act on neutral traits,
which do not affect survival or reproduction, neutral evolution can proceed through random drift without any inherent cost of
selection.8 The term junk DNA originated with Ohno (1972),9 who based his idea squarely on the idea of Neutral
Evolution. To Ohno and other scientists of his time, the vast
spaces between protein-coding genes were just useless DNA
whose only function was to separate genes along a
chromosome. Can you see how the idea of junk DNA came
about? It is a necessary mathematical extrapolation. It was
invented to solve a theoretical evolutionary dilemma. Without it,
evolution runs into insurmountable mathematical difficulties.

To recap for emphasis: Junk DNA is not just a label that was tacked on to some DNA that seemed to have no function; it is
something that is required by evolution. Mathematically, there is too much variation, too much DNA to mutate, and too few
generations in which to get it all done. This was the essence of Haldane's work. Without junk DNA, evolutionary theory
cannot currently explain how everything works mathematically. Think about it; in the evolutionary model there have only
been 3–6 million years since humans and chimps diverged. With average human generation times of 20–30 years, this
gives them only 100,000 to 300,000 generations to fix the millions of mutations that separate humans and chimps. This
includes at least 35 million single letter differences,10 over 90 million base pairs of non-shared DNA,10 nearly 700 extra
genes in humans (about 6% not shared with chimpanzees),11 and tens of thousands of chromosomal rearrangements. Also,
the chimp genome is about 13% larger12 than that of humans, mostly due to the heterochromatin that caps the
chromosome telomeres. All this has to happen in a very short amount of evolutionary time. They don't have enough time,
even after discounting the functionality of over 95% of the genome, but their position becomes grave if junk DNA turns out
to be functional. Every new function found for junk DNA makes the evolutionists' case that much more difficult.
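The generation arithmetic above is simple division; the sketch below makes it explicit, using only the round figures quoted in this paragraph (illustrative, not a population-genetics model):

```python
# Sanity-check of the generation arithmetic above, using only the round
# figures quoted in the text (illustrative, not a population-genetics model).

years_since_split = (3_000_000, 6_000_000)   # assumed evolutionary timescale
generation_time = (30, 20)                   # years per generation (long, short)
single_letter_diffs = 35_000_000             # human-chimp differences cited above

min_gens = years_since_split[0] // generation_time[0]   # 100,000
max_gens = years_since_split[1] // generation_time[1]   # 300,000
print(f"{min_gens:,} to {max_gens:,} generations available")

# Average substitutions that would have to reach fixation per generation:
print(f"{single_letter_diffs // max_gens:,} to {single_letter_diffs // min_gens:,} "
      "fixed differences needed per generation")
```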
One of the important classes of junk DNA is retrotransposons, which were thought to be leftovers from ancient virus
infections where bits of DNA from the viruses had been randomly inserted into the DNA of humans (for example). The idea
that huge stretches of human DNA are useless junk left over from evolution is itself having to be progressively junked.

Enter Faulkner et al. (2009). Working in human and mouse, they discovered that between 6 and 30% of
RNAs13 start within retrotransposons. Their distribution is clearly not random. This was a shock in itself, but they added that
these RNAs are generally tissue-specific, as if there were different classes of retrotransposons involved in regulating gene
expression in different tissues. From the start, their conclusions do not seem to support the idea that retrotransposons are
evolutionary junk, but it gets better from there. It turns out that retrotransposons coincide with gene-dense regions and occur
in pronounced clusters within the genome, emphasizing the non-random distribution pattern. When they occur upstream of
protein coding genes, they provide an abundance of alternative start sites for transcription, producing abundant alternative
mRNAs and non-coding RNAs. On the downstream end, over one quarter of RefSeq (protein-coding) genes14 have a
retrotransposon in their 3' UTRs,15 and these reduce the amount of protein synthesized. They concluded that these 3' UTRs
are the site of intense transcriptional regulation. This is hardly something one would expect from junk DNA! Based on the
distribution of retrotransposons, they identified a whopping 23,000 candidate regulatory regions within the genome. In
addition, they found 2,000 examples of bidirectional transcription caused by the presence of retrotransposons (where the
DNA is read in both directions, not just one direction, which is thought to be the norm).

At one point Faulkner et al. try to
downplay their results. They point out that only some retrotransposons contain active promoters and that only some of these
are functional. They do not advocate a universal function for retrotransposons. However, as Faulkner et al. also point out,
retrotransposons are highly abundant, with thousands of retrotransposon promoters immediately adjacent to protein coding
genes, influencing their regulation and, they assume, their evolution. They concluded that retrotransposons have a key
influence on transcription genome-wide, that they are multifaceted regulators of the functional output of the mammalian
transcriptome, that they are a pervasive source of transcription and transcriptional regulation, and that they must be
considered in future studies of the genome as a transcription machine.

These results are stunning. With genome regulation becoming more and more complicated, and with more and more of
the genome being demonstrated to be functional, one wonders how long evolutionists can hold on to the idea of junk DNA.
However, hold on to it they must, for without it they lose one of their best arguments. But they just lost one of their
favorite pieces of evidence: the presence of ancient deactivated viruses in the genome. Rather than being functionless
vestigial remnants of our past, retrotransposons turn out to be functionally integrated into the amazingly complex
regulatory apparatus of mammalian genomes!

I'd like to point out that young-earth creationists do not require the entire genome to be highly functional. While I suspect
that direct and indirect controls of transcription will eventually be found for most of it, there may be very large stretches of
the genome that just add temporal structure to the functional parts. Think of them as scaffolding in a three-dimensional
genomic skyscraper. Even these portions will be functional (because of a need for structure), though they may not
contribute directly to genome regulation, and their sequence specificity might be very weak. We'll have to wait to see how
it all works out in the end. For now, let us take heart that one more weak link in the evolutionary line of arguments has
been exposed.
No joy for junkies
by Don Batten
Before any sequencing of DNA had been done, evolutionists decided that fully 99% of
the human DNA must be inert or junk. They came to this conclusion because, according
to the calculations of population geneticists, if much more than 1% of the DNA sequence
of creatures such as humans actually mattered, then error catastrophe would have
resulted, because natural selection could not have eliminated the large number of harmful mutations.1

When the DNA sequencing turned up only about 35,000 protein-coding genes in humans, the evolutionists seemed
vindicated, except that we already knew that DNA codes for more than just proteins. For example, the transfer-RNAs and
ribosomal RNA are coded on the DNA. And various segments of DNA-coded RNA were being implicated as co-factors in
various chemical reactions and in gene activation or suppression. But what else does all that DNA do?

Bit by bit, the idea of junk DNA has been unravelling. There have been reviews and notes in Journal of Creation2–5
covering some of the exciting developments.

Recently, a large chunk of the remaining junk has
been implicated in the control of embryo development. Scientists at the Jackson
Laboratory, Maine, USA, found that a type of transposable element (TE), a major class of supposed junk or parasitic DNA,
activates during embryo development in mice.6 In a commentary on this work, Ricky James commented: 'Therefore, more
than one third of the mouse and human genomes, previously thought to be non-functional, may play some role in the
regulation of gene expression.'7

Note that this non-coding DNA only seems to function during egg and embryo development, so studying TEs in other cells
would not reveal their function. This might explain why the functions of non-coding DNA have been so elusive.

These developments underline, once again, how evolutionary premises impede the progress of science. In
the past, evolutionary notions led to over 100 human features being labeled vestigial, or leftovers of our supposed animal
ancestry.8 This was based on the similarity of these features to ones found in animals, combined with the lack of knowledge
about what the organs did. The lack of logic is astonishing: since we don't know what the organs do, they must be useless.
The same evolutionary logic has been applied to the DNA: we don't know what most of it does, so it must do nothing. So it
is labelled junk, pseudogenes, parasitic, retroviral inserts, etc.

Thankfully, not everyone bought this idea. In the late 1980s, New Zealand-born Australian immunologist Malcolm Simons
recognized patterns, or order, in the non-coding DNA that indicated to him that the code must have a function, but others
ridiculed the idea.9 In the mid-1990s, he patented the non-coding DNA (95%) of all organisms on Earth. The company he
founded, Genetic Technologies, now reaps licence fees from all technologies being developed to cure disease that involve
the non-coding DNA. It's quite controversial, of course,
paying such licence fees. And since factors involved in all sorts of diseases, such as breast cancer, Crohn's disease,
Alzheimer's, heart disease, ovarian and skin cancer, are being found in the junk, Genetic Technologies is doing quite
well.10 There's much gold to be mined from the junk, it would seem.

Leading geneticist Prof. John Mattick of the University of Queensland in Brisbane, Australia, has proposed, with ample
justification, that the non-coding DNA is part of a sophisticated operating system.11,12 Some critics rejected this on the
grounds that such a system could not have evolved! Mattick recently said that the failure to recognise the implications of
the non-coding DNA will go down as the biggest mistake in the history of molecular biology.9 This mistake can be
attributed to an evolutionary approach to biology.

Creationists have long argued that junk DNA is nothing of the sort. For example, Carl Wieland, Creation Ministries
International (Australia), wrote, 'Creationists have long suspected that this junk DNA will turn out to have a function.'13
Although there might be a small amount of non-functional DNA due to damaging mutations that have occurred, it is
inconceivable that most of the human DNA would be created as having no function.
Large scale function for endogenous retroviruses
by Shaun Doyle
Endogenous retroviruses (ERVs) are some of the most cited evidences for
evolution. They are part of the suite of junk DNA that supposedly comprised the
vast majority of our DNA. ERVs are said to be parasitic retroviral DNA
sequences that infected our genome long ago and have stayed there ever
since. These short DNA strands are found throughout the human genome, and
make up about 5% of the DNA,1 or about 10% of the total amount of DNA that is
classified as transposable elements (i.e. 50% of the genome).2 However, the term endogenous
retrovirus is a bit of a misnomer. There are numerous instances where small
transposable elements thought to be endogenous retroviruses have been found
to have functions, which invalidates the random retrovirus insertion claim. For
instance, studies of embryo development in mice suggest that transposable
elements (of which ERVs are a subset) control embryo development.
Transposable elements seem to be involved in controlling the sequence and
level of gene expression during development, by moving to/from the sites of
gene control.3 Moreover, researchers have recently identified an important
function for a large proportion of the human genome that has been labelled as
ERVs. They act as promoters, starting transcription at alternative starting points,
which enables different RNA transcripts to be formed from the same DNA sequence:
'We report the existence of 51,197 ERV-derived promoter sequences that initiate transcription within the human genome,
including 1,743 cases where transcription is initiated from ERV sequences that are located in gene proximal promoter or
5' untranslated regions (UTRs).'4
And,
'Our analysis revealed that retroviral sequences in the human genome encode tens-of-thousands of active promoters;
transcribed ERV sequences correspond to 1.16% of the human genome sequence and PET tags that capture transcripts
initiated from ERVs cover 22.4% of the genome.'5

So we're not just talking about a small-scale phenomenon. These ERVs aid transcription in over one fifth of the human
genome! These data illustrate the potential of retroviral sequences to regulate human transcription on a large scale,
consistent with a substantial effect of ERVs on the function and evolution of the human genome.3 This again debunks the
idea that 98% of the human genome is junk, and it makes the inserted evolutionary spin look like a tacked-on nod to the
evolutionary establishment. These results support the conclusions of the ENCODE project, which found that at least 93% of
DNA was transcribed into RNA.

Evolutionists have used shared mistakes in junk DNA as proof that humans and chimps have a common ancestor.
However, if the similar sequences are functional, which they are progressively proving to be, their argument evaporates. It
seems that evolutionist Dr John Mattick, director of the Institute for Molecular Bioscience at the University of Queensland,
Brisbane, Australia, was spot on in his assessment of the gravity of the junk DNA error:The failure to recognize the full
implications of thisparticularly the possibility that the intervening noncoding sequences may be transmitting parallel
information may well go down as one of the biggest mistakes in the history of molecular biology. 6Both creationists7 and
ID proponents8 predicted that transposable elements, such as endogenous retroviruses, would have a function. In 2000,
creationist molecular biologist Linda Walkup proposed that transposable elements could be created to facilitate variation
(adaptation) within the created kinds.7If the junk DNA is not junk, then it puts a big spanner in the work of molecular
taxonomists, who assumed that junk DNA was free to mutate at random, unconstrained by the requirements of functionality.
As Williams points out:The molecular taxonomists, who have been drawing up evolutionary histories (phylogenies) for
nearly every kind of life, are going to have to undo all their years of junk DNA-based historical reconstructions and wait for
the full implications to emerge before they try again.9

Hox (homeobox) Genes: Evolution's Saviour?


by Don Batten
Some evolutionists hailed homeobox or 'hox' genes as the saviour of evolution soon after they were discovered. They seemed to fit into the Gouldian mode of evolution (punctuated equilibrium) because a small mutation in a hox gene could have profound effects on an organism. However, further research has not borne out the evolutionists' hopes. Dr Christian Schwabe, the non-creationist sceptic of Darwinian evolution from the Medical University of South Carolina (Dept. of Biochemistry and Molecular Biology), wrote:

'Control genes like homeotic genes may be the target of mutations that would conceivably change phenotypes, but one must remember that, the more central one makes changes in a complex system, the more severe the peripheral consequences become. Homeotic changes induced in Drosophila genes have led only to monstrosities, and most experimenters do not expect to see a bee arise from their Drosophila constructs.' (Mini Review: Schwabe, C., 1994. Theoretical limitations of molecular phylogenetics and the evolution of relaxins. Comp. Biochem. Physiol. 107B:167–177.)

Research in the six years since Schwabe wrote this has only borne out his statement. Changes to homeotic genes cause monstrosities (two heads, a leg where an eye should be, etc.); they do not change an amphibian into a reptile, for example. And the mutations do not add any information; they just cause existing information to be mis-directed to produce a fruit-fly leg on the fruit-fly head instead of on the correct body segment, for example.

Evolutionists, of course, use the ubiquity of hox genes in their argument for common ancestry ('Look, all these creatures share these genes, so all creatures must have had a common ancestor'). However, commonality of such features is to be expected with their origin from the same (supremely) intelligent designer. All such homology arguments are only arguments for evolution when one excludes, a priori, origins by design. Indeed, many of the patterns we see do not fit common ancestry: for example, the discontinuity of distribution of hemoglobin-like proteins, which are found in a few bacteria, molluscs, insects, and vertebrates. One could also note features such as vivipary, thermoregulation (some fish and mammals), eye designs, etc. For more detail, see The Biotic Message by Walter ReMine (see also the review by Dr Don Batten).
Hox Hype
Has Macro-evolution Been Proven?
By David A. DeWitt, Ph.D
Associate Professor of Biology, and Associate Director, Creation Studies at Liberty University
From the hype of the press release, it would seem that evolution was finally proven once and for all and the creationists should just give up and go home. But far from refuting creation, the scientific evidence is completely consistent with creation!

The press release from UCSD said in part: 'Biologists at the University of California, San Diego have uncovered the first genetic evidence that explains how large-scale alterations to body plans were accomplished during the early evolution of animals. The achievement is a landmark in evolutionary biology, not only because it shows how new animal body plans could arise from a simple genetic mutation, but because it effectively answers a major criticism creationists had long leveled against evolution: the absence of a genetic mechanism that could permit animals to introduce radical new body designs.'

Evolutionary biologists believe that the six-legged insect body plan evolved from crustacean-like ancestors (including creatures like shrimp) that lost the large number of legs.1 Such a radical change would require mutation(s) that result in the suppression of leg development. McGinnis and coworkers believed that they found the mutation and the gene responsible for this change. However, careful examination of their efforts reveals that the situation is much more complicated.

The scientists were investigating Ubx, a Hox gene which suppresses leg development in flies. Hox genes are master control switches that control the body plan. Specific Hox genes may control where the head forms, where limbs form, or a tail or even wings. These master switches work like circuit breakers and either turn on or turn off an array of other genes. Hox genes can be expressed in abnormal locations and either prevent development of structures or promote their development in very unusual places. For example, Pax-6 expression controls the development of eyes. A fly with abnormal expression could form an eye on a leg, the antenna or even the abdomen.2

The researchers found that the Ubx gene from a fly completely prevented leg development, while the same gene from Artemia, a brine shrimp, only suppressed leg development 15%. They then mutated the Artemia Ubx gene and found that this version was much more effective at blocking leg formation. They postulated that such a mutation probably occurred in the crustaceans that were the ancestors of six-legged insects.3

The fact that scientists can significantly alter the body plan does not prove macro-evolution, nor does it refute creation. Successful macro-evolution requires the addition of NEW information and NEW genes that produce NEW proteins that are found in NEW organs and systems. For example, a single mutation that might prevent legs from forming is much different from a mutation that produces legs in the first place. Making a leg would require a large number of different genes present simultaneously. Moreover, where do the wings come from? Just because an organism loses a few legs doesn't convert a shrimp-like creature into a fly. Since crustaceans don't have wings, where does the information come from to make wings in flies?

Having the wings themselves is not even enough. Researchers in another study have found that the subcellular location of metabolic enzymes is important for the functional muscle contraction required for flight.4 Indeed, the metabolic enzymes must be in very close proximity with the cytoskeletal proteins that are involved in muscle contraction. If the enzymes are not in the exact location in which they are needed within the cell, the flies cannot fly. This study bears out the fact that the presence of active enzymes in the cell is not sufficient for muscle function; colocalization of the enzymes is required. It also requires a highly organized cellular system.

Therefore, changes in body plan, no matter how dramatic, do not automatically prove macro-evolution. Losing structures, or misplacing their development, should not be equated with the increased information that is needed to form novel structures and cellular systems.

HOW DOES GENETICS POINT TO DESIGN


Cell systems: what's really under the hood continues to drop jaws
By Brian Thomas
Two 2009 papers summarized recent discoveries of utterly unforeseen intricacy, adaptability, robustness and precision in
regulating gene expression, even in simple cells.
Gene expression in eukaryotic cells
I conservatively counted 24 recently discovered mechanisms that help regulate gene expression in eukaryotic cells, as
reviewed by Moore and Proudfoot.1 Here are just a few of them.
Figure 1. Widely regarded as the simplest genome, Mycoplasma gene expression is instead far more complicated than expected. It performs functions that had been considered the sole domain of higher eukaryotes. For example, DNA is transcribed in both the sense and antisense directions, indicating that valuable genetic information is double-stacked. RNA transcripts undergo post-transcriptional modifications, single enzymes have more than one application, and when certain metabolic breakdowns occur, the cell is able to formulate a workaround solution. Illustration after sciencemag.org.

Chromatin is not loosely wadded DNA inside cellular nuclei. Instead, it is very precisely organized, with specific portions dynamically looped outward. Each loop is associated with a separate nuclear pore, and can retract to a storage position when appropriate. Robust and efficient machinery ensures that the correct portions of chromatin are unspooled from nearer the center of the nucleus to an appropriate nuclear pore. Each pore is extremely active, with a host of interacting regulatory RNAs, proteins, and ribonucleoproteins.2 These send and receive communications from and toward the farthest ends of the RNA and protein manufacturing processes.

RNA Polymerase does not typically transcribe DNA in fluid space, but is attached to a cadre of proteins associated with each nuclear pore. This way, the rapidly emerging RNA transcript is already proximal to the pore, through which much of it will exit to the cytoplasm. Further, cell biologists have determined that the first copy of a transcript is like a practice run. This first, 'rough draft' RNA transcript either serves as a quality control run, so that its integrity is ensured prior to full manufacture and export from the nucleus, as a primer for the total set of transcript processing machinery to be properly set, as a chemical communicator providing information to downstream processes, or all three.
Warming up for transcription
In addition, extracellular messages are transferred from the cell membrane to the nuclear pore sites via biochemical
cascades, and these influence whether or not a gene region will switch from being transcribed into these rough abortive
transcripts, or into full-length, properly marked and exported transcripts. It appears that transcription machinery is constantly
transcribing in an idle mode, but when the correct switches are tripped, the machinery fully engages. In full production
mode, RNA transcripts often become marked for translation to proteins. Some of the switching messengers are proteins that
are temporarily restrained by other proteins, which in turn can release them upon detection of certain cell signals carried by
yet more precisely interacting biochemicals. For example, even sugar moieties riding on proteins have been found to act as
a safety switch that regulates the microswitches which fine tune protein expression during cell division.3
Full-on eukaryotic transcription runs super-fast
When all systems are go, transcription proceeds with fully processive elongation of the full body of the gene. 1 Inside the
nucleus, the relevant DNA is pulled, like a loop of magnetic tape, across a nuclear pore. Some of the proteins involved in
this action are named Set1PAF, Spt6, FACT, Chd1, along with other histone proteins. This way, the emerging transcript is
under the constant watchful attention of a wide array of sensory, quality control, marking, and transporting machinery, all
kept near the pore by precise chemical interactions specified by exactly arranged biomolecular sizes, shapes, charges, and
polarities.

It was known that transcripts in eukaryotic cells undergo cut-and-pasting as well as splicing. It is now known that
this occurs simultaneously with manufacture, and requires a separate host of proteins. However, those pre-mRNA splicing
proteins directly interact with the RNA polymerase assemblage, which all works together to react to pause-sites in the gene
it is transcribing. RNA polymerase acts like a molecular juggernaut,1 streaming RNAs out as though through a jet engine. It
must be slowed down in order for cutting and splicing machinery to have opportunity to insert. Since not all DNA pause sites
become RNA cut sites, and since the alternative combinations of cut and spliced mRNA transcripts can specify a wide
variety of regulatory or catalytic RNAs and proteins from just one gene, 4 it is apparent that somehow precise
communication occurs to discern which pause sites will result in cuts. In yeast, a model eukaryote, the THO/TREX protein
complex serves three roles: one in transcription, one in transcript-dependent recombination, and one in mRNA
export.1 And it does these while in constant communication with machine parts that are involved in transcript initiation as
well as parts involved in slowing and stopping transcription. It is therefore one of many proteins and protein complexes that
are being discovered with multiple functions, a clear sign of elegant engineering.
Process flow management in translation
The emerging RNA transcript then gets labeled with specific protein markers. The markers had already been gathered to the nuclear pore site, and are presented to the nascent transcript just inside the nucleus. The immediacy of labeling is thus vital: it guards against the dangers of having 'naked' RNAs in the nucleus, as described below. The markers, too, serve multiple purposes. The more splices in the transcript, the more markers are attached, and this eventually causes more efficient translation, because a transcript thus bedecked is more likely to have some surface exposed to cytoplasmic proteins vital to translation. The markers also signal watchdog nuclear pore proteins to expedite the transcript's export.

These same watchdog proteins also serve to prevent naked transcripts from re-entering the nucleus. This is vital, for bits of RNA naturally
anneal to unzipped DNA. If this happened, it would quickly create havoc in the nucleus by both generating mutations and
gumming up the many nuclear processes that depend on accurate DNA recognition, clamping, spooling, unwinding, and so on.

After export, the cytoplasmic machinery links each transcript to other machines. Some of these shepherd
the transcript toward a ribosome. Each time a transcript has been thus shepherded, some of its markers are removed, with
most being lost after its first round of translation. Eventually the transcript becomes naked and difficult for translational
machinery to detect, and subject to degradation. In this way, the freshest and highest quality transcripts are by far most
translated by the ribosome.
Eukaryotic gene expression is astonishing
Effective quality control mechanisms constantly cull corrupt transcripts. For example, if a transcript did not have the correct
signal sequence attached when it was first formed, due to gene mutation or an error in processing, the compromised
molecule would have been recognized immediately at the nuclear pore, and degraded by RNase enzymes. This ensures
that downstream processes are not gummed up with useless transcripts. Quality control is critical to forming the correct
products in the needed amounts, and at appropriate paces.

Other systems produce a stockpile of quality transcripts in strategic pockets within the cytoplasm. This way, there can be a tightly controlled 'burst' of the desired [protein] product.1

There is no indication that the discovery pace of more mind-bogglingly brilliant cell processes will slow down anytime soon. If none of the above made sense, then let the reader be edified by the glowing research summary: 'At every point along the way, multifunctional proteins and [ribonucleoprotein] complexes facilitate communication between upstream and downstream steps, providing both feedforward and feedback information essential for proper coordination of what can only be described as an intricate and astonishing web of regulation.'1
The simple Mycoplasma
Mycoplasma pneumoniae bacteria, long considered the simplest prokaryote, can no longer be described thus. It is a parasitic bacterium (M. pneumoniae causes 'walking pneumonia') that has a reduced genome size. It relies on its host for certain nutrients that its ancestors apparently were able to manufacture. Thus, it has undergone significant genomic decay.
How Mycoplasma bacteria really work
The authors of a paper in Science endeavored to investigate how a cell actually accomplishes necessary processes using the most basic subject of study.5 But they ran into a juggernaut of layered information-rich complexity that inspired their assessment: 'Together, these findings suggest the presence of a highly structured, multifaceted regulatory machinery, which is unexpected because bacteria with small genomes contain relatively few transcription factors … revealing that there is no such thing as a "simple" bacterium.'5

Specifically, evolutionists Ochman and Raghavan cited research that found that in many cases where the sense strand of protein-coding genes is transcribed, the complementary or anti-sense strand is also transcribed. The resulting sense mRNA is eventually translated to protein, and the resulting antisense mRNA binds to the sense mRNA to make a double-stranded RNA. This slows its path toward translation, and is thus an important speed regulator. This was previously only known to occur in eukaryotes.
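In outline, this speed regulator behaves like a subtraction: antisense copies pair up with sense copies, and only the unpaired sense mRNA is immediately available for translation. The following few lines of Python are only a crude caricature with invented counts (real regulation involves binding kinetics and RNA turnover), but they capture the throttling idea:

    # Crude caricature of antisense speed regulation: each antisense transcript
    # can pair with one sense transcript, sequestering it from translation.
    def free_sense(sense_count, antisense_count):
        return max(0, sense_count - antisense_count)

    print(free_sense(100, 0))   # 100 transcripts free: translation at full speed
    print(free_sense(100, 80))  # only 20 free: translation throttled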
Mycoplasma cells have eukaryotic complexity
In other experiments, different environmental growth conditions caused different lengths and segments of genomic DNA to become transcribed. This implies a suite of chemical communication cascades from the cell wall inward, as well as the ability to make alternate products from one gene. This, too, was a surprise, only known in eukaryotes.

Like eukaryotic cells, these simplest among prokaryotes have multifunctional proteins which can be used in different metabolic pathways as backup machines. Other data strongly suggest that newly manufactured proteins can be altered by other cellular machinery. Termed 'post-translational modification', this was taught dogmatically in my 1998 graduate biochemistry courses as exclusive to eukaryotes.

Also shocking was the discovery that over 90% of Mycoplasma proteins are involved in protein complexes, again like eukaryotes. Another genome-wide survey found indirect evidence of tight gene expression regulation, but nobody yet knows the mechanism for it. They finally argue that because Mycoplasma is still alive even after such reduction in quality and quantity of its genome, it must have an underlying eukaryote-like cellular organization 'replete with intricate regulatory networks and innovative pathways'.5
Where did Mycoplasma get all this in the first place?
These authors then bravely ask, 'How did these remarkable layers of gene regulation and the highly promiscuous [multifunctional] behavior of proteins in M. pneumoniae arise?'4 But they instead explain that 'the reduced efficacy of selection that operates on the genomes of host-dependent bacteria … reductions in long-term effective population size [from the bottleneck that occurred when the bacteria first became host-dependent, and] the accumulation and fixation of deleterious mutations in seemingly beneficial genes due to genetic drift, [together cause a] reducing genome size.'5

If selection, bottlenecks, and mutations only reduced the genome, then these processes are no help at all. What in nature expanded the genome with the ingeniously useful data that the remarkably robust, yet genomically truncated, Mycoplasma still retains in abundance?
Conclusion
At every level, scientists have uncovered more information. That information takes the form of three-dimensional shapes, electronic and charge configurations, as well as raw coding sequence information. Communication pathways, routines and subroutines, prioritizing, quality control, and process regulation plans are all stunningly effective and strikingly small.

More in-depth knowledge of these fantastically complicated cell features demands greater faith from naturalists in the belief that laws of chemistry built cells. The more informational structures that are found, the greater the gap between the organization in living system parts and the disorganization found in nonliving chemicals.

A reminder of some inferences about information would seem appropriate here. First, wherever precise regulation of processes by expertly engineered machines and codes is seen coming into existence, it always comes from persons. Stated negatively, these machines and codes are never observed to originate from natural laws. Therefore, it is most parsimonious to infer that wherever similar machines, processes, and codes are found, they, too, were not derived by nature, but instead by a person or persons. Second, like spoken languages, biological 'language' is irreducibly complex and yet without physical substance. It comes complete with symbols, meanings for those symbols, and a grammatical structure for their interpretation. Remove any one of these three fundamental features, and the informational system is lost. Physics has nothing to do with symbols or grammar, and therefore nothing to do with the origin of life, which cannot exist without its coded information.6

If run-of-the-mill information always comes from a mind, then this cellular information, being extraordinary, came from a mastermind.

Meta-information
An impossible conundrum for evolution
by Alex Williams
Published: 30 August 2007(GMT+10)
[Image caption: Cell division, once thought of as a fairly simple thing, is now known to be an incredibly complex, orchestrated affair that shouts intelligent design.]
New genetic information?
Evolutionists have never been able to give a satisfactory answer to the problem of where the new information comes from that evolution requires for turning a microbe into a myxomycete or a maze-mastering mammal. Their best guess is gene duplication (which gives them an extra length of DNA, but it contains no new information) followed by random mutations that are supposed to turn the duplicated information into something new and useful. They have no direct experimental evidence for this claim (and there is much against it1), so they have to rely on indirect evidence such as the so-called gene families. Some genes are similar in both structure and function to other genes, and evolutionists point to these and say they originated by chance copying and mutation from some common ancestral gene. But this is just evolutionary speculation; it is not experimental evidence.

The globin gene family is a favourite example. Hemoglobin carries oxygen in our blood and can be made up of different combinations of different kinds of globin proteins. For example, hemoglobin in human fetal blood contains a different combination of globins to that in post-natal blood. Evolutionists claim this resulted from an original globin molecule that duplicated in an early blood-using animal and mutated to form a family of different kinds of globins, which then allowed the diversification of complexity in oxygen-using processes that we see in the animal world today.2

But this example is far better explained by intelligent design.3 The human baby in his or her mother's womb has to compete for the oxygen in its mother's blood supply with the demand for oxygen from two other sources: the placenta that feeds it, and the mother's womb that surrounds it. So the fetal hemoglobin has to have, amongst other things, a higher affinity for oxygen than the mother's hemoglobin. In contrast, when the baby is born and can draw oxygen from the air in its own lungs, it no longer has any competition, so it requires a different kind of oxygen-uptake system. A wise designer would ensure that the hemoglobin could change its form and function to cater for these very different conditions, and integrate this change into the other vast complexities of the almost miraculous reproductive process. The idea that such complex interactive changes could all occur by chance is rather hard to accept.
Information about Information
But the problem of information origin in biology is far bigger than most people realize. Information by itself is useless unless the cell knows how to use it. Evolution not only requires new information, it also requires extra new information about how to use that new information. Information about information is called meta-information. We can see how it works in making a cake. If you want to make a cake, you need a recipe that contains: (a) a list of ingredients, and (b) instructions on how to mix and cook the ingredients to produce the desired outcome. The list of ingredients is the primary information, and the instructions on what to do with the ingredients are the meta-information.
(protein-coding genes) but meta-informationthe information that cells need to have in order to turn those protein-coding
genes into a functional human being and maintain and reproduce that functional being. This meta-information is stored and
used in a variety of ways:DNA consists of a double-helixtwo long-chain molecules twisted around one another. Each
strand consists of a chain of four different kinds of nucleotide molecules (the shorthand symbols are T, A, G and C). About
3% of this in humans consists of protein-coding genes and the other 97% appears to be regulatory meta-information.DNA is
an information-storage molecule, like a closed book. This stored information is put to use by being copied onto RNA
molecules, and the RNA molecules put the DNA information into action in the cell. For every molecule of protein-producing
RNA (primary information), there are about 50 molecules of regulatory RNA (meta-information).Down the sides of the DNA
double-helix, several different kinds of chemical chains are attached in patterns that code meta-information for turning
unspecialized embryonic stem cells into the specialized cells that are needed in fingers, feet, toenails and tail-bones
etc.DNA is a very long thin molecule. If we unwound one set of human chromosomes, the DNA would be about 2 metres
long. To pack it up into the very tiny nucleus inside the very tiny human cell, it is coiled up in four different levels of chromatin
structure into 46 chromosomes. This coiling chromatin structure also contains yet further levels of meta-information . The
first level (the histone code) codes information about the cells history (i.e. it is a cell memory). 5,6 The three further levels of
coiling code further information, some of which is described below, and there is no doubt more that we have yet to
unravel.The amount of meta-information in the human genome is thus truly enormous compared with the amount of primary
gene-coding information.
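As an aside, the 'about 2 metres' figure just quoted is easy to check, assuming the textbook values of roughly 0.34 nm of helix length per base pair and roughly 6.4 billion base pairs across the 46 chromosomes of a typical human cell:

    # Rough check of the ~2 m figure (values are standard textbook estimates).
    base_pairs = 6.4e9        # ~3.2 Gbp per haploid set x 2 sets (46 chromosomes)
    rise_per_bp_m = 0.34e-9   # ~0.34 nm of B-DNA helix length per base pair

    total_length_m = base_pairs * rise_per_bp_m
    print(f"Total DNA length per cell: {total_length_m:.2f} m")  # ~2.18 m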
Self-replicating molecules?
In his monumental work, The Ancestor's Tale,7 Richard Dawkins traced the supposed ancestry of humanity back through all the evolutionary ages to the very first supposed common ancestor of all life. He supposed this original ancestor to have been an RNA-type of life form, although he admitted ignorance of the precise details.8 His choice of an original RNA life form is well-founded, because RNA is the only known molecule that can do all of the three basic functions of life: (a) store coded information, (b) combine with itself and other RNAs to create molecular machines, and (c) self-replicate (but only in a very limited manner under very special circumstances).

However, recent studies showing how living cells actually replicate have made this 'RNA world' concept ludicrously unrealistic. A central problem in cell division (that is, what living cells actually do, as opposed to Dawkins' imagined self-replication) is that a large proportion of the whole genome is required for the normal operation of the cell: probably at least 50% in unspecialized body cells, and up to 70–80% in complex liver and brain cells. When it comes time for a cell to divide, not only does the DNA have to continue to sustain normal cell operations, it also has to sustain the extra activity associated with cell division.

This creates a huge logistic problem: how to avoid clashes between the transcription machinery (which needs to continually copy information for ongoing use in the cell) and the replication machinery (which needs to unzip the whole of the DNA double-helix and replicate a zipped copy back onto each of the separated strands). The cell's solution to this logistics nightmare is truly astonishing.9 Replication does not begin at any one point, but at thousands of different points. But of these thousands of potential start points, only a subset are used in any one cell cycle; different subsets are used at different times and places. Can you see how this might solve the logistics problem?

A full understanding is yet to emerge because the system is so complex; however, some progress has been made:

The large set of potential replication start sites is not essential, but optional. In early embryogenesis, for example, before any transcription begins, the whole genome replicates numerous times without any reference to the special set of potential start sites.

The pattern of replication in the late embryo and adult is tissue-specific. This suggests that cells in a particular tissue cooperate by coordinating replication so that while part of the DNA in one cell is being replicated, the corresponding part in a neighbouring cell is being transcribed. Transcripts can thus be shared, so that normal functions can be maintained throughout the tissue while different parts of the DNA are being replicated.

DNA that is transcribed early in the cell division cycle is also replicated in the early stage (but the transcription and replication machines are carefully kept apart). The early-transcribed DNA is that which is needed most often in cell function. The correlation between transcription and replication in this early phase allows the cell to minimize the downtime in transcription of the most urgent supplies while replication takes place.

There is a pecking order of control. Preparation for replication may take place at thousands of different locations, but once replication does begin at a particular site, it suppresses replication at nearby sites so that only one copy of the DNA is made (see the sketch after this list). If transcription happens to occur nearby, replication is suppressed until transcription is completed. This clearly demonstrates that keeping the cell alive and functioning properly takes precedence over cell division.

There is a built-in error correction system called the cell-cycle checkpoints. If replication proceeds without any problems, correction is not needed. However, if too many replication events occur at once, the potential for conflict between transcription and replication increases, and/or it may indicate that some replicators have stalled because of errors. Once the threshold number is exceeded, the checkpoint system is activated, the whole process is slowed down, and errors are corrected. If too much damage occurs, the daughter cells will be mutant, or the cell's self-destruct mechanism (the apoptosome) will be activated to dismantle the cell and recycle its components.

An obvious benefit of the pattern of replication initiation being never the same from one cell division to the next is that it minimizes the effect of any errors that are not corrected.
The impossible conundrum
Now comes the impossible conundrum. Keeping in mind the cake analogy, let's recall that the vast majority of information in humans is not ingredient-level information (code for proteins) but meta-information: instructions for using the ingredients to make, maintain and reproduce functional human beings.

Evolutionists say that all this information arose by random mutations, but this is not possible. Random events are, by definition, independent of one another. But meta-information is, by definition, totally dependent upon the information to which it relates. It would be quite non-sensical to take the cooking instructions for making a cake and apply them to the assembly of, say, a child's plastic toy (if nothing else, the baking stage would reduce the toy to a mangled mess). Cake-cooking instructions only have meaning when applied to cake-making ingredients. So too, the logistics solution to the cell division problem is only relevant to the problem of cell division. If we applied the logistics solution to the problem of mate attraction via pheromones (scent) in moths, it would not work. All the vast amount of meta-information in human beings only has meaning when applied to the gene content of the human genome.

Even if we granted that the first biological information came into existence by a random process, the meta-information needed to use that information could not come into existence by the same random (independent) process, because meta-information is inextricably dependent upon the information that it relates to. There is thus no possible random (mutation) solution to this conundrum.

Can natural selection save the day? No. There are at least 100 (and probably many more) bits of meta-information in the human genome for every bit of primary (protein-coding gene) information. An organism that has to manufacture, maintain, and drag around with it a mountain of useless information while waiting for a chance correlation of relevance to occur so that something useful can happen, is an organism that natural selection is going to select against, not favour! Moreover, an organism that can survive long enough to accumulate a mountain of useless information is an organism that does not need useless information: it must already have all the information it needs to survive!

What kind of organism already has all the information it needs to survive? There is only one answer: an organism that was designed in the beginning with all that it needs to survive.
Splicing and dicing the human genome
Scientists begin to unravel the splicing code
by Robert W. Carter
Published: 1 July 2010(GMT+10)
What separates the genomes of simple organisms like sea anemones and jellyfish from
humans? Humans have approximately the same number of protein coding genes as these
lowly creatures,1 yet we are much more complex organisms. Ignoring the spiritual aspects
of humanity, this complexity difference must be coded within our genomes, but where?
Since we share many genes with many simpler organisms, the answer does not lie in gene content alone. Rather, the differences are in the non-coding portions of the genome (the so-called 'junk' DNA2) and in the way the genes are used to create proteins.

Several decades ago, the 'one gene-one enzyme' hypothesis was in vogue. It seemed straightforward that a single protein gene coded for a single protein. In prokaryotic organisms (bacteria), this was easy to
show. The known bacterial genes had a defined starting and stopping place and the DNA letters in between spelled out a
discrete amino acid sequence. The eukaryotes (organisms with a nucleus; everything from yeast, to plants, to humans) do
not have a simple gene structure. Our protein genes are broken up into a series of exons (the parts that code for protein)
and introns (non-coding intervening sequences). To make a protein, the gene is first transcribed into RNA, then the introns
are spliced out, the exons are stitched together, and the remainder is translated into protein. Even though complex, the one
gene-one enzyme hypothesis was still applied to eukaryotic protein genes.
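The transcribe-splice-translate pipeline just described can be sketched in a few lines of Python. The sequences and exon coordinates below are invented for illustration (real splice-site recognition is far more involved), but they show the basic bookkeeping: transcription copies the whole gene, and splicing keeps only the exons:

    # Toy eukaryotic gene: exons are kept, introns (which begin GT and end AG,
    # the canonical splice signals) are removed. All sequences are invented.
    EXON1, EXON2, EXON3 = "ATGGCT", "GGATTC", "TGTTGA"
    INTRON = "GTAAGTAG"                      # starts GT ... ends AG
    gene_dna = EXON1 + INTRON + EXON2 + INTRON + EXON3
    exon_coords = [(0, 6), (14, 20), (28, 34)]

    pre_mrna = gene_dna.replace("T", "U")    # transcription: T -> U
    mature_mrna = "".join(pre_mrna[s:e] for s, e in exon_coords)  # splicing

    print(pre_mrna)      # exons and introns together
    print(mature_mrna)   # AUGGCUGGAUUCUGUUGA: exons stitched, ready to translate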
Over time, however, it was realized that life was not so simple, especially for the eukaryotes. The 'one gene-one enzyme' hypothesis was particularly troubling for the higher
(more complex) eukaryotes. For example, the approximately 20,000-25,000 protein-coding genes in the human
genome3 are used to create 100,000-300,000 distinct proteins (the actual number is uncertain). The low number of genes in
the human genome was troubling for several reasons. 4 First, this means that we did not have that many more genes than
organisms much simpler than us. Second, we needed a way to create many proteins from few genes and nobody knew how
this could be done on such a large scale. And third, the complexity of the genomic computer program ratcheted up to even
more uncomfortable levels for those who thought we arose through random chance.

Even before the Human Genome Project5 was complete, we knew that some proteins are manufactured through a process called alternate splicing, where
exons from different locations in the genome are combined to create many different proteins. From the ENCODE
project,6 we learned that alternate splicing is so pervasive that the definition of the word 'gene' is currently under debate.7 Thus, the one gene-one enzyme hypothesis turned out to be a gross oversimplification. However, the word and the concept of a 'gene' are so useful that for the rest of this article I will be referring to genes in the classic sense, as a contiguous stretch of DNA with a starting and ending location and a set of introns and exons that could potentially be transcribed, spliced, and translated into a single protein. Each gene, however, is made of parts that can be recombined with parts from other genes in different locations in the genome to create proteins not coded by any specific gene.
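A back-of-envelope calculation shows why recombining exon 'modules' multiplies the protein repertoire so effectively. The numbers here are invented, chosen only to illustrate the combinatorial growth; real splicing preserves exon order, which a simple subset count respects automatically:

    from math import comb

    # Each exon can be included or skipped (order is preserved), so a gene with
    # n exons could in principle yield up to 2**n - 1 non-empty exon combinations.
    for n in (5, 10, 20):
        isoforms = sum(comb(n, k) for k in range(1, n + 1))  # = 2**n - 1
        print(n, "exons ->", isoforms, "possible exon combinations")

Only a fraction of these combinations are actually used, of course, which is exactly why a splicing code is needed to specify the right ones.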
Alternate splicing is a brilliant design concept that allows for a streamlined genetic program that takes up a fraction of the space
compared to a program that coded for each protein independently. But this added complexity comes at a price. It has been
conservatively estimated that each intron adds the same amount of complexity as approximately 30 additional DNA
letters.8 Thus, the mutation target for a gene is increased for each intron added. Consider that the average protein-coding
gene has 7-10 introns and that the total length of introns is often longer than the total length of protein coding DNA, and one
can see why this is a problem. It takes a lot to maintain such a system and the complexity makes it difficult for naturalistic
theories of origins. In fact, a sizeable proportion of human genetic disease has been attributed to mutations within intron-exon splice sites.9 Introns are typically included in the 'junk DNA' category, but they have specific sequences at the head and tail ends that tell the splicing mechanism where to cut, etc., so they are not without function. (Exons also have splice signals at their ends. Thus, some of the information for splicing out the introns is found within the protein-coding portion of the genome. The protein-coding sections code for both protein sequence and splicing patterns at the same time!)

The ENCODE
project made the significant discovery that nearly all of the genome was turned into RNA at some point in the life of a cell
and that multiple overlapping RNAs were often created from the same stretch of DNA. This was a tremendous blow to junk
DNA theorists.10 However, perhaps more importantly, the ENCODE results also documented an amazing amount of alternate
splicing. So, here we were, knowing that a huge portion of the genome is active and that the protein-coding portions were
being used in complex combinations, but we still did not know how it all came together. Because of this, scientists have
been looking for a splicing code within the genome that controls the slicing and dicing of the protein genes. This splicing
code must account for 1) the complex combinations of exons needed to create hundreds of thousands of proteins from tens
of thousands of protein genes, 2) the variation in splicing from cell to cell needed to account for the different proteins
expressed in different cell types, and 3) changes in splicing patterns over time as the organism proceeds from fertilized egg
to adult (since not all genes are active at all stages in the life cycle). All this information must be coded in the genome, but it
also cannot interfere with the protein-coding domains. Thus, most of this information must reside within the introns and in
the spaces between genes.

A paper recently appeared in Nature where the authors claimed to have discovered the
beginning of the splicing code. What they found is a marvel of complexity. Science labs across the world have been
generating tremendous amounts of data and they were able to capitalize on this new knowledge in a massive data mining
exercise. Specifically, vast databases have been compiled that tell us which genes are active in different cell lines and at
different stages of development. We also know of many DNA-binding factors and their specific sequence targets (usually a
short string of very precise letters that are targeted by proteins with whimsical names like Star, Nova, and Quaking-like).
With this knowledge, they were able to approach the issue statistically to document significant features that help to control
alternate splicing. They found many motifs (short DNA words of 5-10 letters each) before and after many exons that were
strongly associated with different cell types. In all, they could explain 60% of the alternate splicing patterns found in the
human genome just by the presence or absence of these motifs. Many of the motifs were known previously and are sites for
known DNA-binding proteins. Many other motifs were new to science.
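In spirit, the data-mining step works like the sketch below: scan a fixed window on either side of each exon for short motif 'words' and record which are present. The motif strings, window size and sequence are invented placeholders, not the actual motifs from the Nature paper:

    # Sketch of motif scanning near an exon. Motifs and window are placeholders.
    MOTIFS = ["UGCAUG", "UCAUC", "ACUAAC"]   # invented short motif 'words'
    WINDOW = 300                             # letters scanned on each side

    def motif_features(transcript, exon_start, exon_end):
        upstream = transcript[max(0, exon_start - WINDOW):exon_start]
        downstream = transcript[exon_end:exon_end + WINDOW]
        return {m: (m in upstream, m in downstream) for m in MOTIFS}

    rna = "AUCAUCGGGUGCAUGCC" + "AUGGCUUAA" + "CCACUAACGG"   # invented sequence
    print(motif_features(rna, 17, 26))
    # {'UGCAUG': (True, False), 'UCAUC': (True, False), 'ACUAAC': (False, True)}

Feature tables like this, built for many exons across many cell types, are what the statistical model is then trained on.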
The median number of tissue-specific motifs associated with splicing, per exon, ranged from 12 (for the central nervous system) to 19 (for embryo).11 There were additional tissue-independent features associated with most or all exons, and additional abundant short motifs that were not considered in the above counts. This means the splicing code is complex, and that complex combinations of instructions are needed to control how the many exons combine to produce the multitude of proteins found in the human body.

They also discovered features related to splicing much farther away from the protein-coding regions than they expected. Because of
technical limitations, most studies on transcription regulation have historically focused on a few dozen letters immediately
upstream or downstream of a target sequence. Here, they document features much further into non-coding regions than
previously known (up to 300 letters away). Thus, even more junk DNA has been subsumed into the functional DNA
category! But this is only the beginning. They have only scratched the surface and have already discovered amazing
complexity. They only managed a prediction accuracy of 60%. Therefore, much remains to be discovered. Where is the
missing information? Perhaps it will be found deeper into the non-coding DNA. Perhaps, because they did not consider the
3-D architecture of the DNA within the nucleus, additional features may be discovered much farther away or even on
different chromosomes! The possibilities are endless and we will certainly update you as more is learned.There is one final
implication of this work I would like to discuss. There are many pseudogenes in the genome that look like functional genes
but have mutations that prevent them from being turned into proteins. The presence of pseudogenes has been an enigma
since their discovery, but the idea has generally been used to attack creationists and other advocates of design. I believe the
arguments are spurious12 and we have written much about them in prior articles. 13 Even though functions have been found
for many pseudogenes, it is true that, if transcribed and spliced, a pseudogene cannot be translated into a protein. However,
now that we are aware of alternate splicing, future work may show that many of the pseudogene exons are incorporated into
functional proteins. If so, the entire pseudogene argument will collapse like a house of cards. But only time will tell.

For now, let us marvel at the superbly engineered human genome. The genetic computer program is, to date, unsurpassed by
any human technology. The wisdom and foresight that went into it is nothing short of stunning. He engineered a string of
DNA as long as a person is tall that could withstand thousands of errors (mutations), adapt to changing environments
(through self-modifying code that turns different genes on and off, depending on conditions), and that can be packed into a
microscopic cell without forming knots! Now we learn that his program is a wonder of data compression and efficiency. It is
more sophisticated than anything we have ever contemplated.
Genetics: no friend of evolution
A highly qualified biologist tells it like it is.
by Lane Lester
Genetics and evolution have been enemies from the beginning of both concepts. Gregor Mendel, the father of genetics, and
Charles Darwin, the father of modern evolution, were contemporaries. At the same time that Darwin was claiming that
creatures could change into other creatures, Mendel was showing that even individual characteristics remain constant.
While Darwin's ideas were based on erroneous and untested ideas about inheritance, Mendel's conclusions were based on careful experimentation. Only by ignoring the total implications of modern genetics has it been possible to maintain the fiction of evolution.

To help us develop a new biology based on creation rather than evolution, let us sample some of the
evidence from genetics, arranged under the four sources of variation: environment, recombination, mutation, and creation.
Environment
This refers to all of the external factors which influence a creature during its lifetime. For example, one person may have
darker skin than another simply because she is exposed to more sunshine. Or another may have larger muscles because
he exercises more. Such environmentally-caused variations generally have no importance to the history of life, because they
cease to exist when their owners die; they are not passed on. In the middle 1800s, some scientists believed that variations
caused by the environment could be inherited. Charles Darwin accepted this fallacy, and it no doubt made it easier for him to
believe that one creature could change into another. He thus explained the origin of the giraffe's long neck in part through
the inherited effects of the increased use of parts. 1 In seasons of limited food supply, Darwin reasoned, giraffes would
stretch their necks for the high leaves, supposedly resulting in longer necks being passed on to their offspring.
Recombination
This involves shuffling the genes, and is the reason that children resemble their parents very closely but are not exactly like either one. The discovery of the principles of recombination was Gregor Mendel's great contribution to the science of genetics. Mendel showed that while traits might be hidden for a generation, they were not usually lost, and when new traits appeared it was because their genetic factors had been there all along. Recombination makes it possible for there to be limited variation within the created kinds. But it is limited, because virtually all of the variations are produced by a reshuffling of the genes that are already there.

For example, from 1800, plant breeders sought to increase the sugar content of the sugar beet. And they were very successful. Over some 75 years of selective breeding it was possible to increase the sugar content from 6% to 17%. But there the improvement stopped, and further selection did not increase the sugar content. Why? Because all of the genes for sugar production had been gathered into a single variety and no further increase was possible.

Among the creatures Darwin observed on the Galápagos islands were a group of land birds, the finches. In this single group, we can see wide variation in appearance and in life-style. Darwin provided what I believe to be an essentially correct interpretation of how the finches came to be the way they are. A few individuals were probably blown to the islands from the South American mainland, and today's finches are descendants of those pioneers. However, while Darwin saw the finches as an example of evolution, we can now recognize them merely as the result of recombination within a single created kind. The pioneer finches brought with them enough genetic variability to be sorted out into the varieties we see today.2
Mutation
Now to consider the third source of variation, mutation. Mutations are mistakes in the genetic copying process. Each living cell has intricate molecular machinery designed for accurately copying DNA, the genetic molecule. But as in other copying processes, mistakes do occur, although not very often. Once in every 10,000–100,000 copies, a gene will contain a mistake. The cell has machinery for correcting these mistakes, but some mutations still slip through. What kinds of changes are produced by mutations? Some have no effect at all, or produce so small a change that they have no appreciable effect on the creature. But many mutations have a significant effect on their owners.

Based on the creation model, what kind of effect would we expect from random mutations, from genetic mistakes? We would expect virtually all of those which make a difference to be harmful, to make the creatures that possess them less successful than before. And this prediction is borne out most convincingly. Some examples help to illustrate this. Geneticists began breeding the fruit fly, Drosophila melanogaster, soon after the turn of the century, and since 1910, when the first mutation was reported, some 3,000 mutations have been identified.3 All of the mutations are harmful or harmless; none of them produce a more successful fruit fly, exactly as predicted by the creation model.

Is there, then, no such thing as a beneficial mutation? Yes, there is. A beneficial mutation is simply one that makes it possible for its possessors to contribute more offspring to future generations than do those creatures that lack the mutation. Darwin called attention to wingless beetles on the island of Madeira. For a beetle living on a windy island, wings can be a definite disadvantage, because creatures in flight are more likely to be blown into the sea. Mutations producing the loss of flight could be helpful. The sightless cave fish would be similar. Eyes are quite vulnerable to injury, and a creature that lives in pitch dark would benefit from mutations that would replace the eye with scar-like tissue, reducing that vulnerability. In the world of light, having no eyes would be a terrible handicap, but it is no disadvantage in a dark cave. While these mutations produce a drastic and beneficial change, it is important to notice that they always involve loss of information and never gain. One never observes the reverse occurring, namely wings or eyes being produced on creatures which never had the information to produce them.

[Photo caption (photo by Ken Ham): In a fallen world, predators like this tiger, by culling the more defective animals, may serve to slow genetic deterioration by screening out the effects of mutation.]

[Photo caption: The naked rooster mutation: no feathers are produced. Such mutational defects may rarely be beneficial (e.g. if a breeder were to select this type to prevent having to pluck pre-roasting?) but never add anything new. There is no mutation which shows how feathers or anything similar arose.]
Natural selection is the obvious fact that some varieties of creatures are going to be more successful than
others, and so they will contribute more offspring to future generations. A favourite example of natural selection is the peppered moth of England, Biston betularia. As far as anyone knows, this moth has always existed in two basic
varieties, speckled and solid black. In pre-industrial England, many of the tree trunks were light in colour. This provided a
camouflage for the speckled variety, and the birds tended to prey more heavily on the black variety. Moth collections showed
many more speckled than black ones. When the Industrial Age came to England, pollution darkened the tree trunks, so the
black variety was hidden, and the speckled variety was conspicuous. Soon there were many more black moths than
speckled [Ed. note: see 'Goodbye, peppered moths' for more information].

As populations encounter changing environments,
such as that described above or as the result of migration into a new area, natural selection favours the combinations of
traits which will make the creature more successful in its new environment. This might be considered as the positive role of
natural selection. The negative role of natural selection is seen in eliminating or minimizing harmful mutations when they
occur.
Creation

The first three sources of variation are woefully inadequate to account for the diversity of life we see on earth today. An
essential feature of the creation model is the placement of considerable genetic variety in each created kind at the
beginning. Only thus can we explain the possible origin of horses, donkeys, and zebras from the same kind; of lions, tigers,
and leopards from the same kind; of some 118 varieties of the domestic dog, as well as jackals, wolves and coyotes from
the same kind. As each kind obeyed the designer's command to 'be fruitful and multiply', the chance processes of
recombination and the more purposeful process of natural selection caused each kind to subdivide into the vast array we
now see.
Astonishing DNA complexity uncovered
by Alex Williams
Because of evolutionary notions of our origin, our DNA was supposed to be
mostly junk, leftovers of our animal ancestry. This has proven to be yet
another evolutionary impediment to scientific progress. Photo sxc.hu
Published: 20 June 2007(GMT+10)
When the Human Genome Project published the human genome sequence in 2003, the researchers already knew certain things in advance. These included:
Coding segments (genes that code for proteins) were a minor component of the total amount of DNA in each cell. It was embarrassing to find that we have only about as many genes as mice (about 25,000), and that these constitute only about 3% of the entire genome.
The non-coding sections (i.e. the remaining 97%) were nearly all of unknown function. Many called it 'junk' DNA; they thought it was the miscopied and mutation-riddled left-overs abandoned by our ancestors over millions of years. Molecular taxonomists routinely use this 'junk' DNA as a molecular clock: a silent record of mutations that have been undisturbed by natural selection for millions of years because it does not do anything. They have constructed elaborate evolutionary histories for all different kinds of life from it.
Genes were known to be functional segments of DNA (exons) interspersed with non-functional segments (introns) of unknown purpose. When a gene is copied (transcribed into RNA), the introns are spliced out and the exons joined up to produce the functional message, which is then translated into protein.
Copying (transcription) of the gene began at a specially marked START position, and ended at a special STOP sign.
Gene switches (the molecules involved are collectively called transcription factors) were located on the chromosome adjacent to the START end of the gene.
Transcription proceeded one way, from the START end to the STOP end.
Genes were scattered throughout the chromosomes, somewhat like beads on a string, although some areas were gene-rich and others gene-poor.
DNA is a double-helix molecule, somewhat like a coiled zipper. Each strand of the DNA zipper is the complement of the other: as on a clothing zipper, one side has a lump that fits into a cavity on the other strand. Only one side of the DNA zipper (called the sense strand) carries the correct protein sequence. The complementary strand is called the anti-sense strand. The sense strand is like an electrical extension cord where the female end is safe to leave open until an appliance is attached, but the protruding male end is active and for safety's sake only works when plugged into a female socket. Thus, protein production usually only comes from copying the sense strand, not the anti-sense strand. The anti-sense strand provides a template for copying the sense strand, in the way that a photographic negative is used to produce a positive print. Some exceptions to this rule were known (i.e. in some cases anti-sense strands were used to make protein), but no one expected the whole anti-sense strand to be transcribed.
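(An aside of my own, not part of the original article: because the two strands pair A with T and G with C and run in opposite directions, the anti-sense strand is simply the reverse complement of the sense strand. A minimal Python sketch, using a made-up sequence:)

# Minimal sketch of sense/anti-sense strand pairing (illustrative only).
# The two DNA strands are complementary (A-T, G-C) and antiparallel,
# so the anti-sense strand is the reverse complement of the sense strand.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(sense: str) -> str:
    """Return the anti-sense strand (5'->3') for a sense strand (5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(sense))

sense = "ATGGCCATTGTAATGGGCCGC"   # hypothetical sense-strand fragment
print(sense)                      # ATGGCCATTGTAATGGGCCGC
print(reverse_complement(sense))  # GCGGCCCATTACAATGGCCAT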
This whole structure of understanding has now been turned on its head. A project called ENCODE recently reported an intensive study of the transcripts (copies of RNA produced from the DNA) of just 1% of the human genome.1,2 Their findings include the following inferences:
About 93% of the genome is transcribed (not 3%, as expected). Further study with more wide-ranging methods may raise this figure to 100%. Because much energy and coordination is required for transcription, this means that probably the whole genome is used by the cell and there is no such thing as junk DNA.
Exons are not gene-specific but are modules that can be joined to many different RNA transcripts. One exon (i.e. one part of one gene) can be used in combination with up to 33 different genes located on as many as 14 different chromosomes. This means that one exon can specify one part shared in common by many different proteins.
There is no beads-on-a-string linear arrangement of genes, but rather an interleaved structure of overlapping segments, with typically 5, 7, 9 or more transcripts coming from the one gene.
Not just one strand, but both strands (sense and anti-sense) of the DNA are fully transcribed.
Transcription proceeds not just one way but both backwards and forwards.
Transcription factors can be tens or hundreds of thousands of base-pairs away from the gene that they control, even on
different chromosomes.
There is not just one START site, but many, in each particular gene region.
There is not just one transcription triggering (switching) system for each region, but many.
The authors conclude:
'An interleaved genomic organization poses important mechanistic challenges for the cell. One involves the [use of] the same DNA molecules for multiple functions. The overlap of functionally important sequence motifs must be resolved in time and space for this organization to work properly. Another challenge is the need to compartmentalize RNA or mask RNAs that could potentially form long double-stranded regions, to prevent RNA-RNA interactions that could prompt apoptosis [programmed cell death].'
This concern for the safety of so many RNA molecules being produced in such a small space is well-founded. RNA is a long single-strand molecule not unlike a long piece of sticky-tape: it will stick to any nearby surface, including itself! Unless properly coordinated, it will all scrunch up into a sticky mess.
These results are so astonishing, so shocking, that it is going to take an awful lot more work to untangle what is really going on in cells. And the molecular taxonomists, who have been drawing up evolutionary histories (phylogenies) for everything, are going to have to undo all their years of junk-DNA-based historical reconstructions and wait for the full implications to emerge before they try again. One of the supposedly knock-down arguments that humans have a common ancestor with chimpanzees is shared non-functional DNA coding. That argument just got thrown out the window.

Astonishing DNA complexity update


by Alex Williams

Published: 3 July 2007 (GMT+10)


Recently we reported astonishing new discoveries about the complexity of the information content stored in the DNA molecule.1 Notably, the 97% of the human DNA that does not code for protein is not leftover 'junk' DNA from our evolutionary past, as previously thought, but is virtually all being actively used right now in our cells.
Here are a few more exciting details from the ENCODE (Encyclopedia of DNA Elements) pilot project report.2 As a help in understanding this: DNA is a very stable molecule, ideal for storing information. In contrast, RNA is a very active (and unstable) molecule and does lots of work in our cells. To use the information stored on our DNA, our cells copy the information onto RNA transcripts that then do the work as instructed by that information.
Traditional beads-on-a-string type genes do form the basis of the protein-producing code, even though much greater complexity has now been uncovered. Genes found in the ENCODE project differ only about 2% from the existing catalogue of known protein-coding genes.
We reported previously that the transcripts overlap the gene regions, but the overlaps are huge compared to the size of the genes. On average, the transcripts are 10 to 50 times the size of the gene region, overlapping on both sides. And as many as 20% of transcripts range up to more than 100 times the size of the gene region. This would be like photocopying a page in a book and having to get information from 10, 50 or even 100 other pages in order to use the information on that page.
The untranslated regions (now called UTRs, rather than 'junk') are far more important than the translated regions (the genes), as measured by the number of DNA bases appearing in RNA transcripts. Genic regions are transcribed on average in five different overlapping and interleaved ways, while UTRs are transcribed on average in seven different overlapping and interleaved ways. Since there are about 33 times as many bases in UTRs as in genic regions, that makes the 'junk' about 50 times more active than the genes (33 × 7/5 ≈ 46, i.e. roughly 50 times as many transcribed bases).
Transcription activity can best be predicted by just one factor: the way that the DNA is packaged into chromosomes. The DNA is coiled around protein globules called histones, then coiled again into a rope-like structure, then super-coiled in two stages around scaffold proteins to produce the thick chromosomes that we see under the microscope. This suggests that DNA information normally exists in a form similar to a closed book: all the coiling prevents the coded information from coming into contact with the translation machinery. When the cell wants some information it opens a particular page, photocopies the information, then closes the book again. Recent other work3 shows that this is physically accomplished as follows:
The chromosomes in each cell are stored in the membrane-bound nucleus. The nuclear membrane has about 2,000 pores in it, through which molecules can be passed in and out. The required chromosome is brought near to one of these nuclear pores.
The section of DNA to be transcribed is placed in front of the pore.
The supercoil is unwound to expose the transcription region.
The histone coils are twisted so as to expose the required copying site.
The double helix of the DNA is unzipped to expose the coded information.
The DNA is grasped into a loop by the enzymes that do the copying, and this loop is copied onto an RNA transcript. The transcript is then checked for accuracy (and is degraded and recycled if it is faulty). The RNA transcript is then specially tagged for export, and is exported through the pore and carried to wherever it is needed in the cell.
The book of DNA information is then closed by a reversal of the coiling process and movement of the chromosome away from the nuclear pore region.
The most surprising result, according to the ENCODE authors, is that 95% of the functional transcripts (genic and UTR transcripts with at least one known function) show no sign of selection pressure (i.e. they are not noticeably conserved and are mutating at the average rate). This contradicts Charles Darwin's theory that natural selection is the major cause of our evolution. It also creates an interesting paradox: cell architecture, machinery and metabolic cycles are all highly conserved (e.g. the human insulin gene has been put into bacteria to produce human insulin on an industrial scale), while most of the chromosomal information is freely mutating. How could this state of affairs be maintained for the supposed 3.8 billion years since bacteria first evolved? A better answer might be that life is only thousands, not billions, of years old. It also looks like cells, not genes, are in control of life: the direct opposite of what neo-Darwinists have long assumed.
Evidence for the design of life, part 1: Genetic redundancy
by Peter Borger
Knockout strategies have demonstrated that the function of many genes cannot be studied by disrupting them in model organisms, because the inactivation of these genes does not lead to a phenotypic effect. For living systems, this peculiar phenomenon of genetic redundancy seems to be the rule rather than the exception. Genetic redundancy is now defined as the situation in which the disruption of a gene is selectively neutral. Biology shows us that 1) two or more genes in an organism can often substitute for each other, and 2) some genes are just there in a silent state. Inactivation of such redundant genes does not jeopardize the individual's reproductive success and has no effect on survival of the species. Genetic redundancy is the big surprise of modern biology. Because there is no association between redundant genes and genetic duplications, and because redundant genes do not mutate faster than essential genes, redundancy brings down more than one pillar of contemporary evolutionary thinking.

Figure 1. To create a mouse knockout for a particular gene, a selectable marker is integrated into the gene of interest in an embryonic stem cell. The marker disrupts (knocks out) the gene of interest. The manipulated embryonic stem cell is then injected into a mouse oocyte and transplanted back into the uterus of a pseudo-pregnant mouse. Offspring carrying the interrupted gene can be sorted out by screening for the presence of the selection marker. It is now fairly easy to obtain animals in which both copies are interrupted through selective breeding: Mendel's law of independent segregation assures that crossbreeding littermates will produce individuals that lack both functional copies of the gene (a small simulation at the end of this section illustrates the proportions).
The discovery of the primary rules governing
biology in the second half of the 20th century paved the way for a more fundamental understanding of the complexity of life.
One of the spin-offs of this knowledge has been the development of sophisticated techniques to elucidate the function of
proteins. When molecular biologists want to know the function of a particular human protein they genetically modify a
laboratory mouse so that it lacks the corresponding gene (for the laboratory procedure see figure 1). Mice that have both
alleles of a gene interrupted cannot produce the corresponding protein: they are called knockouts. Theoretically, the
phenotype of a mouse lacking specific genetic information could provide essential information about the function of the
gene. Over the years, thousands of knockouts have been generated. The knockout-strategy has helped elucidate the
functions of hundreds of genes and has contributed immensely to our biological knowledge. However, there has been one
unexpected surprise: the no-phenotype knockout. This is unexpected because, according to the Darwinian paradigm, all genes should confer a selectable advantage. Hence, knockouts should have measurable, detectable phenotypes. The no-phenotype knockouts demonstrate that genes can be disrupted without, or with only minor, detectable effects on the phenotype. Many genes seem to have no measurable function! This is known as genetic redundancy, and it is one of the big surprises of modern biology.
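As a side note on the breeding step in figure 1, here is a minimal Python sketch (my illustration, not from the article) of Mendel's law of segregation: crossing two carriers of one interrupted allele (+/-) yields, on average, one quarter -/- pups, the full knockouts.

import random

# Each heterozygous parent (+/-) passes one allele at random to each pup
# (Mendel's law of segregation). Crossing two heterozygotes should give
# roughly 1/4 +/+, 1/2 +/-, 1/4 -/- (the full knockout) offspring.
def pup(parent1=("+", "-"), parent2=("+", "-")):
    return (random.choice(parent1), random.choice(parent2))

random.seed(1)
litters = [pup() for _ in range(10_000)]
knockouts = sum(1 for alleles in litters if alleles == ("-", "-"))
print(f"-/- knockouts: {knockouts / len(litters):.1%}")  # close to 25%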
Molecular switches
One of the most intriguing examples of genetic redundancy is found in the SRC gene family. This family comprises a group
of eight genes that code for eight distinct proteins all with a function that is technically known as tyrosine kinase. SRC
proteins attach phosphate groups to other proteins that contain the amino acid tyrosine in a specific amino acid context. The
result of this attachment is that the protein becomes activated; it is switched on, and can hence pass down information in a
signalling cascade. Four closely related members of the family are named SRC, YES, FYN and FGR, and the other related
members are known as BLK, HCK, LCK and LYN. Both groups encode so-called non-receptor tyrosine kinases, which transmit signals from the exterior of the cell to the nucleus, the operation centre where the information present in the genes is transcribed into messenger RNA. The proteins of the SRC gene family operate as molecular switches that regulate growth and differentiation of cells. When a cell is triggered to proliferate, tyrosine kinase proteins are transiently switched on, and then immediately switched off.
The SRC gene family contains some of the most notorious genes known to man, since its members can cause cancer as a consequence of single point mutations. A point mutation is a change in a DNA sequence that alters only one single nucleotide (one DNA 'letter') of the entire gene. When the point mutation is not at a silent position, it will cause the organism's protein-making machines to incorporate a wrong amino acid (the sketch at the end of this section illustrates the difference). The consequence of the point mutation is that the organism now produces a protein that cannot be switched off. Mutated SRC genes are particularly dangerous because they will permanently activate signalling cascades that induce cell proliferation: the signal that tells cells to divide is permanently switched on. The result is uncontrolled proliferation of cells: cancer. The growth-promoting point mutations cannot be overcome by allelic compensation, because a normal protein cannot help to switch off the mutated protein.
Despite the SRC protein being expressed in many tissues and cell types, mice in which the SRC gene has been knocked out are still viable. The only obvious characteristic of the knockout is the absence of two front teeth due to osteopetrosis. In contrast, essentially no point mutations are allowed in the SRC protein without severe phenotypic consequences. Amino-acid-changing point mutations in most, presumably all, of the SRC genes can lead to uncontrolled cellular replication.1 Knockout mouse models have been generated to reveal the functions of all the members of the SRC gene family. Four out of eight knockouts did not have a detectable phenotype. Despite their cancer-inducing properties, half of the SRC genes appear to be redundant.
Standard evolutionary theory tells us that redundant gene family members originated through gene duplications. Duplicated genes are truly redundant and as such they are expected to reduce to a single functional copy over time through the accumulation of mutations that damage the duplicated genes. Such mutations can be frame-shift mutations that introduce premature stop signals, which are recognized by the cellular translation-machines to terminate protein synthesis. The existence of the SRC gene family has been explained as follows: 'In the redundant gene family of SRC-like proteins, many, perhaps almost all point mutations that damage the protein also cause deleterious phenotypes and kill the organism. The genetic redundancy cannot decay away through the accumulation of point mutations.'1
This scenario implies that the SRC genes are destined to reside in the genome forever. Point mutations that immediately kill raise an intriguing origin question. If the SRC genes are really so potently harmful that point mutations induce cancer, how could this extended gene family come into existence through gene duplication and diversify through mutations in the first place? After the first duplication, neither of the genes is allowed to change, because change will invoke a lethal phenotype and kill the organism through cancer. Amino-acid-changing mutations in the SRC genes will permanently be selected against. The same holds true for the third, fourth and additional gene duplications. New gene copies are only allowed to mutate at neutral sites that do not replace amino acids in the protein. Otherwise the organism will die from tumours. Because of this purifying selection mechanism, the duplicates should remain as they are. Yet the proteins of the SRC family are distinctly different, only sharing 60–80% of their sequences.
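For readers who want the point-mutation idea in concrete form, here is a minimal Python sketch (my own; the miniature codon table holds only the codons used) showing how one changed DNA letter can swap an amino acid, while a change at a silent third position leaves it untouched.

# Miniature codon table (DNA codons -> amino acids); only the entries
# needed for this illustration are included.
CODONS = {"GAA": "Glu", "GAG": "Glu", "GTA": "Val", "GTG": "Val"}

def mutate(codon: str, position: int, new_base: str) -> str:
    """Return the codon with a single-nucleotide (point) mutation applied."""
    bases = list(codon)
    bases[position] = new_base
    return "".join(bases)

original = "GAA"                      # encodes glutamate (Glu)
silent = mutate(original, 2, "G")     # GAA -> GAG: still Glu (silent site)
missense = mutate(original, 1, "T")   # GAA -> GTA: Glu -> Val (missense)
print(CODONS[original], CODONS[silent], CODONS[missense])  # Glu Glu Val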
Redundancy: the rule, not the exception
In 1964, a 'knockout' cross-country skier won two gold medals during the Winter Olympics in Innsbruck. In true Olympic tradition, Eero Maentyranta's 15 km and 30 km success was surrounded by controversy. Tests showed that he had 15% more red blood cells than normal subjects, and Eero was accused of using doping to increase his level of red blood cells. Yet no trace of blood doping could be found. In 1964 nobody knew why, but modern biology showed that Maentyranta had a mutated EPO receptor gene. EPO (erythropoietin) is a messenger protein that tells the bone marrow to increase the production of red blood cells. To increase red blood cell levels, EPO binds to the EPO receptor, which generates two opposite signals: one to instruct bone marrow cells to become red blood cells (the on-switch) and one to reduce production of red blood cells (the off-switch). This auto-regulatory mechanism assures a balanced production of red blood cells. In 1993, it turned out that the Olympic medallist had a mutation that knocked out the off-switch.2 The EPO receptor of the Finnish athlete generated a normal activation signal, but not the deactivating one. People can do well without the off-switch.
In humans, the muscle-fibre-producing ACTN3 gene can also be missing entirely, without consequences for fitness.3 Humans can also do without the GULO gene,4 the gene coding for caspase 12,5 the CCR5 gene6 and some of the GST genes that are involved in the detoxification of polycyclic aromatic hydrocarbons present in cigarette smoke.7 All these genes can be found inactivated in entire human populations (GULO, caspase 12) or subpopulations thereof. The Douc Langur (Pygathrix nemaeus), an Asian leaf-eating Colobine monkey, is the natural no-phenotype knockout for the angiogenin gene, which codes for a small protein that stimulates the formation of blood vessels.8 Bacterial genomes can be reduced by over 9% without selective disadvantages on minimal medium,9 and mice in which 3 megabases of conserved DNA were erased showed no signs of reduced survival and there was no indication of overt pathology.10 Fewer than 2% of approximately 200 Arabidopsis thaliana (Mouse-Ear Cress) knockouts displayed significant phenotypic alterations. Many of the knockouts did not affect plant morphology even in the presence of severe physiological defects.11 In the nematode worm Caenorhabditis elegans a surprising 89% of single-copy and 96% of duplicate genes show no detectable phenotypic effect when they are knocked out.12
Prion proteins are thought to have a function in learning processes, but when they are misfolded they can cause bovine spongiform encephalopathy (BSE) or Creutzfeldt-Jakob disease. In order to make BSE-resistant cows, a knockout breed has been created lacking the prion protein. A thorough health assessment of this knockout breed revealed only small differences from wild-type animals. Apparently, cows can thrive very well without the prion protein.13 Research on histone H1 genes, once believed to be indispensable for DNA condensation, suggests that any individual H1 subtype is not necessary for mouse development, and that loss of even two subtypes is tolerated if a normal H1-to-nucleosome stoichiometry is maintained.14 Even complete, highly specialized cells can be redundant: a strain of laboratory mouse, named WBB6F1, lacks a specific type of blood cell known as mast cells.
The reported no-phenotype knockouts are probably only the tip of the iceberg. As reported in Nature, few knockout organisms in which no phenotype could be traced ever see the light of day: 'a lot of those things [no-phenotype knockouts] you don't hear about'. No-phenotype knockouts are negative results, and as such they are usually not reported in scientific journals, because they do not have news value. To address the problem, the journal Molecular and Cellular Biology has since 1999 had a section given over to knockout and other mutant mice that seem perfectly normal.15
So how are genes, cells and organisms supposed to have evolved without selective constraints? If organisms can do without complete cells, it would be outlandish to assert that natural selection was the driving force shaping those cells. Two decades of knockout experiments have made it clear that genetic redundancy is a major characteristic of all studied life forms.
Paradigm lost
Genetic redundancy falsifies several evolutionary hypotheses. Firstly, truly redundant genes present a paradox, because natural selection cannot prevent the accumulation of harmful mutations in these genes. Hence, natural selection
cannot prevent redundancies from being lost. Secondly, redundant genes do not evolve (mutate) any faster than essential
genes. If protein evolution is due in large part to neutral and slightly deleterious amino acid substitutions, then the incidence
of such mutations should be greater in proteins that contribute less to individual reproductive success. The rationale for this
prediction is that non-essential proteins should be subject to weaker purifying selection and should accumulate mildly
deleterious substitutions more rapidly. This argument, which was presented over twenty years ago, is fundamental to many
theoretical applications of evolutionary theory, but despite intense scientific scrutiny the prediction has not been confirmed.
In contrast, a systematic analysis of mouse genes has shown that essential genes do not evolve more slowly than non-essential ones.16 Likewise, E. coli proteins that operate in huge redundant networks can tolerate just as many mutations as
unique single-copy proteins,17 and scientists comparing the human and chimpanzee genomes found that non-functional
pseudogenes, which can be considered as redundancies, have similar percentages of nucleotide substitutions as do
essential protein-coding genes.18 Thirdly, as discussed in more detail below, several recent biology studies have provided
evidence that genetic redundancy is not associated with gene duplications.
What does the evolutionary paradigm say?
An important question that needs to be addressed is: can we understand genetic redundancy from Darwin's natural selection perspective? How can genetic redundancy be maintained in the genome without natural selection acting upon it continually? How did organisms evolve genes that are not subject to natural selection? First, let's look at how it is thought genetic redundancies arise. Susumu Ohno's influential 1970 book, Evolution by Gene Duplication, deals with this idea.19 Sometimes, during cell division, a gene or a longer stretch of biological information is duplicated. If the duplication occurs in germ-line cells and becomes heritable, the exact same gene may be present twofold in the genome of the offspring: a genetic back-up. Ohno argues that gene and genome duplications are the principal forces that drive the increasing complexity of Darwinian evolution, referring to the evolution from microbes to microbiologists. He proposes that duplications of genetic material provide genetic redundancies which are then free to accumulate mutations and adopt novel biological functions. Duplicated DNA elements are not subject to natural selection and are free to transform into novel genes. With time, he argues, a duplicated gene will diverge with respect to expression characteristics or function due to accumulated (point) mutations in the regulatory and coding segments of the duplicate. Duplicates transforming into novel genes with a selective advantage will certainly be favoured by natural selection. Meanwhile, the genetic redundancy will protect old functions as new ones arise, hence reducing the lethality of mutations. Ohno estimates that for every novel gene to arise through duplication, about ten redundant copies must 'join the ranks of functionless DNA base sequence'.20
Diversification of duplicated genetic material is now the accepted standard evolutionary idea on how genomes gain useful information. Ohno's idea of evolution through duplication also provides an explanation for the no-phenotype knockouts: if genes duplicate fairly often, it is reasonable to expect some level of redundancy in most genomes, because duplicates provide an organism with back-up genes. As long as duplicates do not change too much, they may substitute for each other. If one is lost, or inactivated, the other one takes over. Hence, Ohno's theory predicts an association between genetic redundancy and gene duplication.
The evolutionary paradigm is wrong
Figure 2. A very simple scheme of a small robust network comprised of nodes A to E, where several nodes are redundant.
Some biologists have looked into this matter specifically, using the wealth of genetic data available for Saccharomyces cerevisiae, the common baker's yeast. A surprising 60% of Saccharomyces genes could be inactivated without producing a phenotype. In 1999, Winzeler and co-workers reported in Science that only 9% of the non-essential genes of Saccharomyces have sequence similarities with other genes present in the yeast's genome and could thus be the result of duplication events.21 Most redundant genes of Saccharomyces are not related to other genes in the yeast's genome, which suggests that genetic duplications cannot explain genetic redundancy. In 2000, Andreas Wagner confirmed Winzeler's original findings that weak or no-effect (i.e. non-essential and redundant) genes are no more likely to have paralogous (that is, duplicated) genes within the yeast genome than genes that do result in a defined phenotype when they are knocked out. Wagner concluded that the robustness of mutant strains cannot be caused by gene duplication and redundancy, but is more likely due to the interactions between unrelated genes.22 More recent studies have shown that cooperating networks of unrelated genes contribute significantly more to robustness than gene copy number.23
Redundant genes are proposed to have originated in gene duplications, but we do not find a link between genetic redundancy and duplicated genes in the genomes. Gene duplication is not a major contributor to genetic redundancy, and the robust genetic networks found in organisms cannot be explained by it. The predicted association between genetic redundancy and gene duplication is non-existent. Ohno's interesting idea of evolution by gene duplication therefore cannot be right.
The non-linearity of biology
The no-phenotype knockouts can only be explained by taking into account the non-linearity of biochemical systems. It is ironic that standard wall charts of biochemical reactions show hundreds of coupled reactions working together in networks, while graduate students are tacitly encouraged to think in terms of linear cause and effect. The linear cause-and-effect thinking of ancient Greek philosophy was adopted by nineteenth-century European scholars, and it still dominates most fields of science, including biology. We cannot understand genetic redundancy and biological robustness in linear terms of single causality, where A causes B causes C causes D causes E. Biological systems do not work like that. Biological systems are designed as redundant, scale-free networks. In a scale-free network the distribution of node linkage follows a power law: many nodes have only a few links, progressively fewer nodes have an intermediate number of links, and very few nodes (the hubs) have a high number of links. A scale-free network is very much like the Golden Orb spider's web: individual nodes are not essential for letting the system function as a whole. The internet is another example of a robust scale-free network: most websites make only a few links, a lesser fraction make an intermediate number of links, and a small minority make the majority of links. Hundreds of routers routinely malfunction on the Internet at any moment, but the network rarely suffers major disruptions. As many as 80% of randomly selected Internet routers can fail, but the remaining ones will still form a compact cluster in which there is still a path between any two nodes (a toy simulation follows at the end of this section).24 Likewise, we rarely notice the consequences of the thousands of errors that routinely occur in our cells.
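As promised above, the router claim can be checked with a toy simulation. The following sketch (mine, with arbitrary parameters and only the Python standard library) grows a scale-free network by preferential attachment, removes 80% of the nodes at random, and reports how many survivors still sit in one connected cluster.

import random
from collections import defaultdict

def preferential_attachment(n, m=2):
    """Grow a scale-free network: each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    edges, targets = [(0, 1)], [0, 1]
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(random.choice(targets))   # degree-biased choice
        for old in chosen:
            edges.append((new, old))
            targets += [new, old]                # update degree weights
    return edges

def largest_cluster(nodes, edges):
    """Size of the largest connected component among surviving nodes."""
    adj = defaultdict(set)
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b); adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], 0
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v); comp += 1
            stack.extend(adj[v] - seen)
        best = max(best, comp)
    return best

random.seed(0)
n = 2000
edges = preferential_attachment(n)
survivors = set(random.sample(range(n), int(0.2 * n)))  # 80% random failure
print(largest_cluster(survivors, edges), "of", len(survivors), "still clustered")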
Scale-free networks
Genes never operate alone, but in redundant scale-free networks with an incredible level of buffering capacity. In the simple non-linear biological system presented in figure 2, with nodes A through E, A may cause B, but A also causes D independently of B and C. This very simple network of only five nodes demonstrates robustness due to the redundancy of B and C: if A fails to make the link with D, there are still B and C to make the connection (see the sketch at the end of this section). Extended networks composed of hundreds of interconnected proteins ensure that if one component becomes inactivated by a mutation, essential pathways are not immediately shut down. A network of cooperating proteins that can substitute for or bypass each other's functions makes a biological system robust. It is hard to imagine how selection acts on individual nodes of a scale-free, redundant system. Complex engineered systems rely on scale-free networks that can absorb small failures in order to prevent larger failures. In a sense, cooperating scale-free networks provide systems with an anti-chaos module which is required for stability and strength. Scale-free genetic and protein networks are an intrinsic, engineered characteristic of genomes and may explain why genetic redundancy is so widespread among organisms. Genetic networks usually serve to stabilize and fine-tune the complex regulatory mechanisms of living systems. They control homeostasis, regulate the maintenance of genomes and provide regulatory feedback on gene expression. An overlap in the functions of proteins also ensures that a cell does not have to respond with only 'on' or 'off' in a particular biochemical process, but instead may operate somewhere in between.
Most genes in the human genome are involved in regulatory networks that detect and process information in order to keep the cell informed about its environment. The proteins operating in these networks come as large gene families with overlapping functions. In a cascade of activation and deactivation of signalling proteins, external messages are transported to the nucleus with information about what is going on outside, so the cell can respond adequately. If one of the interactions disappears, this will not immediately disturb the balance of life. The buffering capacity present in redundant genetic networks also provides the robustness that allows living systems to propagate in time. In a linear system, one detrimental mutation would immediately disable the system as a whole: the strength of a chain is determined by its weakest link. Interacting biological networks, where parallel and converging links independently convey the same or similar information, almost never fail. The Golden Orb spider's web only crumbles when an entire spoke is obliterated in a crash with a dragonfly, an event that will hardly ever happen. Biological systems operate as a spider's web: many interacting and interwoven nodes produce robust genetic networks and are responsible for genetic redundancy.23
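As the sketch promised above: a minimal Python rendering of figure 2's logic. The exact wiring is my assumption, since the article gives only a verbal description; the snippet checks that E stays reachable from A even when intermediate nodes are knocked out.

# Hypothetical wiring for the small robust network of figure 2:
# A feeds B and C, both of which feed D; A also links to D directly;
# D feeds E. Redundant routes make single-node failures survivable.
NETWORK = {"A": {"B", "C", "D"}, "B": {"D"}, "C": {"D"}, "D": {"E"}, "E": set()}

def reachable(net, start, goal, dead=frozenset()):
    """Depth-first search that skips knocked-out ('dead') nodes."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node in seen or node in dead:
            continue
        seen.add(node)
        stack.extend(net[node])
    return False

print(reachable(NETWORK, "A", "E"))                   # True: intact network
print(reachable(NETWORK, "A", "E", dead={"B", "C"}))  # True: direct A->D link
print(reachable(NETWORK, "A", "E", dead={"D"}))       # False: D is a hub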
Conclusion
Genetic redundancy is an amazing property of genomes and has only recently become evident as a result of negative
knockout experiments. Protein-coding genes and highly conserved regions can be eliminated from the genome of model
organisms without a detectable effect on fitness. There is no association between redundant genes and gene duplications,
and redundant genes do not mutate faster than essential genes. Genetic redundancy stands as an unequivocal challenge to
the standard evolutionary paradigm, as it questions the importance of Darwin's selection mechanism as a major force in the
evolution of genes. It is also important to realize that redundant genes cannot have resided in the genome for millions of
years, because natural selection, a conservative force, cannot prevent their destruction due to debilitating mutations.
Mainstream biologists who are educated in the Darwinian framework are unable to understand the existence of genes
without natural selection. This is clear from a statement in Nature a few years ago by Mario Capecchi, a pioneer in the development of knockout technology: 'I don't believe that there is a single [knockout] mouse that does not have a phenotype. We just aren't asking the right questions.'15 The right question to ask is: is the evolutionary paradigm wrong? My
answer is yes, it is. Current naturalistic theories do not explain what scientists observe in the genomes. Genetic redundancy
is the actual key to help us understand the robustness of organisms and also their built-in flexibility to rapidly adapt to
different environments. In part 2 of this series of articles, I will explain genetic redundancy in the context of baranomes, the
multipurpose genomes baramins were originally designed with in order to rapidly spread to all the corners and crevices of
the earth.
The design of life, part 3: an introduction to variation-inducing genetic elements
by Peter Borger
The inheritance of traits is determined by genes: long stretches of DNA that are passed down from generation to generation. Usually, genes consist of a coding part and a non-coding regulatory part. The coding part of the gene determines the functional output, whereas the non-coding portion contains switches and units that determine when, where and how much of the functional output should be generated. Point mutations in the coding part are predominantly neutral or slightly detrimental genetic noise that accumulates in the genome, whereas point mutations in the regulatory part of DNA units can induce variation with respect to the amount of output. Previously, in part 2, I argued that created kinds were frontloaded with baranomes: that is, pluripotent genomes with an ability to induce variation from within. The output of (morpho)genetic algorithms present in the baranome can readily be modulated by variation-inducing genetic elements (VIGEs). VIGEs are frontloaded genetic elements normally referred to as endogenous retroviruses, insertion sequences, LINEs, SINEs, micro-satellites, transposons, and the like. In the present report, these transposable and repetitive DNA sequences are redefined as VIGEs, which solves the RNA virus paradox. The (morpho)genetic algorithms were designed in such a way that VIGEs easily integrated into them and became a part of them, hence making the program explicit.
The variation that Darwin saw in pigeons can be explained by the activation or deactivation of existing genetic sequences for feather production in different parts of the body. This gives no basis for asserting that pigeons could change into something which is not a pigeon.
In order to fight off invading bugs and parasites, higher organisms have an elaborate mechanism that induces variation in immunological defence systems. One particular type of immune cell (the B cell) produces defence proteins known as immunoglobulins. Immunoglobulins are very sticky; they bind to intruders as biological tags and mark them as alien. Other cells of the immune system then recognize the intruder, and a destruction cascade is activated. To have a tag available for every possible alien intruder, millions of B cells each have their own highly specific gene for immunoglobulin production. In the genome there is only limited storage space for biological information, so how can there be millions of genes? Well, there aren't. Immunoglobulin genes are assembled from several pre-existing DNA sequences that can be independently put together. The part of the immunoglobulin that does the alien recognition contains several domains which are each highly variable. Every single B cell forms a unique immunoglobulin gene by picking from several short pre-existing DNA sequences (the sketch below gives a feel for the numbers involved). We also observe that later generations of immunoglobulins are more specific than the earlier generations, in the sense that they bind more tightly to invading microorganisms. Binding affinity to an invader is equivalent to recognition of that invader. And the better the immune system is able to recognize an intruder, the better it is able to clear it. The increased specificity is due to somatic mutations deliberately introduced into the genes of the immunoglobulins. A mechanism to rapidly induce mutations in immunoglobulin genes is present in the B cell genome. This mechanism ensures that the recognition pattern specified by the genes becomes increasingly specific for the intruder. This ability to recognize and defeat all potential microorganisms is characteristic of the immune systems of higher organisms, including humans. The genomes contain all the necessary biological information required to induce variation from within. A flexible genome is required to effectively ward off diseases and parasitic infections. B cells don't wait for mutations to happen; they generate the necessary mutations themselves.
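The combinatorial assembly described above comes down to simple multiplication. A minimal sketch with segment counts of the same order of magnitude as those reported for human antibody loci (the exact numbers here are illustrative assumptions, not the article's figures):

# Illustrative segment counts (order-of-magnitude only) for combinatorial
# assembly of an antibody gene from pre-existing V, D and J segments.
V, D, J = 40, 25, 6              # assumed heavy-chain segment pools
heavy = V * D * J                # one pick from each pool per B cell
light = 40 * 5                   # assumed light-chain V x J pool
print(heavy * light)             # 1,200,000 distinct combinations

Junctional variation and the somatic mutations described above multiply this base figure much further, which is how a genome of limited size can cover a practically open-ended range of intruders.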
Darwin revisited
Previously, in part 2,1 I argued that organisms are equipped with flexible, highly adaptable, pluripotent, multipurpose
genomes. Organisms are able to conquer the world through adaptive radiation of baranomes. But how do baranomes
unleash information? Do organisms have to wait for selectable mutations to occur in order to rapidly invade and occupy
novel ecological niches? Or were the baranomes of created kinds equipped with mechanisms to rapidly induce mutations,
similar to the variation generated by B cells? Let's turn to Darwin's The Origin of Species, where we will find some clues.
Darwin wrote quite extensively on variation, and in particular on the variation of feather patterns in pigeons:
Box 1. Common names of some well-known variation-inducing genetic elements (VIGEs) in prokaryotes (bacteria) and
eukaryotes (yeast, plants, insects and mammals).
'Some facts in regard to the colouring of pigeons well deserve consideration. The rock-pigeon is of a slaty-blue, and has a white rump (the Indian sub-species, C. intermedia of Strickland, having it bluish); the tail has a terminal dark bar, with the bases of the outer feathers externally edged with white; the wings have two black bars; some semi-domestic breeds and some apparently truly wild breeds have, besides the two black bars, the wings chequered with black. These several marks do not occur together in any other species of the whole family. Now, in every one of the domestic breeds, taking thoroughly well-bred birds, all the above marks, even to the white edging of the outer tail-feathers, sometimes concur perfectly developed. Moreover, when two birds belonging to two distinct breeds are crossed, neither of which is blue or has any of the above specified marks, the mongrel offspring are very apt suddenly to acquire these characters; for instance, I crossed some uniformly white fantails with some uniformly black barbs, and they produced mottled brown and black birds; these I again crossed together, and one grandchild of the pure white fantail and pure black barb was of as beautiful a blue colour, with the white rump, double black wing-bar, and barred and white-edged tail-feathers, as any wild rock pigeon! We can understand these facts, on the well-known principle of reversion to the ancestral characters, if all the domestic breeds have descended from the rock-pigeon.'2
Darwin argues, and correctly so, that all domestic pigeon breeds have descended from the rock-pigeon. He even knew, as demonstrated
above, how to breed the rock-pigeon from several distinct pigeon races following a breeding pattern. Darwin describes a 'breeding algorithm' for pigeons, to obtain the ancestor of all pigeons! But does he also describe an algorithm for breeding turkeys from pigeons? No. Darwin doesn't know such an algorithm. If he had found an algorithm for breeding ducks or magpies from pigeon genomes, he would have had solid evidence in favour of his proposal On The Origin of Species Through the Preservation of Favoured Races. His breeding experiments led him to discover the principle of reversion to ancestral characters, but contrary to common Darwinian wisdom, it is also the falsifying observation to his proposal for the origin of species. The observation that pigeons bring forth pigeons, and nothing else but pigeons, is not exactly the evidence needed to argue for the common descent of all birds. On the contrary! Darwin's breeding experiments demonstrated that a pigeon is a pigeon is a pigeon. Characteristics and traits within single species of pigeons may vary tremendously, but he always started and ended with pigeons. Breeding experiments have always shown, without exception, that novel and distinct bird species do not arise through artificial selection. Even Darwin argues that there is no doubt that all varieties of ducks and rabbits have descended from the common wild duck and rabbit.3 From the variation Darwin observed in wild and domesticated populations, it does not follow that rabbits and ducks have some hypothetical common ancestor in a fuzzy distant past. Darwin observed inborn, innate variation that already existed in the genomes of the pigeons and only had to be activated or expressed.
From the excerpt above, we may even get an impression of how it works. A genetic algorithm for making feathers (a feather program) is part of the pigeon's genome and is present in every single cell. The feather program is present in billions of pigeon cells, but it is NOT active in all those cells. Feathers are only formed when the program is activated. The feather program is silent in cells where it should normally not operate. Activation of the feather program in the wrong cells may often be incompatible with life, but sometimes it may produce pigeons with (reversed) feathers on the feet. The program may be derepressed or activated through a mechanism that operates in the pigeon's genome. Whether feathers appear on the feet or on the head, and whether they appear normal or reversed, is merely a matter of activation and regulation of the feather program. But Darwin didn't know about silent genomic programs or how they could become active. He didn't know about gene regulation and molecular switches. Darwin did not know anything about genes and genomes.
Analogous variation
The idea that Darwin had been working on for over two decades prior to the publication of Origin, his idée fixe, was how organic change (i.e. variation) present in populations might explain how novel species came into being. Unchanging, stable species are not what Darwin had in mind. He pondered the riddles of variation; he thought about laws and principles associated with the process of variation and believed he could disclose them by studying the formation of new breeds. Drawing from what he knew about pigeon breeding and equine varieties, Darwin describes some of his ideas about the laws of variation in chapter five of Origin:
'Distinct species present analogous variations; and a variety of one species often assumes some of the characters of an allied species, or reverts to some of the characters of an early progenitor. These propositions will be most readily understood by looking to our domestic races. The most distinct breeds of pigeons, in countries most widely apart, present sub-varieties with reversed feathers on the head and feathers on the feet, characters not possessed by the aboriginal rock-pigeon; these then are analogous variations in two or more distinct races.'4
Darwin describes how exactly the same traits can appear in distinct breeds of pigeons and, importantly, how these traits appeared independently in countries most widely apart. If several breeds arrive at the same characteristics independently, it is unlikely they do so by chance. Rather, the pigeon genomes may activate or derepress the same feather program independently. The effect is that distinct breeds in countries most widely apart acquire the same characteristics. Over and over, the same traits appear in separated populations of organisms as the result of mutations from within. Animal breeders like exuberant patterns and rarities; that is exactly what they are looking for to select. Aberrant traits that are normally under stringent negative selection, as might be the case for the pigeon's reversed feathers, may readily become visible as soon as the selective pressure is relieved; that is, when organisms are reared and fed in the protective environment of captivity. Darwin called this phenomenon of independent acquisition of the same traits analogous variation. It is a common phenomenon well known to breeders, and Darwin easily found more examples of it:
'The frequent presence of fourteen or even sixteen tail-feathers in the pouter, may be considered as a variation representing the normal structure of another race, the fantail. I presume that no one will doubt that all such analogous variations are due to the several races of the pigeon having inherited from a common parent the same constitution and tendency to variation, when acted on by similar unknown influences. In the vegetable kingdom we have a case of analogous variation, in the enlarged stems, or roots as commonly called, of the Swedish turnip and Ruta baga [sic] plants which several botanists rank as varieties produced by cultivation from a common parent: if this be not so, the case will then be one of analogous variation in two so-called distinct species; and to these a third may be added, namely, the common turnip. According to the ordinary view of each species having been independently created, we should have to attribute this similarity in the enlarged stems of these three plants, not to the vera causa of community of descent, and a consequent tendency to vary in a like manner, but to three separate yet closely related acts of creation.'5
Analogous variation originates in the genome. Through rearrangement and/or transposition of DNA elements, previously silent (cryptic) traits can be activated. The underlying molecular mechanism can't be merely random; if it were, then Darwin, and other breeders, would not have observed the expression of the same traits independently of each other. A more contemporary translation of analogous variation would be non-random (or non-stochastic) variation, and it implies some sort of mechanism.
Reversions
In the excerpt above, Darwin also describes what he calls reversions. By this term he meant traits that are present in ancestors, then disappear in first-generation offspring, and then reappear in subsequent generations. Darwin acknowledged that unknown laws of inheritance must exist, but still he talks about 'the proportion of blood'. Reversions are easily explained as traits present on separate chromosomes, and the inheritance of such traits is best understood from Gregor Mendel's inheritance laws. Through his discovery of the genetic laws that underlie the inheritance of traits associated with chromosome segregation (a hallmark of sexual reproduction), Mendel gave us a quantum theory of inheritance. He found that traits are always inherited in well-defined and predictable proportions, and do not just come and go. Darwin's reversions are traits that reappear in later generations due to the inheritance of the same genes (alleles) from both parents.5 Darwin didn't know about Mendel's laws of inheritance, nor did he know how variation is generated in genomes. What Darwin described in Origin, however, is that variation in offspring is a rule of biology. What Darwin described in isolated species (whether domesticated breeds or island-bound birds) was the result of a burst of abundant speciation resulting from multipurpose genomes. Variant breeds of pigeons are the phenotypes of a rearranged multipurpose pigeon genome. The Galápagos finches (with their distinct beaks and body sizes) are the phenotypes of a rearranged multipurpose finch genome. Where does the variation stem from in populations of Galápagos finches? Darwin was well aware of the profound lack of knowledge on the origin of variation, and did not exclude mechanisms or laws that drive biological variation:
'I have hitherto sometimes spoken as if the variations so common and multiform in organic beings under domestication, and in a lesser degree in those in a state of nature had been due to chance. This, of course, is a wholly incorrect expression, but it serves to acknowledge plainly our ignorance of the cause of each particular variation.'6
Since Darwin's days, almost all corners of the living cell have been explored and our biological knowledge has expanded greatly. Through a vast library of data generated by new research in biology, we now have the answers to many questions of a biological nature that had puzzled Darwin. We may also have the answer to the cause of each particular variation, although we may not be aware of it (yet). That is not because it is hidden between billions of other books and hard to find. No, it is because of the Darwinian paradigm. The mechanism(s) that drive biological variation have been elucidated but are not yet recognized as such.
One of the findings of the new biology was that the DNA of most (if not all) organisms contains jumping genetic elements. The mainstream opinion is that these elements are the remnants of ancient invasions of RNA viruses. RNA viruses are a class of viruses that use RNA molecule(s) for information storage. Some of them, such as influenza and HIV, pose an increasing threat to human health. Are virus invasions responsible for all the beautiful intricate complexity of organic beings? Is a virus a creator? Most likely it is not. Otherwise, why would we pump billions of dollars into research to fight off viruses? Could it be that mainstream science is mistaken?
The RNA virus paradox
Here is one good reason for believing that mainstream science is indeed mistaken: the RNA virus paradox. It has been proposed that RNA viruses have a long evolutionary history, appearing with, or perhaps before, the first cellular life forms.7 Molecular genetic analyses have demonstrated that genomes, including those of humans and primates, are riddled with endogenous retroviruses (ERVs), which are currently explained as the remnants of ancient RNA virus invasions. RNA virus origin can be estimated using homologous genes found in both ERVs and modern RNA virus families. By using the best estimates for rates of evolutionary change (i.e. nucleotide substitution) and assuming an approximate molecular clock,8,9 the families of RNA viruses found today could only have appeared very recently, probably not more than about 50,000 years ago (the arithmetic sketch at the end of this section shows the form of such a calculation).10 These data imply that present-day RNA viruses may have originated much more recently than our own species. The implication of a recent origin of RNA viruses, together with the presence of genomic ERVs, poses an apparent paradox that has to be resolved. I will argue that, in order to resolve the paradox, we should abstain from the mainstream idea that ERVs are remnants of ancient RNA virus invasions.
Solving the RNA virus paradox can only be accomplished by asking questions. First, we have to ask ourselves: what do scientists mean when they refer to genetic elements as endogenous retroviruses (ERVs)? In addition, we have to ask: how do ERVs behave, and what, if any, are their functions? ERVs have been extensively studied in microorganisms, such as baker's yeast (Saccharomyces cerevisiae) and the common gut bacterium Escherichia coli. Most of our knowledge on the mechanisms of transposition of ERVs comes from those two organisms. In yeast, the ERV known as Ty is flanked by long terminal repeats and specifies two genes, gag and pol, which are similar to genes found in free-operating RNA viruses. This is the main reason why scientists believe RNA viruses and ERVs are evolutionarily closely related. The long terminal repeats enable the ERV to insert into the host's DNA. The transposition and integration is a stringently regulated process and seems to be target- or site-specific.11,12 During the transposition of an ERV, the host's RNA polymerase II makes an RNA template, which is polyadenylated to become messenger RNA. The gag and pol mRNAs are translated and cleaved into several individual proteins. The gag gene specifies a polyprotein that is cleaved into three proteins, which form a capsid-like structure surrounding the ERV's RNA. We may ask here: why is a capsid involved? It should be noted that single-stranded RNA molecules are very sticky nucleotide polymers, and the capsid may prevent the ERV from sticking in the wrong places. The capsid may also be required to direct the ERV to the right spots in the genome. The pol polyprotein is cleaved into four enzymes: protease, reverse transcriptase, RNase and integrase. Protease cleaves the polyproteins into the individual proteins, and then the RNA and proteins are packed into a retrovirus-like particle. Reverse transcriptase forms a single-stranded DNA molecule from the ERV RNA template, whereas RNase removes the RNA. The DNA is then circularized and the complementary DNA strand is synthesized to create a double-stranded, circular copy of the ERV, which is then integrated into a new site in the host's genomic DNA by the activity of integrase. This intricate mechanism for transposition of ERVs seems to be irreducibly complex (and thus a sign of intelligent design), since all ERVs and RNA viruses use the same or similar genetic components.
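The molecular-clock reasoning cited above is, at bottom, simple arithmetic: two diverging lineages each accumulate substitutions, so the time since divergence is the observed divergence divided by twice the substitution rate. A minimal sketch with invented round numbers (not the published values):

# Toy molecular-clock calculation (illustrative numbers only).
# Two lineages accumulate substitutions independently, so
#   time since divergence t = d / (2 * r)
# where d = fraction of sites that differ and r = substitutions/site/year.
d = 0.10          # assumed observed divergence between two viral lineages
r = 1e-6          # assumed substitution rate per site per year
t = d / (2 * r)
print(f"estimated divergence time: {t:,.0f} years")  # 50,000 years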
Variation-inducing genetic elements (VIGEs)
What can the function, if any, of ERVs be? If we follow the mainstream opinion, ERVs integrated into the genomes a very long time ago as viral infections. Currently, ERVs are not particularly helpful. They merely hop around in the genome as selfish genetic elements that serve no function in particular; they mainly upset the genome. Long ago, however, RNA viruses are alleged to have significantly contributed to evolution by helping to shape the genome. It's hard to imagine this story being true, and not only because of the RNA virus paradox. Modern viruses usually do not integrate into the DNA of germ-line cells; that is, the genes of an RNA virus don't usually become a part of the heritable material of the infected host. If we obey the uniformitarian principle, we are allowed to argue: what currently doesn't happen didn't happen a long time ago, either.
To answer the question raised above, we must start by finding out more about some biological characteristics of a less complicated jumping genetic element, the so-called insertion-sequence (IS) element. IS elements are DNA transposons abundantly present in the genomes of bacteria. IS elements share an important characteristic with ERVs: transposition. Genome shuffling takes place in bacteria so frequently that we can hardly speak of a specific gene order. The shuffling of pre-existing genetic elements may unleash cryptic information instantly as the result of position effects. Shuffling seems to be an important mechanism to generate variation. But what is the mechanism for genome shuffling? The answer to this question comes unexpectedly from evolutionary experiments, in which genetic diversity (evolutionary change) was determined between reproducing populations of E. coli. During the breeding experiment, which ran for two decades, it was observed that the number and location of IS (insertion sequence) elements dramatically changed in evolving populations, whereas point mutations were not abundant.13 After 10,000 generations of bacteria, the genomic changes were mostly due to duplication and transposition of IS elements. A straightforward conclusion would thus be that jumping genetic elements, such as the IS elements, were designed to deliberately generate variation: variation that might be useful to the organism. In 2004, Lenski, one of the co-authors of the studies, demonstrated that the IS elements indeed generate fitness-increasing mutations.14 In E. coli bacteria, IS elements activate cryptic, or silent, catabolic operons: sets of genetic programs for food digestion. It has been reported that IS element transposition overcomes reproductive stress situations by activating cryptic operons, so that the organism can switch to another source of food. IS elements do so in a regulated manner, transposing at a higher rate in starving cells than in growing cells. In at least one case, IS elements activated a cryptic operon during starvation only if the substrate for that operon was present in the environment.15
It is clear that in Lenski's experiments, the IS
elements did not evolve over night. Rather, the IS elements reside in the genome of the original strain. During the two
decades of breeding, the IS elements duplicated and jumped from location to location. There was ample opportunity to
shuffle genes and regulatory sequences, and plenty of time for the IS elements to integrate into genes or to simply redirect
regulatory patterns of gene expression. Microorganisms may thus induce variation simply through shuffling the order of
genes and put old genes in new contexts: variation through position effects that can be inherited and propagated in time. Its
hardly an exaggeration to state that jumping genetic elements specified by the bacteriums genome generated the new
phenotypes.Transposition of IS elements is mostly characterized by local hopping, meaning that novel insertions are usually
in the proximity of the previous insertion and may be a more-or-less random phenomenon; the site of integration isnt
sequence dependent. Bacteria have a restricted set of genes and they divide almost indefinitely. Therefore, sequencedependent insertion and stringent regulation of transposition may not be required for IS-induced reshuffling of bacterial
genomes; in a population of billions of microorganisms all possible chromosomal rearrangements may occur due to
stochastic processes. In higher organisms the order of genes in the chromosomes is more important, but there is no
reason to exclude jumping genetic elements as a factor affecting the expression of genetic programs through position
effects. Transposable elements may therefore be a class of variation-inducing genetic elements (VIGEs) in higher
organisms. Indeed, ERVs, LINEs and SINEs resemble IS elements in bacteria in that they are able to transpose. In fact,
these elements may be responsible for a large part of the variability observed in higher organisms and may even be
responsible for adaptive phenotypes. The genomic transposition of VIGEs is not just a random process. As observed
for Ty elements in yeast, integration of all VIGEs may originally have been designed as site or sequence specific. It should
be noted that VIGEs might qualify as redundant genetic elements, of which the control over translocation may have
deteriorated over time.
VIGEs in humans
Mobile genetic elements make up a considerable part of the eukaryotic genome and have the ability to integrate into the genome at a new site within their cell of origin. Mobile genetic elements of several classes make up more than one third of the human genome. Human endogenous retroviruses (ERVs) are, as with yeast ERVs, first transcribed into RNA molecules as if they were genuine coding genes. Each RNA is then transformed into a double-stranded RNA-DNA hybrid through the action of reverse transcriptase, an enzyme specified by the retrotransposon itself. The hybrid molecule is then inserted back into the genome at an entirely different location. The result of this copy-paste mechanism is two identical copies at different locations in the genome. More than 300,000 sequences that classify as ERVs have been found in the human genome, comprising about 8% of the entire human DNA.16

Figure 1. Variation-inducing genetic elements (VIGEs) are found throughout all biological domains, ranging from bacteria to mammals. In yeast, insects and mammals we observe similar designs. (Homologous sequences are indicated by the same colour.)

Long terminal repeat (LTR) retrotransposons are transcribed into RNA, reverse transcribed into an RNA-DNA hybrid and reinserted into the genome. LTR retrotransposons and retroviruses are very similar in structure. Both contain gag and pol genes (figure 1), which encode a viral particle coat (GAG), reverse transcriptase (RT), ribonuclease H (RH) and integrase (IN). These genes provide proteins for the conversion of RNA into complementary DNA and facilitate insertion into the genome. Examples of LTR retrotransposons are human endogenous retroviruses (HERVs). Unlike RNA retroviruses, LTR retrotransposons lack the envelope proteins that facilitate movement between cells.

Non-LTR retrotransposons, such as long interspersed elements (LINEs), are long stretches (4,000 to 6,000 nucleotides) of reverse-transcribed RNA molecules. LINEs have two open reading frames: one encoding an endonuclease and reverse transcriptase, the other a nucleic acid binding protein (figure 1). There are approximately 900,000 LINEs in the human genome, i.e. about 21% of the entire human DNA. LINEs are found in the human genome in very high copy numbers (up to 250,000).17

Short interspersed elements (SINEs) constitute another class of VIGEs that may use an RNA intermediate for transposition. SINEs do not specify their own reverse transcriptase and are therefore retroposons by definition. They may be mobilized for transposition by using the enzymatic activity of LINEs. About one million SINEs make up another 11% of the human genome. They are found in all higher organisms, including plants, insects and mammals. The most common SINEs in humans are Alu elements, which are usually around 300 nucleotides long. Some Alu elements secondarily acquired the genes necessary to hop around in the genome, probably through recombination with LINEs. Others simply duplicate or delete by means of unequal crossovers during cell divisions. More than one million copies of Alu elements, often interspersed with each other, are found in the human genome, mostly in the non-coding sections. Many Alu-like elements, however, have been found in the introns of genes; others have been observed between genes in the regions responsible for gene regulation, and still others are located within the coding parts of genes. In this way SINEs affect the expression of genes and induce variation. Alu elements are often mediators of unequal homologous recombinations and duplications.18
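A quick back-of-envelope check of these copy numbers and genome fractions is instructive. The short Python sketch below assumes a round haploid genome size of 3.2 billion base pairs (a standard approximation, not a figure from the text) and derives the average element length implied by the counts quoted above; the result, a few hundred base pairs per copy, fits the fact that most genomic copies are truncated fragments rather than full-length elements:

    GENOME_BP = 3.2e9   # assumed haploid human genome size (round figure)

    # (copy count, fraction of the genome) as quoted in the text
    classes = {
        "ERVs":  (300_000,   0.08),
        "LINEs": (900_000,   0.21),
        "SINEs": (1_000_000, 0.11),
    }

    for name, (count, fraction) in classes.items():
        avg_len = GENOME_BP * fraction / count
        print(f"{name}: ~{avg_len:,.0f} bp per copy on average")
    # On these assumptions: ERVs ~850 bp, LINEs ~750 bp, SINEs ~350 bp,
    # far shorter than a full-length 4,000-6,000 nt LINE.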
Figure 2. Schematic view of the central role VIGEs may play in generating variation, adaptations and speciation events. Lower part: VIGEs may directly modulate the output of (morpho)genetic algorithms due to position effects. Upper part: VIGEs that are located on different chromosomes may be the result of speciation events, because their homologous sequences facilitate chromosomal translocations and other major karyotype rearrangements.

Repetitive triplet sequences (RTSs) present in the coding regions of proteins are a class of VIGEs that cannot actively transpose. RTSs are usually found as an intrinsic part of the coding region of proteins. For instance, RTSs can be formed by a tract of glycine (GGC), proline (CCG) or alanine (GCC) codons. Usually RTSs form a loop in the messenger (m)RNA that provides a docking site for chaperone molecules or proteins involved in mRNA translation. RTSs may increase or decrease in length through slippage of DNA polymerases during DNA replication.
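How polymerase slippage can make such repeat tracts grow and shrink is easy to caricature in code. The following Python sketch is a toy model only; the tract length, slippage probability and random seed are invented for the illustration and do not correspond to any measured gene:

    import random

    def replicate_tract(n_units, slip_prob, rng):
        # Copy a tract of n_units repeat codons; with probability slip_prob
        # the polymerase slips and one unit is gained or lost.
        if n_units > 1 and rng.random() < slip_prob:
            return n_units + rng.choice([-1, +1])
        return n_units

    rng = random.Random(7)
    tract = 10                      # e.g. ten GCC (alanine) codons
    history = [tract]
    for _ in range(20):
        tract = replicate_tract(tract, slip_prob=0.3, rng=rng)
        history.append(tract)
    print("tract length over 20 replications:", history)

Because a lost unit can be regained by duplication of a remaining one, the change is reversible, unlike an ordinary point mutation.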
Conclusions and outlook
Now that we have redefined ERVs as a specific class of VIGEs, which were present in the genomes from the day they were created, it is not difficult to see how RNA viruses came into being. RNA viruses have emerged from VIGEs. ERVs, LINEs and SINEs are the genetic ancestors of RNA viruses. Darwinists are wrong in promoting ERVs as remnants of invasions of RNA viruses; it is the other way around. In my opinion, this view is supported by several recent observations. RNA viruses contain functional genetic elements that help them to reproduce like a molecular parasite. Usually, an RNA virus contains only a handful of genes. Human immunodeficiency virus (HIV), the agent that causes AIDS, contains only eight or nine genes. Where did these genes come from? An RNA world? From space? The most parsimonious answer is: the RNA viruses got their genes from their hosts. The Rous sarcoma virus (RSV), which has the ability to cause tumours, has only four genes: gag, pol, env and src. In addition, the virus is flanked by a set of repeat sequences that facilitate integration and promote replication. Gag, pol and env are genes commonly present in ERVs. The src gene of RSV is a modified host-derived src gene that normally functions as a tyrosine kinase, a molecular regulator that can be switched on and off in order to control cell proliferation. In the virus, the regulator has been reduced to an on-switch only, one that induces uncontrolled cell proliferation. The src gene is not necessary for the survival of RSV, and RSV particles can be isolated that have only the gag, pol and env genes. These have perfectly normal life cycles, but do not cause tumours in their host. It is clear the virus picked up the src gene from the host. Why wouldn't the whole vector be derived from the host? VIGEs may easily pick up genes or parts thereof as the result of an accidental polymerase II read-through. This will increase the genetic content of the VIGE, because the gene located next to the VIGE will also be incorporated. An improper excision of VIGEs may also include extra genetic information. Imagine, for instance, HERV-K, a well-known human-specific endogenous retrovirus, transposing itself to a location in the genome where it sits next to the src gene. If in the next round of transposition a part of the src gene were accidentally added to the genes of HERV-K, it would already have transformed into a fully formed RSV (see figure 3). It can be demonstrated that most RNA viruses are built of genetic information directly related to that of their hosts.
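The read-through/improper-excision idea lends itself to a toy illustration. In the Python sketch below, the locus layout and every name are invented for the example; the point is only that an excision window that overruns the element's boundary automatically carries neighbouring host sequence along, paralleling the proposed capture of src:

    # Toy model of gene capture by improper excision / read-through.
    # All names and the locus layout are invented for the example.
    host_locus = ["promoter", "VIGE", "src_exon1", "src_exon2", "other_gene"]

    def excise(locus, first, last):
        # Return the mobilized unit: the slice from element `first`
        # through element `last`, inclusive.
        i, j = locus.index(first), locus.index(last)
        return locus[i:j + 1]

    proper = excise(host_locus, "VIGE", "VIGE")       # ['VIGE']
    sloppy = excise(host_locus, "VIGE", "src_exon1")  # ['VIGE', 'src_exon1']
    print("precise excision:", proper)
    print("read-through capture:", sloppy)

The second, sloppy excision carries part of the host's src gene along with the element, and each subsequent copy-paste round would propagate the enlarged unit.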

Figure 3. RNA viruses originate from VIGEs through the uptake of host genes. In the controlled and regulated context of the host DNA, genes and VIGEs are harmless. A combination of a few genes integrated in VIGEs may start an uncontrolled replication of VIGEs. In this way, VIGEs may take up genes that serve to form the virus envelope (to wrap up the RNA molecule derived from the VIGE) and genes that enable them to leave and re-enter host cells. Once VIGEs become full-blown shuttle vectors between hosts, they act as virulent, devastating and uncontrolled replicators. Hence, harmless VIGEs may degenerate into molecular parasites in a similar way to how normally harmless cells turn into tumours once they lose the power to control cell replication. VIGEs are at the basis of RNA viruses, not the other way around. The scheme outlined here shows how the Rous sarcoma virus (RSV) may have formed from a VIGE that integrated the env gene and part of the src gene (a proto-oncogene; for details see text).

The outer membranes of influenza viruses, for instance, are built of hemagglutinin and neuraminidase molecules. Neuraminidase is a protein that can also be found in the genomes of higher host organisms, where it serves to modify glycopeptides and oligosaccharides. In humans, neuraminidase deficiency leads to neurodegenerative lysosomal storage disorders: sialidosis and galactosialidosis.19 Even so-called orphan genes, genes that are only found in viruses, can usually be found in the host genomes. Where? In VIGEs! To become a shuttle-vector between organisms, all that is required is the right tools to penetrate and evade the defenses of the host cell. HIV, for instance, acquired part of the gene of the host's defence system (the gp120 core) that binds to the human beta-chemokine receptor CCR5.20 These observations make it plausible that all RNA viruses have their origin in the genomes of living cells through recombination of host DNA elements (genes, promoters, enhancers). Every now and then such an unfortunate recombination produces a molecular replicator: it is the birth of a new virus. Once the virus escapes the genome and acquires a way to re-enter cells, it has become a fully formed infectious agent. It has long been known that bacteria use genes acquired from bacteriophages, i.e. bacterial viruses that insert their DNA temporarily or even permanently into the genome of their host, to gain reproductive advantage in a particular environment. Indeed, work reaching back decades has shown that prophage (the integrated virus) genes are responsible for producing the primary toxins associated with diseases such as diphtheria, scarlet fever, food poisoning, botulism and cholera. Diseases are secondary, entropy-facilitated phenomena. Virologists usually explain the evolution of viruses as recombination: that is, a mixing of pre-existing viruses, a reshuffling and recombination of genes.21 In bacteria, viruses may therefore be recombined from plasmids carrying survival genes and/or transposable genetic elements, such as IS elements.
Discussion
Where did all the big, small and intermediate noses come from? Why are people tall, short, fat or slim? What makes morphogenetic programs explicit? The answer may be VIGEs. It may turn out that the created kinds were designed with baranomes that had an ability to induce variation from within. This radical view implies that the baranome of man may have been designed to contain only one morphogenetic algorithm for making a nose. But the program was implicit. The program was designed in such a way that a VIGE easily integrated into it, becoming a part of it, hence making the program explicit. Most inheritable variation we observe within the human population may be due to VIGEs, elements that affect morphogenetic and other programs of baranomes. It should be noted that a huge part of the genomic sequences are redundant adaptors, spacers, duplicators, etc., which can be removed from the genome without major effects on reproductive success (fitness). In bacteria, VIGEs have been coined IS elements; in plants they are known as transposons; and in animals, they are called ERVs, LINEs, SINEs, and microsatellites. What these elements are particularly good at is inducing genomic variation. It is the copy number of VIGEs and their position in the genome that determine gene expression and the phenotype of the organism. Therefore, these transposable and repetitive elements should be renamed after their function: variation-inducing genetic elements. VIGEs explain the variations Darwin referred to as due to chance. I will address the details of a few specific classes of VIGEs, and argue why modern genomes are literally riddled with VIGEs, in a future article. With the realization that RNA viruses have emerged from VIGEs, the RNA virus paradox is solved. For many mainstream scientists this solution will be bothersome, because VIGEs were frontloaded elements of the baranomes of created kinds, and that implies a young age for their common ancestor and that all life is of recent origin.
The design of life: part 4 – variation-inducing genetic elements and their function
by Peter Borger
Endogenous retroviruses (ERVs) are claimed to be the selfish remnants of ancient RNA viruses that invaded the cells of organisms millions of years ago and now merely free-ride the genome in order to be replicated. This selfish gene thinking still dominates the public scene, but well-informed biologists know that the view among researchers is rapidly changing. Increasingly, ancient RNA viruses and their remnants are being thought of as having played (and still playing) a significant role in protein evolution, gene structure, and transcriptional regulation. As argued in part 3 of this series of articles, ERVs may be the executors of genetic variation, and qualify as specifically designed variation-inducing genetic elements (VIGEs) responsible for variation in higher organisms. VIGEs induce variation by duplication and transposition, and may even rearrange chromosomes. This extraordinary claim requires extraordinary scientific support, which is presented throughout this paper. In addition, the VIGE hypothesis may be a framework to understand the origin of diseases and to explain rapid speciation events through facilitated chromosome swapping.

The idea that mobile genetic elements are involved in creating variation is not new. Barbara McClintock, who discovered the first mobile genetic elements in maize, was also the first to recognize the true nature of such jumping genetic elements. In 1956, she suggested that transposons (as she coined them) function as molecular switches that could help determine when nearby genes turn on and off. Her key insight was that all living systems have mechanisms available to restructure and repair the chromosomes. When it was discovered that more than half of the human genome consists of (remnants of) mobile elements, McClintock's ideas were revived and further developed by Roy Britten and Eric Davidson.1 It is only recently that we have begun to understand the power of VIGEs (variation-inducing genetic elements) as genetic regulators and switches. A team of investigators led by Haussler recently provided direct evidence that even when a short interspersed nucleotide element (SINE) lands at some distance from a gene, it can take on a regulatory role with powerful regulatory functions.2 Haussler and his colleagues then looked at a particular example: a copy of the ultra-conserved element that is near a gene called Islet 1 (ISL1). ISL1 produces a protein that helps control the growth and differentiation of motor neurons. In the laboratory of Edward Rubin at the University of California, Berkeley, postdoctoral fellow Nadav Ahituv combined the human version of the LF-SINE sequence with a reporter gene that would produce an easily recognizable protein if the LF-SINE were serving as its on-off switch. He then injected the resulting DNA into the nuclei of fertilized mouse eggs. Eleven days later, he examined the mouse embryos to see whether and where the reporter gene was switched on. Sure enough, the gene was active in the embryos' developing nervous systems, as would be expected if the LF-SINE copy were regulating the activity of ISL1.3

This excerpt shows that some functions of SINEs are
easily uncovered because they directly affect the expression of a particular gene. However, most functions of SINEs may not be as easily detected as described above, because they can integrate in gene deserts (regions of the genome where the chromosomes are devoid of any recognizable protein-coding genes) or they may only subtly affect the expression of morphogenetic programs. Gene expression patterns largely determine how cells behave and determine the morphology of organisms. VIGEs integrated in such genetic programs will change expression patterns of genes, which will result in different cellular behaviour and morphology. Whether the ultimate effect on the phenotype of the organism can be predicted, however, remains to be established. This is largely because we still do not know what morphogenetic algorithms look like. Of course, biologists have argued that evolution and development are determined by homeobox (HOX) genes, but HOX genes are merely executors of developmental (or morphogenetic) programs; they are not the programs themselves.

In another study by the same group, thousands of short identical DNA sequences that are scattered throughout the human genome were analyzed. Many of those sequences were located in gene deserts, which are in fact so clogged with regulatory DNA elements that they have recently been renamed regulatory jungles. But what do they regulate? The answer could be morphogenesis. Most of the short DNA elements cluster near genes that play a decisive role during an organism's first weeks after conception. The elements help to orchestrate an intricate choreography of when-and-where developmental genes are switched on and off as the organism lays out its body plan. These elements may provide a sort of blueprint for how to build the animal. The exact mechanism by which such sequences may function as a plan to build an animal is not entirely clear, but the DNA elements are particularly abundant near genes that help cells to stick together. That stickiness is important in an organism's early life phase, because these genes help cells to migrate to the right location and to form organs and tissues of the correct shape. The 10,402 short DNA sequences studied by Bejerano are derived from
transposable genetic elements: retrotransposons that duplicate themselves and hop around the genome. Apparently, transposable genetic elements are not what they have been mistakenly thought to be: mess makers. Indeed, the view that transposable elements are just bad stuff is rapidly changing. In an interview with Science Daily, Bejerano says:

We used to think they were mostly messing things up. Here is a case where they are actually useful.4

The genome is literally littered with thousands of transposable elements. The word is that when ancient retroviruses slipped bits of their DNA into the primate genome millions of years ago, they successfully preserved their own genetic legacy.5 It is hard to imagine that they all have functions, but their presence could certainly determine or fine-tune the output of nearby genes. In this way they may create subtle, but novel, variation. Bejerano and Haussler's research has already identified a handful of transposons that serve as regulatory elements, but it is not clear how common the phenomenon might be. The 2007 study showed that the phenomenon may be a general one:

Now we've shown that transposons may be a major vehicle for evolutionary novelty.4
The new findings indeed show that, in many cases, transposable elements function as regulators of gene output, but major
vehicles for evolution from microbe to man they are not. The transposition of jumping genetic elements may certainly affect
gene expression patterns, but it does not follow that they produce new genetic information. Considering the biological data,
it seems reasonable that transposable elements are present in the genome to deliberately induce biological variation.
Transposable elements thus qualify as variation-inducing genetic elements (VIGEs), and by leaving copies, they make sure
the new variation is heritable. The transposable elements present in regulatory jungles do not produce new biological
information, but they induce variation in the genetic algorithms and may underlie rapid adaptive radiation from uncommitted
pluripotent genomes. The regulatory jungles may provide an active reservoir of VIGEs that put existing genes in new
regulatory environments.
Regulated activity of VIGEs
The chromosome of the E. coli strain K12 includes three cryptic operons (linear genetic programs for metabolizing three alternative sugars): one for cellobiose, one for arbutin and one for salicin. The organization of these operons is like that of a normal substrate-induced bacterial operon, but the operons themselves are abnormal in that they are cryptic (silent) in wild-type strains. Even in the presence of the alternative sugars the operons are not activated, which indicates that these bacteria don't readily use alternative sugars. Unused cryptic operons are redundant genetic programs that are not observed by natural selection:

As cryptic genes are not expressed to make any positive contribution to the fitness of the organism, it is expected that they would eventually be lost due to the accumulation of inactivating mutations. Cryptic genes would thus be expected to be rare in natural populations. This, however, is not the case. Over 90% of natural isolates of E. coli carry cryptic genes for the utilization of beta-glucoside sugars. These cryptic operons can all be activated by IS [insertion-sequence] elements, and when so activated allow E. coli to utilize beta-glucoside sugars as sole carbon and energy sources.6

The excerpt shows that the operons are kept inactive by repressors; that is, proteins that sit on the DNA of the operon to ward off the nanomachines responsible for gene expression. The operons will only be active in bacteria that don't have a functional gene coding for the repressors. Disrupting the repressor gene releases the cryptic programs. That's where the VIGEs come in. The transposition and integration of an IS element into the silencer elements is the mutational event that activates the cryptic operon. Usually, the lack of an appropriate carbon and energy source triggers transposition of IS elements. The transposition of IS elements appears to be regulated by starvation, and the integration in the repressor gene is not utterly random. For instance, position 472 in the ebgR gene of the ebg operon of E. coli is a hotspot for integration of IS elements, but only under starvation conditions. VIGEs may thus accumulate and integrate at well-defined positions in the genome; this indicates a site-specific mechanism.

In the fruit fly, some non-LTR retrotransposons (retrotransposons lacking long terminal repeats) integrate at very specific sites, but others have been shown to integrate more or less at random. The specificity is determined by endonucleases, enzymes that cut the DNA.7 Assuming VIGEs are part of a designed genome, we must
expect that their transposition and activity can be controlled and regulated. To avoid deleterious effects on the host and the retrotransposon, we may expect that the activity of VIGEs is regulated both by retrotransposon- and host-encoded factors. Indeed, the mechanism of transposition seems to be dictated by the species in which the VIGEs operate. Recent research has shown that in zebrafish the transposable element known as NLR integrant usually carries a few extra nucleotides at the far end of the sequence, but it is not expressed in human cells.8 This observation would argue for the involvement of host-specific protein machinery in transposition, one more argument for the design origin of VIGEs.

From the design perspective, we may expect that the activity of VIGEs used to be a tightly controlled process. This is because the genomes in which they operate also specify control factors: retroviral restriction factors. The restriction factors are proteins with the ability to bind to retroviral capsid proteins and target them for degradation. Several restriction factors have been identified, including Fv1, Trim5-alpha and Trim5-CypA.9 These factors share the common property of containing sequences that promote self-association: that is, they can assemble themselves. This fact, together with the observation that the restriction factors are encoded by unrelated genes, is clear evidence of purposeful design. Retroviral restriction factors play an important role in innate immunity against invading RNA viruses. For instance, Trim5-alpha binds directly to the incoming retroviral capsid core and targets it for premature disassembly or destruction.10 In addition, some integrated VIGEs show evolutionary-tree deviations, indicating a sequence-specific integration/excision mechanism. For instance, Alu HS6 is present in human, gorilla and orangutan, but not in chimpanzee (see figure 1). This highly peculiar observation prompted the investigators to consider the possibility of the specific excision of this Alu element from the chimpanzee's genome.11 Precise excision implies precise integration.

Figure 1. The Alu HS6 insertion sites in human, chimpanzee, gorilla, orangutan and owl monkey. Note the complete
absence in chimpanzee and owl monkey of any evidence for an extraction site. This suggests a highly specific mechanism
for integration and/or extraction. Alternatively, the sequences are a molecular falsification of the common descent of
primates.
Synthetic biologists at Johns Hopkins University have built, from scratch, a LINE1-based retrotransposon: a genetic element capable of jumping around in the mouse genome. The man-made retrotransposon was designed to be a far more effective jumper than natural retrotransposons; indeed, it inserts itself into many more places in the genome.12,13 Why don't all LINEs jump so effectively? The scientists who constructed the synthetic LINE changed the regulator sites used in transposition. Native LINE1 elements are relatively inactive in mice when they are introduced into the mouse genome as transgenes. The synthetic LINE1-based element, ORFeus, contains two synonymously recoded ORFs relative to mouse L1 and is far more active. This indicates that the integration and excision of native LINE1 elements are controlled and regulated by an as yet unknown mechanism.

VIGEs qualify as redundant genetic elements that can simply be erased from the genome without fitness effects. As long as VIGEs do not upset critical genomic functions and do not affect the reproductive success of the carrier, they are selectively neutral. Therefore, not only VIGEs, but also the mechanisms by which they integrate, may readily wither and degrade due to accumulation of debilitating mutations. The control over integration and activity we observe today may be less stringent compared to how it was originally designed. The originally fine-tuned control for excision and transposition may have deteriorated over time, and what is left today are more or less freely moving elements that may predominantly cause havoc when they integrate in the wrong location. It is easy to understand how, for instance, endonucleases became less specific through mutations. This view may also explain why VIGEs are often found associated with heritable diseases. As long as VIGE activity and integration do not significantly affect the fitness of the organisms in which they operate, they are free to copy and paste themselves along the genome. Indeed, inactivating VIGEs have been observed in genes not immediately required for reproduction. The GULO gene, which qualifies as a redundant gene in populations with high vitamin C intake, has been hit several times by VIGEs, and this may have contributed to the pseudogenization of GULO in humans.14

Over time, VIGEs may have become increasingly detrimental to the host's genome.
That is because the information that regulates the integration and activity of VIGEs is itself subject to mutation. Some VIGEs have been associated with susceptibility or resistance to diseases. In asthma, increased susceptibility appears to be associated with microsatellite DNA instability (a term used for copy-number differences in repetitive DNA sequences).15 Psoriasis is also associated with HERV expression.16 It should be clear that deregulated and uncontrolled VIGEs cause havoc when they integrate into and disrupt functional parts of genes.

From the vantage point of design, VIGE transpositions would make sense during meiosis, which is the process leading to the formation of gametes. Controlled activity of VIGEs during meiosis may be responsible for variation that can be passed on to the offspring. Although information is scant, it has been shown in fungi17 and plants18 that VIGEs become active during meiosis and even have mechanisms to silence deleterious bystander effects, such as deleterious point mutations.17 This shows that transposable elements function to induce genetic variation, providing the flexibility for populations to adapt successfully to environmental challenges. In chimpanzees, for instance, it has been documented that large blocks of compound repetitive DNA, which have demonstrated retrotransposon function, induce and prolong the bouquet stage in meiotic prophase and affect chiasma formation.19 This may seem like a mouthful, but it merely means that these repetitive genetic elements facilitate sister-chromosome exchanges when the reproductive cells (sperm and eggs) are being generated. Mammalian VIGEs, in particular Alu sequences, have the ability to induce genetic recombination and duplications and to contribute to chromosomal rearrangements, and they may account for the major part of variation observed in humans. The methylation pattern of Alu sequences possibly determines their activity and/or serves as a marker for genomic imprinting or for maintaining differences in male and female meiosis.21
VIGEs and the human family
When short triplet repeat units are present in the coding part of a gene, they may even have functional consequences. There is evidence that repeat units in the Runx2 gene formed the bent snout of the Bull Terrier in a few generations.22 Likewise, in mice and dogs, having five or six toes is determined by a repeat unit in the Alx4 gene.23 These novel phenotypes can form almost overnight, i.e. within one generation. Repetitive coding triplets that can be gained or lost provide another mechanism to generate (instant) variation. It should be noted that this mechanism leads to reversible genetic change, because a lost repetitive unit can readily be added back through duplication of a pre-existing one, and vice versa. Therefore, the RTS mechanism may explain the seasonal changes in beak size observed for Galapagos finches, adaptive phenotypes in Australian snakes, and the evolution of the cichlid varieties in African lakes.

If we accept the idea of deliberately designed VIGEs, we may also expect these elements to have played an important role in determining the variety of human phenotypes. In other words, human races are the result of the activity of VIGEs! Biologists used to think that our genomes all had the same basic structure: the same number of genes, in roughly the same order, with a few minor differences in the sequence of DNA bases. Now, technologies that compare whole human genomes are revealing that this picture is far from complete. Michael Wigler at Cold Spring Harbor Laboratory provided the first evidence that human genomes are strikingly variable: his group showed marked differences in the copy number of protein-coding genes.24 Apparently, some people have more copies of certain genes, and large-scale copy number polymorphisms (CNPs) (about 100 kilobases and greater) contribute substantially to genomic variation between individuals.25 In addition, people not only carry different copy numbers of parts of their DNA, they also have varying numbers of deletions, insertions and other major rearrangements in their genomes.

In 2005, Evan Eichler of the University of Washington reported 297 locations in the genome where different individuals have different forms of major structural variations. At these spots some carry a major deletion, for example, or an extra hundred bases of DNA. Differences between individuals were found in the protein-coding genes; structural differences were also observed between individual genomes.26 From these and other studies we now know that every one of us shares only about 99% of our DNA with all the other people on Earth.27 The difference is due to repetitive sequences that easily amplify or delete parts of the genome. With this, we have discovered another class of VIGEs. The highly variable repetitive sequences also explain why genetic screening methods are so reliable nowadays: they detect copy-number differences and hence are capable of discriminating between the DNA of a father and his son. Yes, fathers and sons apparently differ at the level of VIGEs!

A comparison of Asian and Caucasian people showed that 25% of more than 4,000 protein-coding genes had significantly different expression patterns. Some gene expression levels differed by as much as twofold.28 The researchers commented that these findings support the idea that there are genetically determined characteristics that tend to be clustered in different ethnic groups. Some genes are simply not expressed at all, or are simply not present in the genomes. For instance, the gene UGT2B17 is deleted more often in Asians than in Caucasians, and has a mean expression level more than 20 times greater in Caucasians relative to Asians. How can such big differences be explained? Of course, single nucleotide polymorphisms (SNPs, i.e. point mutations) in regulatory sequences could affect gene regulation patterns. It is not clear, however, whether the SNPs themselves might be regulating gene expression or whether they travel together with other DNA that is the regulator. We may also expect VIGEs to be responsible for differences observed between human races.
VIGEs and chromosome 2
Human chromosome 2 looks as if it is the product of the fusion of two chromosomes that we find in chimpanzees as chromosomes 12 and 13. Therefore, some Darwinists take human chromosome 2 as the ultimate evidence for common descent with chimpanzees. We know that a fusion of two ancestral chromosomes would have produced human chromosome 2 with two centromeres. Currently, human chromosome 2 has only one centromere, so there must be molecular evidence for remnants of the other. In 1982, Yunis and Prakash studied the putative fusion site of chromosome 2 with a technique known as fluorescence in situ hybridization (FISH) and reported signs of the expected centromere.29 In 1991, another study also reported signs of the centromere.30 In 2005, after the complete sequencing of human chromosome 2, we would have expected full proof of the ancestral centromere. However, even after intense scrutiny there are still only signs of the centromere. If signs of the centromere were already observed in 1982, why can it not be proved in the 2005 sequence analysis? Apparently, the site mutated at such high speed that it is no longer recognizable as a centromere:

During the formation of human chromosome 2, one of the two centromeres became inactivated (2q21, which corresponds to the centromere of chromosome 13) and the centromeric structure quickly deteriorated.31

Why would it quickly deteriorate? Why would this region deteriorate faster than neutral? Close scrutiny in 2005 showed the region that has been interpreted as the ancestral centromere to be built from sequences present in 10 additional human chromosomes (1, 7, 9, 10, 13, 14, 15, 18, 21 and 22), as well as a variety of other genetic repeat elements that were already in place before the fusion occurred.31 The sequences interpreted as the ancient centromere are merely repetitive sequences and may actually qualify as (deregulated) VIGEs.

The chimpanzee and human genome projects demonstrated that the fusion did not result in loss of protein-coding genes. Instead, the human locus contains approximately 150,000 additional base pairs not found in chimpanzee chromosomes 12 and 13 (now also known as 2A and 2B). This is remarkable: why would a fusion result in more DNA? We would rather have expected the opposite: the fusion would have left the fused product with less DNA, since loss of DNA sequences is easily explained. The fact that humans have a unique 150 kb intervening sequence indicates it may have been deliberately planned (or designed) into the human genome. It could also be proposed that the 150 kb DNA sequence demarcating the fusion site may have served as a particular kind of VIGE, an adaptor sequence for bringing the chromosomes together and facilitating the fusion in humans.

Another remarkable observation is that in the fusion region we find an inactivated cobalamin synthetase (CBWD) gene.32 Cobalamin synthetase is a protein that, in its active form, has the ability to synthesize vitamin B12 (a crucial cofactor in the biosynthesis of nucleotides, the building blocks of DNA and RNA molecules). Deficiency during pregnancy and/or early childhood results in severe neurological defects because of impaired development of the brain. The Darwinian assumption is that the cobalamin synthetase gene was donated by bacteria a long time ago and was afterwards inactivated. Nowadays, humans must rely on microorganisms in the colon, as well as dietary intake (a substantial part coming from meat and milk products), for their vitamin B12 supply. It is also noteworthy that humans have several copies of inactivated cobalamin-synthetase-like genes at a number of locations in the genome, whereas chimpanzees have only one inactivated cobalamin synthetase gene. That the fusion must have occurred after man and chimp split is evident from the fact that the fusion is unique to humans:

Because the fused chromosome is unique to humans and is fixed, the fusion must have occurred after the human-chimpanzee split, but before modern humans spread around the world, that is, between 6 and 1 million years ago.32
The molecular analyses show we are more unique than we ever thought we were, and this is in complete accordance with creation. Apparently, the fusion of the two human chromosomes may have been the result of an intricate rearrangement or activation of repetitive genetic elements after the Fall (as part of, or executors of, the curse following the Fall), which inactivated the cobalamin synthetase gene. The inactivation of the gene may have reduced people's longevity in a similar way as the inactivation of the GULO gene, which is crucial to vitamin C synthesis.14 Understanding the molecular properties of human chromosome 2 is no longer problematic if we simply accept that humans, like the great apes, were originally created with 48 chromosomes. Two of them fused to form chromosome 2 when mankind went through a severe bottleneck.33 And, as argued above, the fusion was mediated by VIGEs (see figure 2).
Figure 2. Putative mechanism for how human chromosome 2 formed through the fusion of two ancestral chromosomes, p2 and q2 (which are similar to chimpanzee chromosomes 12 and 13). Like the great apes, the human baranome may originally have contained 48 chromosomes. A) Independent transposition events may have led to the integration of a relatively small variation-inducing genetic element (VIGE). B) Extended duplication events of the VIGE may have resulted in rapid expansion of the region in both p2 and q2, preparing it to become an adapter sequence required for fusion. C) The expanded homologous regions align and facilitate the fusion of the chromosomes. The fusion region (2q21) and other parts of the modern human genome still show the remnants of this catastrophic event that occurred only in humans: the cobalamin synthetase gene was inactivated, and several inactive copies, which are not found in the chimpanzee, are scattered throughout the genome. Speculative note: before the great flood, and probably shortly after, a balancing dynamics of both 48 and 46 chromosomes may have been present in the human family. This may explain the two extreme cranial morphologies present in the human fossil record. The Homo erectus/Neandertal humans may have had a karyotype comprised of 48 chromosomes (non-fused p2 and q2), whereas the other humans had 46 (fused p2 and q2).
The upside-down world
The p53 protein is a mammalian transcription factor that functions as the main switch controlling whether cells divide or go into apoptosis (programmed cell death, which is sometimes required for severely damaged cells that may become tumours). Scientists have long wondered how p53 gained the ability to turn on and off more than 1,200 genes related to cell division, DNA repair and programmed cell death. Without the p53 control system organisms would not function: all life would have died as bulky tumours.

Biologists at the University of California now claim that ancient retroviruses helped p53 to become an important master gene regulator in primates.34 An RNA virus invaded the genome of our common ancestor, jumped into hundreds of new positions throughout the human genome and spread numerous copies of repetitive DNA sequences that allowed p53 to regulate many other genes, the team contends. Studies such as these prompted Darwinians to change their minds about jumping genetic elements. In other words, a randomly hopping ERV provided the human genome with carefully regulated decision-making machinery. The idea is beyond reasonable belief. Darwinists tend to mix things up. What really happened in the human genome is a polymerase II read-through of a VIGE that was next to a gene that already contained a binding site for p53. Or maybe the VIGE was excised improperly, taking along a bit of a flanking gene containing the p53 binding site. Next, the modified VIGE amplified, transposed, amplified and so on. That explains the origin of this family of transposons. A similar story can be told for the syncytin gene, which encodes a protein of the mammalian placenta that helps the fertilized egg to become embedded in the uterus wall. Since syncytin has also been found on a transposable element,35 mammals are alleged to have obtained the gene from an RNA virus that infected a mammalian ancestor millions of years ago. It is more likely, however, that syncytin was captured by a VIGE.

In bacteria it is often observed that genes that convey a specific advantageous character are transmitted via plasmids. Plasmids often contain genes for alternative metabolic routes or genes that provide resistance to antibiotics, and they replicate independently from the host's genome. Plasmids easily shuttle between microorganisms via a DNA uptake-process known as transformation (or horizontal gene transfer). The uptake of plasmids is regulated and controlled, and is DNA sequence dependent. The result of DNA transformations is rapid adaptation to, for instance, antibiotics. Likewise, viruses replicate independently from the genomic DNA, leaving many copies and easily transferring from one organism to another. Viruses are not plasmids, but some viruses may have a function in higher organisms similar to that of plasmids in bacteria: they may be able to aid in rapid adaptations to changing environments. It has been observed that a virus can indeed transfer an adaptive phenotype. A virus present in the fungus Curvularia protuberata can induce heat resistance in tropical panic grass (Dichanthelium lanuginosum), allowing both organisms to grow at high soil temperatures in Yellowstone National Park. This shows that viruses still provide strategies for rapid adaptation:

Fungal isolates cured of the virus are unable to confer heat tolerance, but heat tolerance is restored after the virus is reintroduced. The virus-infected fungus confers heat tolerance not only to its native monocot host but also to a eudicot host, which suggests that the underlying mechanism involves pathways conserved between these two groups of plants.36

In fruit flies, wing pigmentation depends on a gene known as yellow. The gene exists in the genome of all individual fruit flies, but in some it is not active. By analysing the genetic origin of the spots on fruit fly wings, researchers have discovered a molecular mechanism that explains how new patterns of pigmentation can emerge. The secret appears to be specific genetic elements that orchestrate where proteins are used in the construction of an insect's body. The segments do not code for proteins, but rather regulate the nearby gene that specifies the pigmentation. As such, these regulatory DNA segments qualify as VIGEs. The researchers transferred the regulatory DNA segment from a spotted species (Drosophila biarmipes) into another species not expressing the spot (D. melanogaster), and attached the regulatory region to a gene for a fluorescent protein. They found that the fluorescent gene was expressed in the spot-free species in exactly the same patterns as the yellow gene is expressed in the spotted species. By comparing several spotted and spot-free species, the scientists established that mutation of a regulatory DNA segment led to the expression of the spotted trait. They discovered that in the species with spotted wings this regulatory segment has multiple binding sites for a protein that then activates the yellow gene. Spotless species do not have multiple binding sites.37 The multiplicity of regulatory DNA segments may argue for an amplification mechanism or targeted integration of the regulatory sequence. That would explain why the same pattern of pigmentation can emerge independently in distantly related species (Darwin's analogous variation). The observed shuttle function of viruses leads me to pose an intriguing question: were endogenous retroviruses originally designed to serve as shuttle-vectors to deliver messages from the soma to the germ-line? If yes, then it would put Lamarckian evolution in an entirely new perspective.
Discussion
The findings of the new biology demonstrate that mainstream scientists are wrong regarding the idea that transposable elements are the selfish remnants of ancient invasions by RNA viruses. Instead, RNA viruses originate from transposable elements that were designed as variation-inducing genetic elements (VIGEs). Created kinds were deliberately frontloaded with several types of controlled and regulated transposable elements to allow them to rapidly invade and adapt to all corners and crevices of the earth. Due to the redundant character of VIGEs, their controlled regulation may have readily deteriorated, and some of them may now merely cause havoc. The VIGE hypothesis provides elegant explanations for several biological observations that may otherwise be difficult to interpret within the creationist framework, including the origin of diseases (RNA viruses) and chromosome rearrangements. The VIGE hypothesis may be a framework for extended creationist research programs. Some intriguing questions can already be raised.

Were VIGEs intentionally designed to cause diseases? No, they were not. It is conceivable that the transposition and integration of VIGEs is not entirely random. The transposition of VIGEs may have been originally present in the baranome as controlled and regulated elements, activated upon intrinsic or external triggers. To induce variation in offspring, triggers for the transposition of VIGEs could be released during meiosis, when the reproductive cells are being produced. The emergence of RNA viruses from VIGEs may be a result of the Fall, when we were cut off from the regenerating, healing power of the designer.

Why are some VIGEs located at the exact same position in primates and humans? Each original baranome must have had a limited number of VIGEs, some of which we still find at the same location in distinct species. In distinct baranomes, VIGEs may have been located at the exact same positions (the T-zero location), which then explains why some VIGEs, such as ERVs, can be found in the same location in, for instance, primates and humans. In addition, sequence-dependent integration of VIGEs may also contribute to this observation.

How could Bdelloid rotifers, a group of strictly asexually reproducing aquatic invertebrates, rapidly form novel species? Asexual production of progeny, as observed in Bdelloids, is found in over one half of all eukaryotic phyla and is likely to contribute to adaptive changes, as suggested by recent evidence from both animals and plants.38 The Bdelloids may have been derived from pluripotent baranomes containing numerous DNA transposons and retroelements, including active LTR retrotransposons containing gag-, pol- and env-like open reading frames.39 These elements are able to reshuffle the genomes and facilitate instant variation and speciation.

Do we also observe remnants of DNA viruses in the mammalian genomes? If not, this supports my idea that RNA viruses emerged from VIGEs, and implies DNA viruses have a different origin; probably, as with the Mimivirus,40 they originated from degenerated bacteria.

Why was a class of VIGEs designed with information for protein capsids? The capsid may have been acquired from the host's genome, or it may have been designed to prevent the RNA molecules from attaching themselves to, or finding, integration sites. A very speculative idea may be that these VIGEs were designed to shuttle information from the soma to the germ-line. One thing is clear, however: creation researchers have loads of work to do.
INFORMATION THEORY
Refuting Evolution – Chapter 9
A handbook for students, parents, and teachers countering the latest arguments for evolution
by Jonathan Sarfati, Ph.D., F.M.
Is the design explanation legitimate?
First published in Refuting Evolution, Chapter 9
As pointed out in previous chapters, Teaching about Evolution frequently dismisses creation as unscientific and religious. Creationists frequently point out that creation occurred in the past, so it cannot be directly observed by experimental science, and that the same is true of large-scale evolution. But evolution or creation might conceivably have left some effects that can be observed. This chapter discusses the criteria that are used in everyday life to determine whether something has been designed, and applies them to the living world. The final section discusses whether design is a legitimate explanation for life's complexity or whether naturalistic causes should be invoked a priori.

How do we detect design?

People detect intelligent design all the time. For example, if we find arrowheads on a desert island, we can assume they were made by someone, even if we cannot see the designer.1 There is an obvious difference between writing by an intelligent person, e.g. Shakespeare's plays, and a random letter sequence like WDLMNLTDTJBKWIRZREZLMQCOP.2 There is also an obvious difference between Shakespeare and a repetitive sequence like ABCDABCDABCD. The latter is an example of order, which must be distinguished from Shakespeare, which is an example of specified complexity. We can also tell the difference between messages written in sand and the results of wave and wind action. The carved heads of the U.S. presidents on Mt Rushmore are clearly different from erosional features. Again, this is specified complexity. Erosion produces either irregular shapes or highly ordered shapes like sand dunes, but not presidents' heads or writing.

Another example is the SETI program (Search for Extraterrestrial Intelligence). This would be pointless if there were no way of determining whether a certain type of signal from outer space would be proof of an intelligent sender. The criterion is, again, a signal with a high level of specified complexity; this would prove that there was an intelligent sender, even if we had no other idea of the sender's nature. But neither a random nor a repetitive sequence would be proof. Natural processes produce radio noise from outer space, while pulsars produce regular signals. Actually, pulsars were first mistaken for signals by people eager to believe in extraterrestrials, but this is because they mistook order for complexity. So evolutionists (as nearly all SETI proponents are) are prepared to use high specified complexity as proof of intelligence, when it suits their ideology. This shows once more how one's biases and assumptions affect one's interpretations of any data.3
Life fits the design criterion
Life is also characterized by high specified complexity. The leading evolutionary origin-of-life researcher, Leslie Orgel, confirmed this:

Living things are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.4

Unfortunately, a materialist like Orgel here refuses to make the connection between specified complexity and design, even though this is the precise criterion of design. To elaborate, a crystal is a repetitive arrangement of atoms, so it is ordered. Such ordered structures usually have the lowest energy, so they will form spontaneously at low enough temperatures. And the information of the crystals is already present in their building blocks; for example, directional forces between atoms. But proteins and DNA, the most important large molecules of life, are not ordered (in the sense of repetitive), but have high specified complexity. Without specification external to the system, i.e., the programmed machinery of living things or the intelligent direction of an organic chemist, there is no natural tendency to form such complex specified arrangements at all. When their building blocks are combined (and even this requires special conditions5), a random sequence is the result. The difference between a crystal and DNA is like the difference between a book containing nothing but ABCD repeated and a book of Shakespeare. However, this doesn't stop many evolutionists (ignorant of Orgel's distinction) claiming that crystals prove that specified complexity can arise naturally; they merely prove that order can arise naturally, which no creationist contests.6
Information
The design criterion may also be described in terms of information. Specified complexity means high information content. In formal terms, the information content of any arrangement is the size, in bits, of the shortest algorithm (program) required to generate that arrangement. A random sequence could be formed by a short program:

1. Print any letter at random.
2. Return to step 1.

A repetitive sequence could be made by the program:

1. Print ABCD.
2. Return to step 1.

But to print the plays of Shakespeare, a program would need to be large enough to print every letter in the right place.7
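A crude but instructive way to put numbers on this three-way distinction is lossless compression, which approximates the size of the shortest program needed to regenerate a string. The Python sketch below uses the standard-library zlib module; the sample strings are arbitrary, and compression is only a rough stand-in for true algorithmic complexity. Note also that it measures complexity, not specification:

    import random, string, zlib

    def compressed_size(text):
        # Bytes after zlib compression: a rough upper bound on the
        # algorithmic (program-size) complexity of the string.
        return len(zlib.compress(text.encode(), 9))

    repetitive = "ABCD" * 100
    random_seq = "".join(random.choice(string.ascii_uppercase)
                         for _ in range(400))
    english = ("To be, or not to be, that is the question: whether 'tis "
               "nobler in the mind to suffer the slings and arrows of "
               "outrageous fortune, or to take arms against a sea of "
               "troubles, and by opposing end them. To die, to sleep; no "
               "more; and by a sleep to say we end the heart-ache and the "
               "thousand natural shocks that flesh is heir to.")

    for label, text in [("repetitive", repetitive),
                        ("random", random_seq),
                        ("English", english)]:
        print(f"{label:10s} ({len(text)} chars): "
              f"{compressed_size(text)} bytes")

In a typical run the repetitive string collapses to a few dozen bytes, while the random and English strings stay comparatively large. Compression alone cannot tell the random string from the specified one, which is exactly why specified complexity requires the additional criterion of specification.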
The information content of living things is far greater than that of Shakespeare's writings. The atheist Richard Dawkins says: "[T]here is enough information capacity in a single human cell to store the Encyclopaedia Britannica, all 30 volumes of it, three or four times over."8 If it's unreasonable to believe that an encyclopedia could have originated without intelligence, then it's just as unreasonable to believe that life could have originated without intelligence. Even more amazingly, living things have by far the most compact information storage/retrieval system known. This stands to reason if a microscopic cell stores as much information as several sets of the Encyclopaedia Britannica. To illustrate further, the amount of information that could be stored in a pinhead's volume of DNA is staggering: it is the equivalent information content of a pile of paperback books 500 times as tall as the distance from the earth to the moon, each with a different, yet specific, content.9
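As a rough check on how such a figure is assembled, here is a back-of-envelope sketch (mine, not from reference 9; every constant below is an assumption: a 2 mm pinhead, B-DNA packing of roughly 1 nm³ per base pair, one letter per base pair, a 256,000-letter paperback 1 cm thick):

import math

PINHEAD_DIAMETER_MM = 2.0
VOLUME_PER_BP_NM3 = math.pi * 1.0**2 * 0.34   # cylinder: ~1 nm radius, 0.34 nm rise per base pair
LETTERS_PER_BOOK = 160 * 40 * 40              # pages x lines x characters (assumed)
BOOK_THICKNESS_M = 0.01
EARTH_MOON_M = 3.844e8

pinhead_nm3 = (4 / 3) * math.pi * (PINHEAD_DIAMETER_MM / 2 * 1e6) ** 3  # mm radius -> nm
base_pairs = pinhead_nm3 / VOLUME_PER_BP_NM3
books = base_pairs / LETTERS_PER_BOOK
stack_height_m = books * BOOK_THICKNESS_M
print(f"stack height ~ {stack_height_m / EARTH_MOON_M:.0f} x earth-moon distance")

With these particular assumptions the stack comes out at several hundred times the earth-moon distance, the same order of magnitude as the quoted figure; the exact multiplier depends strongly on the assumed book and packing parameters.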
Machinery in living things
On a practical level, information specifies the many parts needed to make machines work. Often, the removal of one part can disrupt the whole machine, so there is a minimum number of parts without which the machine will not work. Biochemist Michael Behe, in his book Darwin's Black Box, calls this minimum number "irreducible complexity".10 He gives the example of a very simple machine: a mousetrap. This would not work without a platform, holding bar, spring, hammer, and catch, all in the right place. If you remove just one part, it won't work at all; you cannot reduce its complexity without destroying its function entirely.

The thrust of Behe's book is that many structures in living organisms show irreducible complexity, far in excess of a mousetrap or indeed any man-made machine. For example, he shows that even the simplest form of vision in any living creature requires a dazzling array of chemicals in the right places, as well as a system to transmit and process the information. The blood-clotting mechanism also has many different chemicals working together, so we won't bleed to death from minor cuts, nor yet suffer from clotting of the entire system.
A simple cell?
Many people don't realize that even the simplest cell is fantastically complex; even the simplest self-reproducing organism contains encyclopedic quantities of complex, specific information. Mycoplasma genitalium has the smallest known genome of any free-living organism, containing 482 genes comprising 580,000 base pairs11 (compare 3 billion base pairs in humans, as Teaching about Evolution states on page 42). Of course, these genes are functional only in the presence of pre-existing translational and replicating machinery, a cell membrane, etc. But Mycoplasma can only survive by parasitizing other more complex organisms, which provide many of the nutrients it cannot manufacture for itself. So evolutionists must postulate a more complex first living organism with even more genes.

More recently, Eugene Koonin and others tried to calculate the bare minimum requirement for a living cell, and came up with a result of 256 genes. But they were doubtful whether such a hypothetical bug could survive, because such an organism could barely repair DNA damage, could no longer fine-tune the ability of its remaining genes, would lack the ability to digest complex compounds, and would need a comprehensive supply of organic nutrients in its environment.12

Molecular biologist Michael Denton, writing as a non-creationist skeptic of Darwinian evolution, explains what is involved: "Perhaps in no other area of modern biology is the challenge posed by the extreme complexity and ingenuity of biological adaptations more apparent than in the fascinating new molecular world of the cell … To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. … Is it really credible that random processes could have constructed a reality, the smallest element of which (a functional protein or gene) is complex beyond our own creative capacities, a reality which is the very antithesis of chance, which excels in every sense anything produced by the intelligence of man? Alongside the level of ingenuity and complexity exhibited by the molecular machinery of life, even our most advanced artifacts appear clumsy. … It would be an illusion to think that what we are aware of at present is any more than a fraction of the full extent of biological design. In practically every field of fundamental biological research ever-increasing levels of design and complexity are being revealed at an ever-accelerating rate."13

For natural selection (differential reproduction) to start, there must be at least one self-reproducing entity. But as shown above, the production of even the simplest cell is beyond the reach of undirected chemical reactions. So it's not surprising that Teaching about Evolution omits any discussion of the origin of life, as can easily be seen from the index. However, this is part of the General Theory of Evolution (molecules to man),14 and is often called chemical evolution. Indeed, the origin of the first self-reproducing system is recognized by many scientists as an unsolved problem for evolution, and thus evidence for a designer.15 The chemical hurdles that non-living matter must overcome to form life are insurmountable, as shown by many creationist writers.16
Can mutations generate information?
Even if we grant evolutionists the first cell, the problem of increasing the total information content remains. To go from the first cell to a human means finding a way to generate enormous amounts of information: billions of base pairs (letters) worth. This includes the recipes to build eyes, nerves, skin, bones, muscles, blood, etc. In the section on variation and evolution, we showed that evolution relies on copying errors and natural selection to generate the required new information. However, the examples of contemporary evolution presented by Teaching about Evolution are all losses of information.

This is confirmed by the biophysicist Dr Lee Spetner, who taught information and communication theory at Johns Hopkins University: "In this chapter I'll bring several examples of evolution, [i.e., instances alleged to be examples of evolution] particularly mutations, and show that information is not increased. But in all the reading I've done in the life-sciences literature, I've never found a mutation that added information. All point mutations that have been studied on the molecular level turn out to reduce the genetic information and not to increase it. The NDT [neo-Darwinian theory] is supposed to explain how the information of life has been built up by evolution. The essential biological difference between a human and a bacterium is in the information they contain. All other biological differences follow from that. The human genome has much more information than does the bacterial genome. Information cannot be built up by mutations that lose it. A business can't make money by losing it a little at a time."17

This is not to say that no mutation is beneficial, that is, that it helps the organism to survive. But as pointed out in chapter 2, even increased antibiotic and pesticide resistance is usually the result of loss of information, or sometimes a transfer of information, never the result of new information. Other beneficial mutations include wingless beetles on small desert islands: if beetles lose their wings and so can't fly, the wind is less likely to blow them out to sea.18 Obviously, this has nothing to do with the origin of flight in the first place, which is what evolution is supposed to be about. Insect flight requires complicated movements to generate the patterns of vortices needed for lift; it took a sophisticated robot to simulate the motion.19
Would any evidence convince evolutionists?
The famous British evolutionist (and Communist) J.B.S. Haldane claimed in 1949 that evolution could never produce "various mechanisms, such as the wheel and magnet, which would be useless till fairly perfect".20 Therefore such machines in organisms would, in his opinion, prove evolution false. That is, evolution meets one criterion Teaching about Evolution claims is necessary for science: that there are tests that could conceivably prove it was wrong (the falsifiability criterion of the eminent philosopher of science, Karl Popper).

Recent discoveries have shown that there are indeed "wheels" in living organisms. These include the rotary motor that drives the flagellum of a bacterium, and the vital enzyme that makes ATP, the energy currency of life.21 These molecular motors have indeed fulfilled one of Haldane's criteria. Also, turtles,22 monarch butterflies,23 and bacteria24 that use magnetic sensors for navigation seem to fulfil Haldane's other criterion. I wonder whether Haldane would have had a change of heart if he had been alive to see these discoveries. Most evolutionists rule out intelligent design a priori, so the evidence, overwhelming as it is, would probably have had no effect.
Other marvels of design
The genetic information in the DNA cannot be translated except with many different enzymes, which are themselves encoded. So the code cannot be translated except via products of translation, a vicious circle that ties evolutionary origin-of-life theories in knots. These enzymes include double-sieve enzymes that make sure the right amino acid is linked to the right tRNA. One sieve rejects amino acids too large, while the other rejects those too small.25

The genetic code that's almost universal to life on earth is about the best possible for protecting against errors.26 [See also DNA: marvellous messages or mostly mess?] The genetic code also has vital editing machinery that is itself encoded in the DNA. This shows that the system was fully functional from the beginning: another vicious circle for evolutionists. [See also Self-replicating enzymes?] Yet another vicious circle, and there are many more, is that the enzymes that make the amino acid histidine themselves contain histidine.

The complex compound eyes of some types of trilobites (extinct and supposedly primitive invertebrates) were amazingly designed. They comprised tubes that each pointed to a different spot on the horizon, and had special lenses that focused light from any distance. Some trilobites had a sophisticated lens design comprising a layer of calcite on top of a layer of chitin (materials with precisely the right refractive indices) and a wavy boundary between them of a precise mathematical shape.27 The Designer of these eyes is a Master Physicist, who applied what we now know as the physical laws of Fermat's principle of least time, Snell's law of refraction, Abbe's sine law and birefringent optics.

Lobster eyes are unique in being modeled on a perfect square with precise geometrical relationships of the units. NASA X-ray telescopes copied this design.28 The amazing sonar system of dolphins was discussed in chapter 5. Many bats also have an exquisitely designed sonar system. The echolocation of fishing bats is able to detect a minnow's fin, as fine as a human hair, extending only 2 mm above the water surface. This fine detection is possible because bats can distinguish ultrasound echoes very close together. Man-made sonar can distinguish echoes 12 millionths of a second apart, although with a lot of work this can be cut to 6 to 8 millionths of a second. But bats relatively easily distinguish ultrasound echoes only 2 to 3 millionths of a second apart, according to researcher James Simmons of Brown University. This means they can distinguish objects just 3/10ths of a millimeter apart: about the width of a pen line on paper.29

The neural system of a leech uses trigonometric calculations to work out which muscles to move and by how much.30 From my own specialist field of vibrational spectroscopy: there is good evidence that our chemical-detecting sense (smell) works on the same quantum mechanical principles.31
Why should design be unscientific?
The real reason for rejecting the creation explanation is the commitment to naturalism. As shown in chapter 1, evolutionists have turned science into a materialistic game, and creation/design is excluded by their self-serving rules.32 Therefore, although Teaching about Evolution dismisses creation science as unscientific, this appears to be derived more from the rules of the game than from any evidence. Even some anti-creationist philosophers of science have strongly criticized the evolutionary scientific and legal establishment over these word games. They rightly point out that we should be more interested in whether creation is true or false than whether it meets some self-serving criteria for science.33

Many of these word games are self-contradictory, so one must wonder whether their main purpose is to exclude creation at any cost, rather than for logical reasons. For example, Teaching about Evolution claims on page 55: "The ideas of creation science derive from the conviction that an intelligent designer created the universe (including humans and other living things) all at once in the relatively recent past. However, scientists from many fields have examined these ideas and have found them to be scientifically insupportable. For example, evidence for a very young earth is incompatible with many different methods of establishing the age of rocks. Furthermore, because the basic proposals of creation science are not subject to test and verification, these ideas do not meet the criteria for science."

The Teaching about Evolution definition of creation science is almost right, although creationists would claim that different things were created on different days. However, Teaching about Evolution claims that the ideas of creation science have been examined and found unsupportable, then it claims that the basic proposals of creation science are not subject to test and verification. So how could its proposals have been examined (tested!) if they are not subject to test? Of course, it is not true that science has proved the earth to be billions of years old; see chapter 8.

The historian and philosopher of science Stephen Meyer concluded: "We have not yet encountered any good in principle reason to exclude design from science. Design seems just as scientific (or unscientific) as its evolutionary competitors … An openness to empirical arguments for design is therefore a necessary condition of a fully rational historical biology. A rational historical biology must not only address the question, 'Which materialistic or naturalistic evolutionary scenario provides the most adequate explanation of biological complexity?' but also the question 'Does a strictly materialistic evolutionary scenario or one involving intelligent agency or some other theory best explain the origin of biological complexity, given all relevant evidence?' To insist otherwise is to insist that materialism holds a metaphysically privileged position. Since there seems no reason to concede that assumption, I see no reason to concede that origins theories must be strictly naturalistic."34
Scientific laws of information and their implications - part 1
by Werner Gitt
The grand theory of atheistic evolution posits that matter and energy alone have
given rise to all things, including biological systems. To hold true, this theory must
attribute the existence of all information ultimately to the interaction of matter and
energy without reference to an intelligent or conscious source. All biological
systems depend upon information storage, transfer and interpretation for their
operation. Thus the primary phenomenon that the theory of evolution must account
for is the origin of biological information. In this article it is argued that fundamental
laws of information can be deduced from observations of the nature of information.
These fundamental laws exclude the possibility that information, including biological
information, can arise purely from matter and energy without reference to an
intelligent agent. As such, these laws show that the grand theory of evolution cannot
in principle account for the most fundamental biological phenomenon. In addition,
the laws here presented give positive ground for attributing the origin of biological
information to the conscious, wilful action of a designer. The far-reaching
implications of these laws are discussed.
Figure 1. The five levels of information. To fully characterise the concept of information, five aspects must be considered: statistics, syntax, semantics, pragmatics and apobetics. Information is represented (that is, formulated, transmitted, stored) as a language. From a stipulated alphabet, the individual symbols are assembled into words (code). From these words (each word having been assigned a meaning), sentences are formed according to the firmly defined rules of grammar (syntax). These sentences are the bearers of semantic information. Furthermore, the action intended/carried out (pragmatics) and the desired/achieved goal (apobetics) belong of necessity to the concept of information. All our observations confirm that each of the five levels is always pertinent for the sender as well as the receiver.

In the communication age, information has become fundamental to everyday life. However, there is no binding definition of information that is universally agreed upon by practitioners of engineering, information science, biology, linguistics or philosophy. There have been repeated attempts to grapple with the concept of information. The most sweeping formulation was recently put forward by a philosopher: "The entire universe is information."1 Here we will set out in a new direction, by seeking a definition of information with which it is possible to formulate laws of nature. Because information itself is non-material,2 this would be the first time that a law of nature (scientific law) has been formulated for such a mental entity. We will first establish a universal definition for information; then state the laws themselves; and, finally, we will draw eight comprehensive conclusions.
What is a law of nature?
If statements about the observable world can be consistently and repeatedly confirmed to be universally true, we refer to them as laws of nature. Laws of nature describe events, phenomena and occurrences that consistently and repeatedly take place. They are thus universally valid laws. They can be formulated for material entities in physics and chemistry (e.g. energy, momentum, electrical current, chemical reactions). Due to their explanatory power, laws of nature enjoy the highest level of confidence in science. The following attributes exhibited by laws of nature are especially significant:

Laws of nature know no exceptions. This sentence is perhaps the most important one for our purposes. If dealing with a real (not merely supposed) natural law, then it cannot be circumvented or brought down. A law of nature is thus universally valid, and unchanging. Its hallmark is its immutability. A law of nature can, in principle, be refuted: a single contrary example would end its status as a natural law.

Laws of nature are unchanging in time.

Laws of nature can tell us whether a process being contemplated is even possible or not. This is a particularly important application of the laws of nature.

Laws of nature exist prior to, and independent of, their discovery and formulation. They can be identified through research and then precisely formulated. Hypotheses, theories or models are fundamentally different: they are invented by people, not merely formulated by them. In the case of the laws of nature, for physical entities it is often, but not always,3 possible to find a mathematical formulation in addition to a verbal one. In the case of the laws for non-material entities presented here, the current state of knowledge permits only verbal formulations. Nevertheless, these can be expressed just as strongly, and are just as binding, as all others.

Laws of nature can always be successfully applied to unknown situations. Only thus was the journey to the moon, for example, possible.
When we talk of the laws of nature, we usually mean the laws of physics (e.g. the second law of thermodynamics, the law of gravity, the law of magnetism, the law of nuclear interaction) and the laws of chemistry (e.g. Le Chatelier's Principle of least restraint). All these laws relate exclusively to matter. But to claim that our world can be described solely in terms of material quantities is to fail to acknowledge the limits of one's perception. Unfortunately many scientists follow this philosophy of materialism (e.g. Dawkins, Küppers, Eigen4), remaining within this self-imposed boundary of insight. But our world also includes non-material concepts such as information, will and consciousness. This article (described more comprehensively in ref. 1) attempts, for the first time, also to formulate laws of nature for non-material quantities. The same scientific procedures used for identifying laws of nature for material entities are used here for identifying laws governing non-material entities. Additionally, these laws exhibit the same attributes as listed above for the laws of nature. They therefore fulfil the same conditions as the laws of nature for material quantities, and consequently possess a similar power of inference. Alex Williams describes this concept as "a revolutionary new understanding of information".5 In an in-depth personal discussion, Dr Bob Compton (Idaho, U.S.A.) proposed naming the laws of nature on information the Scientific Laws of Information (SLI), in order to distinguish them from the physical laws. This positive suggestion takes account of the shortcomings of the materialistic view, and I have therefore decided to use the term here.
What is information?
Information is not a property of matter!
The American mathematician Norbert Wiener made the oft-cited statement: "Information is information, neither matter nor energy."6 With this he acknowledged a very significant thing: information is not a material entity. Let me clarify this important property of information with an example. Imagine a sandy stretch of beach. With my finger I write a number of sentences in the sand. The content of the information can be understood. Now I erase the information by smoothing out the sand. Then I write other sentences in the sand. In doing so I am using the same matter as before to display this information. Despite this erasing and rewriting, displaying and destroying varying amounts of information, the mass of the sand did not alter at any time. The information itself is thus massless. A similar thought experiment involving the hard drive of a computer quickly leads to the same conclusion.

Norbert Wiener has told us what information is not; the question of what information really is will be answered in this article. Because information is a non-material entity, its origin is likewise not explicable by material processes. What causes information to come into existence at all; what is the initiating factor? What causes us to write a letter, a postcard, a note of congratulations, a diary entry or a file note? The most important prerequisite for the construction of information is our own will, or that of the person who assigned the task to us. Information always depends upon the will of a sender who issues the information. Information is not constant; it can be deliberately increased, and can be distorted or destroyed (e.g. through disturbances in transmission).

In summary: Information arises only through will (intention and purpose).
A definition of universal information
Technical terms used in science are sometimes also used in everyday language (e.g. energy, information). However, if one wants to formulate laws of nature, then the entities to which they apply must be unambiguous and clear-cut. So one always needs to define such entities very precisely. In scientific usage, the meaning of a term is in most cases considerably more narrowly stated than its range of meaning in everyday usage (i.e. it is a subset of the everyday meaning). In this way, a definition does more than just assign a meaning; it also acts to contain or restrict that meaning. A good natural-law definition is one that enables us to exclude all those domains (realms) in which laws of nature are not applicable. The more clearly one can establish the domain of definition, the more precise (and more certain) the conclusions which can be drawn.

Example: energy. In everyday language we use the word energy in a wide range of meanings and situations. If someone does something with great diligence, persistence and focused intensity, we might say "he applies his whole energy to the task". But the same word is used in physics to refer to a natural law, the law of energy. In such a context, it becomes necessary to substantially narrow the range of meaning. Thus physics defines energy as the capacity to do work, which is force × distance.7 An additional degree of precision is added by specifying that the force must be calculated in the direction of the distance. With this, one has come to an unambiguous definition and has simultaneously left behind all other meanings in common usage.
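In symbols (my own notation, not the article's), the physicist's definition of work can be written in LaTeX as

W = \vec{F} \cdot \vec{s} = F\,s\,\cos\theta

where \theta is the angle between the force and the displacement: only the component of the force in the direction of the distance contributes, which is precisely the added precision described above.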
The same must now be done for the concept of information. We have to say, very clearly, what information is in our natural-law sense. We need criteria in order to be able unequivocally to determine whether an unknown system belongs within the domain of our definition or not. The following definition permits a secure allocation in all cases: Information is always present when all the following five hierarchical levels are observed in a system: statistics, syntax, semantics, pragmatics and apobetics. If this applies to a system in question, then we can be certain that the system falls within the domain of our definition of information. It therefore follows that for this system all four laws of nature about information will apply.
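As an illustrative aside (my own sketch, not Gitt's; the class and field names are hypothetical), the definition amounts to a simple conjunction test over the five levels:

from dataclasses import dataclass

@dataclass
class SystemObservation:
    # Which of the five hierarchical levels are observed in a system.
    statistics: bool
    syntax: bool
    semantics: bool
    pragmatics: bool
    apobetics: bool

def is_universal_information(obs: SystemObservation) -> bool:
    # The definition requires ALL five levels to be present at once.
    return all((obs.statistics, obs.syntax, obs.semantics, obs.pragmatics, obs.apobetics))

# A crystal exhibits countable structure (statistics) but, on the article's
# view, none of the higher levels, so it falls outside the definition:
print(is_universal_information(SystemObservation(True, False, False, False, False)))  # False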
The five levels of universal information (figure 1)
Statistics. In considering a book, a computer program or the genome of a human being, we can ask the following questions: How many letters, numbers and words does the entire text consist of? How many individual letters of the alphabet (e.g. a, b, c … z for the Roman alphabet, or G, C, A and T for the DNA alphabet) are utilized? What is the frequency of occurrence of certain letters and words? To answer such questions it is irrelevant whether the text contains anything meaningful, is pure nonsense, or is just a randomly ordered sequence of symbols or words. Such investigations do not concern themselves with the content; they involve purely statistical aspects. All of this belongs to the first and thus bottom level of information: the level of statistics. The statistics level can be seen as the bridge between the material and the non-material world. (This is the level on which Claude E. Shannon developed his well-known mathematical concept of information.8)
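The statistics level is the only one of the five that is directly computable. A small Python sketch (mine, for illustration) makes the blindness of this level to meaning explicit: a sentence and a random scrambling of the same letters have identical letter frequencies and therefore identical Shannon measures:

import math
import random
from collections import Counter

def shannon_entropy(text: str) -> float:
    # Average information per symbol in bits (Shannon's statistical measure);
    # counts are sorted so the floating-point sum is order-independent.
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in sorted(counts.values()))

sentence = "IN THE BEGINNING WAS INFORMATION"
scrambled = "".join(random.sample(sentence, len(sentence)))  # same symbols, meaning destroyed

print(shannon_entropy(sentence) == shannon_entropy(scrambled))  # True: statistics cannot see meaning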
Figure 2. The first five verses of Genesis 1 written in a special code.
Syntax. If we look at a text in any particular language, we see that only certain combinations of letters form permissible words of that particular language. This is determined by a pre-existing, wilful convention. All other conceivable combinations do not belong to that language's vocabulary. Syntax encompasses all of the structural characteristics of the way information is represented. This second level involves only the symbol system itself (the code) and the rules by which symbols and chains of symbols are combined (grammar, vocabulary). This is independent of any particular interpretation of the code.

Semantics. Sequences of symbols and syntactic rules form the necessary pre-conditions for the representation of information. But the critical issue concerning information transmission is not the particular code chosen, nor the size, number or form of the letters, nor even the method of transmission. It is, rather, the semantics (Greek: semantikos = significant meaning), i.e. the message it contains: the proposition, the sense, the meaning. Information itself is never the actual object or act, neither is it a relationship (event or idea); the encoded symbols merely represent that which is discussed. Symbols of extremely different nature play a substitutionary role with regard to the reality or a system of thought. Information is always an abstract representation of something quite different. For example, the symbols in today's newspaper represent an event that happened yesterday; this event is not contemporaneous; moreover, it might have happened in another country and is not at all present where and when the information is transmitted. The genetic words in a DNA molecule represent the specific amino acids that will be used at a later stage for synthesis of protein molecules. The symbols of figure 2 represent what happened on day 1.

Pragmatics. Information invites action. In this context it is irrelevant whether the receiver of information acts in the manner desired by the sender of the information, or reacts in the opposite way, or doesn't do anything at all. Every transmission of information is nevertheless associated with the expectation, from the side of the sender, of generating a particular result or effect on the receiver. Even the shortest advertising slogan for a washing powder is intended to result in the receiver carrying out the action of purchasing this particular brand in preference to others. We have thus reached a completely new level at which information operates, which we call pragmatics (Greek pragma = action, doing). The sender is also involved in action to further his desired outcome (more sales/profit), e.g. designing the best message (semantics) and transmitting it as widely as possible in newspapers, TV, etc.

Apobetics. We have already recognized that for any given information the sender is pursuing a goal. We have now reached the last and highest level at which information operates: namely, apobetics (the aspect of information concerned with the goal, the result itself). In linguistic analogy to the previous descriptions, the author has here introduced the term apobetics (from the Greek apobeinon = result, consequence). The outcome on the receiver's side is predicated upon the goal demanded/desired by the sender, that is, the plan or conception. The apobetics aspect of information is the most important of the five levels because it concerns the question of the outcome intended by the sender.

In his outstanding article "Inheritance of biological information",5 Alex Williams has explained this five-level concept by applying it to biological information. Using the last four of the five levels, we developed an unambiguous definition of information: namely, an encoded, symbolically represented message conveying expected action and intended purpose. We term any entity meeting the requirements of this definition universal information (UI).
Scientific laws of information (SLI)
In the following we will describe the four most important laws of nature about information.9

SLI-1
A material entity cannot generate a non-material entity10
In our common experience we observe that an apple tree bears apples, a pear tree yields pears, and a thistle brings forth thistle seeds. Similarly, horses give birth to foals, cows to calves and women to human babies. Likewise, we can observe that something which is itself solely material never creates anything non-material. The universally observable finding of SLI-1 can now be couched in somewhat more specialized form, arriving at SLI-2.
SLI-2
Universal information is a non-material fundamental entity
The materialistic worldview has so widely infiltrated the natural sciences that it has become the ruling paradigm. However, this is an unjustified dogma. The reality in which we live is divisible into two fundamentally distinguishable realms: the material and the non-material. Matter involves mass, which is weighable in a gravitational field. In contrast, all non-material entities (e.g. information, consciousness, intelligence and will) are massless and thus have zero weight. Information is always based on an idea; it is thus also massless and does not arise from physical or chemical processes. Nor is information correlated with matter in the way that energy, momentum or electricity is. However, information is stored, transmitted and expressed through matter and energy.
The distinction between material and non-material entities
Necessary Condition (NC): That a non-material entity must be massless (NC: m = 0) is indeed a necessary condition, but it is not sufficient to classify it as non-material. To be precise, the sufficient condition must also be met.

Sufficient Condition (SC): An observed entity can be judged to be non-material if it has no physical or chemical correlation with matter. This is always the case if the following four conditions are met:
SC1: The entity has no physical or chemical interaction with matter.
SC2: The entity is not a property of matter.
SC3: The entity does not originate in pure matter.
SC4: The entity is not correlated with matter.

Photons are massless particles, and they provide a good contrast to the SC because they do interact with matter and can originate from and be correlated with matter. Information always depends on an idea; it is massless and does not originate from a physical or chemical process.11 The necessary condition (NC: m = 0) and all four sufficient conditions (SC1 to SC4) are fulfilled, and therefore universal information is a non-material entity. The fact that it requires matter for storage and transportation does not turn it into matter. Thus we can state: Universal information is a non-material entity because it fulfils both conditions:
it is massless; and,
it is neither physically nor chemically correlated with matter.

Occasionally it is claimed that information is a physical (and thereby material) entity. But as presented under SLI-1, information is clearly a non-material entity. There is another very powerful justification for stating that information cannot be a physical quantity. The SI system of units has seven base units: mass, length, electric current, temperature, amount of substance, luminous intensity and time. All physical quantities can be expressed either in terms of one of these base units (e.g. area = length × length) or by a combination (by multiplication or division) of several base units (e.g. momentum = mass × length / time). This is not possible in the case of information, and therefore information is not a physical magnitude!
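The argument can be made concrete with a small sketch (my own illustration, not from the article): represent each physical quantity as a vector of exponents over the seven SI base units; every physical magnitude has such a vector, while universal information, on the article's view, has none:

from dataclasses import dataclass

# Exponent order: mass, length, electric current, temperature,
# amount of substance, luminous intensity, time.

@dataclass(frozen=True)
class Dimension:
    exponents: tuple

    def __mul__(self, other: "Dimension") -> "Dimension":
        return Dimension(tuple(a + b for a, b in zip(self.exponents, other.exponents)))

    def __truediv__(self, other: "Dimension") -> "Dimension":
        return Dimension(tuple(a - b for a, b in zip(self.exponents, other.exponents)))

MASS = Dimension((1, 0, 0, 0, 0, 0, 0))
LENGTH = Dimension((0, 1, 0, 0, 0, 0, 0))
TIME = Dimension((0, 0, 0, 0, 0, 0, 1))

area = LENGTH * LENGTH           # area = length x length
momentum = MASS * LENGTH / TIME  # momentum = mass x length / time
print(area.exponents, momentum.exponents)
# No assignment of base-unit exponents expresses universal information,
# which is the article's point that it is not a physical magnitude.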
SLI-3
Universal information cannot be created by statistical processes
The grand theory of evolution would gain some empirical support if it could be demonstrated, in a real experiment, that information could arise from matter left to itself without the addition of intelligence. Despite the most intensive worldwide efforts, this has never been observed. To date, evolutionary theoreticians have only been able to offer computer simulations that depend upon principles of design and the operation of pre-determined information. These simulations do not correspond to reality, because the theoreticians smuggle their own information into the simulations.
SLI-4
Universal information can only be produced by an intelligent sender
The question here is: what is an intelligent sender? Several attributes are required to define an intelligent sender.
Definition D1: An intelligent sender, as mentioned in SLI-4,
is conscious
has a will of its own12
is creative
thinks autonomously
acts purposefully
SLI-4 is a very general law from which several more specific laws may be derived. We know the Maxwell equations from physics. They describe, in a brilliant generalization, the relationship between changing electric and magnetic fields. But for most practical applications these equations are far too complex and cumbersome, and for this reason we use more specific formulations, such as Ohm's Law, Coulomb's Law or the induction law. Similarly, in the following section we will present four more specific formulations of SLI-4 (SLI-4a to 4d) that are easier to use for our practical conclusions.
SLI-4a
Every code is based upon a mutual agreement between sender and receiver
The essential characteristic of a code symbol (character) is that it was at one point in time freely defined. The set of symbols so created represents all allowed symbols (by definition). They are structured in such a way as to fulfil, as well as possible, their designated purpose (e.g. a script for the blind such as Braille must be sufficiently palpable; musical symbols must be able to describe the duration and pitch of the notes; chemical symbols must be able to designate all the elements). An observed signal may give the impression that it is composed of symbols, but if it can be shown that the signal is a physical or chemical property of the system, then the fundamental free mutual agreement attribute is missing and the signal is not a symbol according to our definition.13
SLI-4b
There is no new universal information without an intelligent sender
The process of the formation of new information (as opposed to simply copied information) always depends upon intelligence and free will. A sequence of characters is selected from an available, freely defined set of symbols such that the resulting string of characters represents (all five levels of) information. Since this cannot be achieved by a random process, there must always be an intelligent sender. One important aspect of this is the application of will, so that we may also say: information cannot be created without a will.
SLI-4c
Every information transmission chain can be traced back to an intelligent sender14
It is useful to distinguish here between the original sender and an intermediate sender. We mean by the original sender the author of the information, and he must always be an individual equipped with intelligence and a will. If, after the original sender, there follows a machine-aided chain consisting of several links, the last link in the chain might be mistaken for the originator of the message. Since this link is only apparently the sender, we call it the intermediate sender (but it is not the original one!).

The original sender is often not visible: in many cases the author of the information is not, or is no longer, visible. It is not in contradiction to the requirement of observability when the author of historical documents is no longer visible; in such a case he was, however, observable once upon a time. Sometimes the information received has been carried via several intermediate links. Here, too, there must have been an intelligent author at the beginning of the chain. Take the example of a car radio: we receive audible information from the loudspeakers, but these are not the actual source; neither is the transmission tower that also belongs to the transmission chain. An author (an intelligent originator) who created the information is at the head of the chain. In general we can say that there is an intelligent author at the beginning of every information transmission chain.

The actual (intermediate) sender may not be an individual: we could gain the impression, in systems with machine-aided intermediate links, that the last observed member is the sender. The user of a car wash can only trace the wash program back to the computer, but the computer is only the intermediate sender; the original sender (the programmer) is nowhere to be seen. The internet surfer sees all kinds of information on his screen, but his home computer is not the original sender; rather, someone who is perhaps at the other end of the world has thought out the information and put it on the internet. It is by no means different in the case of the DNA molecule. The genetic information is read off a material substrate, but this substrate is not the original sender; rather, it is only the intermediate sender.

It may seem obvious that the last member of the chain is the sender, because it seems to be the only discernible possibility. But in a system with machine-aided intermediate links, the last member is never the original sender (the author of the information); it is an intermediate sender. This intermediate sender may not be an individual, but rather only part of a machine that was created by an intelligence. Individuals can pass on information they have received and in so doing act as intermediate senders. However, they are in actuality only intermediate senders if they do not modify the information. If an intermediary changes the information, he may then be considered the original sender of a new piece of information. Even in the special case where the information was not transmitted via intermediaries, the author may remain invisible. We find numerous hieroglyphic texts in Egyptian tombs or on obelisks, but the authors are nowhere to be found. No one would conclude that there had been no author.
SLI-4d
Attributing meaning to a set of symbols is an intellectual process requiring intelligence
We have now defined the five levels (statistics, syntax, semantics, pragmatics and apobetics) at which universal information operates. Using SLI-4d we can make the following general observation: these five aspects are relevant for both the sender and the receiver.

Origin of information: SLI-4d describes our experience of how any information comes into being. Firstly, we draw on a set of symbols (characters) that have been defined according to SLI-4a. Then we use one symbol after another from the set to create units of information (e.g. words, sentences). This is not a random process, but requires the application of intelligence. The sender has knowledge of the language he is using and he knows which symbols he needs in order to create his intended meaning. Furthermore, the connection between any given symbol and its meaning is not originally determined by the laws of physics or energy. For example, there is nothing physical about the three letters d, o, g that necessarily caused them originally to be associated with man's much-loved pet. The fact that there are other words for dog in other languages demonstrates that the association between a word and its meaning is mental rather than physical/energetic. In other words, the original generation of information is an intellectual process.

Finally, we make three remarks that have fundamental significance:

Remark R1: Technical and biological machines can store, transmit, decode and translate information without understanding its meaning and purpose.

Remark R2: Information is the non-material basis for all technological systems and for all biological systems. There are numerous systems that do not possess their own intelligence but nevertheless can transfer or store information or steer processes. Some such systems are inanimate (e.g. networked computers, process controls in a chemical factory, automatic production lines, car washes, robots); others are animate (e.g. cell processes controlled by information, the bee waggle dance). It is important to recognize that biological information differs from humanly generated information in three essential ways. First, in living systems we find the highest known information density.15 Second, the programs in living systems exhibit an extremely high degree of sophistication. No scientist can explain the program that produces an insect that looks like a withered leaf. No biologist understands the secret of an orchid blossom that is formed and coloured like a female wasp, and smells like one too. We are able to think, feel, desire, believe and hope. We can handle a complex thing such as language, but we are aeons away from understanding the information control processes that develop the brain in the embryo. Biological information displays a sophistication that is unparalleled in human information. Third, no matter how ingenious human inventions and programs may be, it is always possible for others to understand the underlying ideas. For example, during World War II the English succeeded, after considerable effort, in understanding completely the German Enigma coding machine which had fallen into their hands. From then on it was possible to decode German radio messages. However, most of the ingenious ideas and programs we find in living organisms are hardly understood by us at all, or at best only partly. To make an exact replica is impossible.

Remark R3: The storage and transmission of information requires a material medium. Imagine a piece of information written on a blackboard. Now wipe the board with a duster. The information has vanished, even though all the particles of chalk are still present. The chalk in this case was the necessary material medium, but the information was represented by the particular arrangement of the particles. And this arrangement did not come about by chance; it had a mental origin. The same information could have been stored and transmitted in Indian smoke signals through the arrangement of puffs of smoke, or in a computer's memory through magnetized domains. One could even line up an array of massive rocks in a Morse code pattern. So, clearly, the amount or type of matter upon which the information resides is not the issue. Even though information requires a material substrate for storage and transmission, information is not a property of matter. In the same way, the information in living things resides on the DNA molecule. But it is no more an inherent property of the physics and chemistry of DNA than the blackboard's message was an intrinsic property of chalk.
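Remark R3 is easy to demonstrate in a few lines of Python (an illustrative sketch of my own; the substrate labels are fanciful): the same message can be rendered as chalk-style text, smoke-puff-style Morse, or magnetized-domain-style bits, and it is the arrangement, not the matter, that carries it:

# The same message rendered in three different ways, standing in for chalk
# marks, smoke puffs and magnetized domains.
MORSE = {"S": "...", "O": "---"}  # only the letters actually used

message = "SOS"
as_text = message
as_morse = " ".join(MORSE[ch] for ch in message)
as_bits = " ".join(format(ord(ch), "08b") for ch in message)

for substrate, rendering in (("chalk", as_text), ("smoke", as_morse), ("magnets", as_bits)):
    print(f"{substrate:>7}: {rendering}")
# Three different material arrangements; one and the same message.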
Conclusion
All four of these laws of nature about information have arisen from observations in the real world. None of them has been falsified by way of an observable process or experiment. The grand theory of atheistic evolution must attribute the origin of all information ultimately to the interaction of matter and energy, without reference to an intelligent or conscious source. A central claim of atheistic evolution must therefore be that the macro-evolutionary processes that generate biological information are fundamentally different from all other known information-generating processes. However, the natural laws described here apply equally in animate and inanimate systems, and demonstrate this claim to be both false and absurd.
Implications of the scientific laws of information - part 2

by Werner Gitt
In the past there were so-called perpetual motion experts: inventors and tinkerers who wanted to build a machine that would run continuously without any supply of energy. The discovery of the law of conservation of energy (a law of nature) brought all efforts to solve this challenge to a halt, because a perpetuum mobile is an impossible machine. Such a machine will never be built, as the laws of nature make it impossible. Evolution could only occur if the possibility existed that information could arise by itself out of matter. Those who believe that evolution is a plausible concept believe in a perpetuum mobile of information. If there were laws of nature that preclude a perpetuum mobile of this kind, the theory of evolution would be disproved. Such laws of nature actually exist, and I have presented these at many universities throughout the world. The concept of this theory of information is explained in the first article (part 1) in this issue. There I enumerated four scientific laws of information arising from observations in the real world. None of them has been falsified by way of an observable process or experiment. In this article, eight far-reaching conclusions will be drawn.
Eight comprehensive conclusions
Having firmly established the domain of our definition of information in part 1, and familiarized ourselves with the laws of nature about information derived from experience, known as scientific laws of information (SLI; see figure 1), we can now zero in on effectively applying them. Hereafter the term information will be used when referring to universal information. There are eight very far-reaching conclusions that answer fundamental questions. All scientific thought and practice reaches a limit beyond which science is inherently unable to take us. This situation is no exception. But some of our questions involve matters beyond this limiting boundary, and so to successfully transcend it we need a higher source of knowledge. We will proceed in the following sequential manner: first set out the (briefly formulated) conclusion itself, then give the basis for that conclusion.

Figure 1. The four most important laws of nature about information, known as scientific laws of information (SLI):
SLI-1: A material entity cannot generate a non-material entity.
SLI-2: Universal information is a non-material fundamental entity.
SLI-3: Universal information cannot be created by statistical processes.
SLI-4: Universal information can only be produced by an intelligent sender.
4a: Every code is based upon a mutual agreement between sender and receiver.
4b: There is no new universal information without an intelligent sender.
4c: Every information transmission chain can be traced back to an intelligent sender.
4d: Attributing meaning to a set of symbols is an intellectual process requiring intelligence.
1. A Designer exists: refutation of atheism
Because it can be established that all forms of life contain a code (DNA, RNA), as well as all of the other levels of information, we are within the domain of our definition of information. We can therefore conclude that: There must be an intelligent sender! [Applying SLI-4]

Basis for this conclusion
Because there has never been a process in the material world, demonstrable through observation or experiment, in which information has arisen without prior intelligence, this must also hold for all the information present in living things. Furthermore, what we do observe about information, namely that it intrinsically depends upon an original act of intelligence to construct it, as defined by SLI-4d, excludes the possibility of information coming from non-intelligence. Thus SLI-4b requires here, too, an intelligent author who wrote the programs. Conclusion 1 is therefore also a refutation of atheism. The top of figure 2 outlines the realm that is, in principle, inaccessible to natural science, namely: who is the message sender? To answer that the sender cannot exist because the methods of human science (the scientific boundary) cannot perceive him both misapplies science and is untenable according to the laws of information. The requirement that there must be a personal sender exercising his own free will cannot be relinquished.
2. There is only one designer, who is all-knowing and eternal
The information encoded in DNA far exceeds all our current technologies. Hence, no human being could possibly qualify as the sender, who must therefore be sought outside of our visible world. We can conclude that: There is only one sender, who must not only be exceptionally intelligent but must possess an infinitely large amount of information and intelligence, i.e. he must be omniscient (all-knowing), and beyond that must also be eternal. [Applying SLI-1, SLI-2, SLI-4b]

Basis for this conclusion
Figure 2. The origin of life. If one considers living things as unknown systems that can be analysed with the help of natural laws, then one finds all five levels of the definition of information: statistics (here left off for simplicity), syntax, semantics, pragmatics and apobetics. In accordance with the natural laws of information, the origin of any information requires a sender equipped with intelligence and will. The fact that the sender in this case is not observable is not in contradiction to these laws. In a huge library with thousands of volumes, the authors are also not visible; but no one would maintain that there was no author for all this information. According to SLI-4b, at the beginning of every chain of information there is an intelligent sender. When one applies this to biological information, then here, too, there must be an intelligent author of the information.

In DNA molecules we find the highest density of information known to us.1 Because of SLI-1, no conceivable processes in the material realm qualify as the source of this information. Humans, who can, of course, generate information (e.g. letters, books), are also obviously excluded as the source of this biological information. This leaves only a sender who operated outside of our normal physical world. After a lecture at a university about biological information and the necessary sender, a young lady student said to me: "I can tell where you were heading when you spoke of an intelligent sender; you meant a designer. I can accept that as far as it goes; without a sender, that is, without a designer, it wouldn't work. But who informed him so that He could program the DNA molecules?" Two explanations spring to mind:

Explanation a): Imagine that this designer was considerably more intelligent than we are, but nevertheless limited. Let's assume furthermore that he had so much intelligence (thus information) at his disposal that he was able to program all biological systems. The obvious question then is: who gave him this information and who taught him? This would require a higher information-giver I1, that is, a super-designer, who knew more than the designer. If I1 knew more than the designer, but was also limited, then he would in turn require an information-giver I2, i.e. a super-super-designer. This line of reasoning leads to an extension of the series I3, I4 … to I-infinity. One would require an infinite number of designers, such that in this long chain every (n+1)th deity always knew more than the nth. Only once one reached the I-infinity super-super-super … designer could we say that such a designer would be unlimited and all-knowing. However, traversing an infinite is impossible (whether it is a temporal, spatial or, as in this example, an ontological infinity), and so this explanation is unsatisfactory.

Explanation b): It is simpler and more satisfying to assume only a single sender: a prime mover, an ultimate designer. But then one would also need to assume that such a designer is infinitely intelligent and in command of an infinite amount of information. So he must be all-knowing (omniscient).

Which of explanations a) and b) is correct? Both are logically possible, so we must make a decision that is not derived from the SLI, based on the following considerations. In reality, there is no such thing as an actual infinite number of anything. The number of atoms in the universe is unimaginably vast, but nevertheless finite, and thus in principle able to be counted. The total number of people, ants, or grains of wheat that have ever existed is also vast, but finite. Although infinity is a useful mathematical abstraction, the fact is that in reality there can be no such thing as an infinite number of anything that can be reached by counting for long enough. Thus explanation a) fails the test of plausibility, leaving only explanation b). That means there is only one sender. But this one sender must therefore be all-knowing. This conclusion is a consequence of consistently applying the laws of nature about information. What does it mean that the designer (the author of biological information) is infinite? It means that for Him there is no question that He cannot answer, and He knows all things, not merely about the present and the past; even the future is not hidden from Him. But if He knows all things, even beyond all restrictions of time, then He Himself must be eternal.
3. The designer is immensely powerful
Because the sender:

ingeniously
encoded
the
information
into
the
DNA
molecules,
must have designed the complex bio-machinery that decodes the information and carries out all the processes of
biosynthesis,
and
created all the details of the original construction and reproductive capacities of all living things,
We can conclude that:
The sender accomplished his purpose and, therefore, he must be powerful.
Basis for this conclusion
In conclusion 2, we determined on the basis of laws of nature that the sender must be all-knowing and eternal. Now we consider the question of the extent of His power. Power encompasses all that which would be described under headings such as strength, creativity, capability and might. Power of this sort is absolutely necessary in order to have created all living things. Because of His infinite knowledge, the sender knows, for example, how DNA molecules can be programmed. But this knowledge is not sufficient to fashion such molecules in the first place.3 Taking the step from mere knowledge to practical application requires the capacity to build all the necessary biomachinery in the first place. Research enables us to observe these hardware systems. But we do not see them come about other than through a coordinated process of cellular replication, which requires the same biomachinery to transmit and carry out the replication programs. Thus they had to be originally constructed by the sender. He had the task of creating the immense variety of all the basic biological types (created kinds), including the construction specifications for their biological machinery. There are no physico-chemical tendencies in raw matter for complex information-bearing molecules to form spontaneously. Without creative power, life would not have been possible. The obvious question here is the same as in conclusion 2: who gave Him this power? This would require a higher power-giver, P1, that is, a super-designer, who has more power than the designer. If we proceed as shown before, according to explanations a) and b), we come to the conclusion that the sender must be all-powerful.
4. The designer is non-material
Because information is a non-material fundamental entity, it cannot originate from a material one.
We can therefore conclude that:
The sender must have a non-material component (spirit) to his nature.
[Applying SLI-1, SLI-2]
Basis for this conclusion
Unaided matter has never been observed to generate information in the natural-law sense (i.e. with all five levels: statistics, syntax, semantics, pragmatics, apobetics). Information is a non-material entity and therefore requires a non-material source for its origin. We have already reasoned our way to some characteristics of the sender. Now we have a further one: he must be of a non-material nature, or at least must possess a non-material component to his nature.
5. No human being without a soul: refutation of materialism
Because people have the ability to create information, this cannot originate from our material portion (body).
We can therefore conclude that:
Each person must have a non-material component (spirit, soul).
[Applying SLI-1, SLI-2]
Basis for this conclusion
Evolutionary biology is locked into an exclusively materialistic paradigm. Reductionism (in which explanations are limited
exclusively to the realm of the material) has been elevated to a fundamental principle within the evolutionary paradigm. With
the aid of the laws of information, materialism may be refuted as follows: We all have the capacity to create new information.
We can put our thoughts down in letters, essays and books, or carry on creative conversations and give lectures.5 In the
process, we are producing a non-material entity, namely information. (The fact that we need a material substrate to store
and transfer information has no bearing on the nature of information itself.) From this we can draw a very important
conclusion: namely that besides our material body we must have a non-material component. The philosophy of materialism,
which found its strongest expression in Marxism-Leninism and communism, can now be scientifically refuted with the help of
the scientific laws about information.
6. Big bang is impossible
Since information is a non-material entity, the assertion that the universe arose solely from matter and energy (scientific materialism) is demonstrably false.6
[Applying SLI-2]
Basis for this conclusion
It is widely asserted today that the universe owes its origin to a primeval explosion in which only matter and energy were available. Everything that we experience, observe and measure in our world is, according to this view, solely the result of these two physical entities. Energy is clearly a material entity, since it is correlated with matter through Einstein's mass/energy equivalence relationship E = mc². Is this big bang theory just as refutable as a perpetual motion machine?
Answer: YES, with the help of the scientific laws about information. In our world we find an abundance of information such
as in the cells of all living things. According to SLI-1, information is a non-material entity and therefore cannot possibly have
arisen from unaided matter and energy. Thus the common big bang worldview is false.
7. No evolution
Since biological information (the fundamental component of all life) originates only from an intelligent sender, and all theories of chemical and biological evolution require that information originated solely from matter and energy (no sender), we conclude that:
All theories or concepts of chemical and biological evolution (macroevolution) are false.
[Applying SLI-1, SLI-2, SLI-4b, SLI-4d]
Basis for this conclusion
Judging by its worldwide following, evolution has become probably the most widespread teaching of our time. In accordance with its basic precepts, we see an ongoing attempt to explain all life on a purely physical/chemical plane (reductionism). The reductionists prefer to think of a seamless transition from the non-living to the living.7 With the help of the laws of information we can reach a comprehensive and fundamental conclusion: the idea of macroevolution, i.e. the journey from chemicals to primordial cell to man, is false. Information is a fundamental and absolutely necessary factor for all living things. But all information (and living systems are not excluded) must necessarily have a non-material source. The evolutionary model, in the light of the laws of information, shows itself to be an intellectual perpetual motion machine.

Now the question arises: where do we find the sender of the information stored within the DNA molecules? We don't observe him, so did this information somehow come about in a molecular-biological fashion? The answer is the same as in the following cases. Consider the wealth of information preserved in Egypt in hieroglyphics. Not a single stone allows us to see any part of the sender. We only find these footprints of his or her existence chiselled into stone. But no one would claim that this information arose without a sender and without a mental concept. In the case of two connected computers exchanging information and setting off certain processes, there is also no trace of a sender. However, all the information concerned also arose at some point from the thought processes of one (or more) programmers.8

The information in DNA molecules is transferred to RNA molecules; this occurs in a fashion analogous to a computer transferring information to another computer. In the cell, an exceptionally complex system of biomachinery is at work which translates the programmed commands in an ingenious fashion. But we see nothing of the sender. However, to ignore him would be a scientifically untenable reductionism.

We shouldn't be surprised to find that the programs devised by the sender of biological information are much more ingenious than all of our human programs. After all, we are here dealing with (as already explained in conclusion 2) a sender of infinite intelligence. The designer's program is so ingeniously conceived that it even permits a wide range of adaptations to new circumstances. In biology, such processes are referred to as microevolution. However, they have nothing to do with an actual evolutionary process in the way this word is normally used, but are properly understood as parameter optimizations within the same kind.
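The DNA-to-RNA transfer mentioned above is compared to one computer transferring data to another. A minimal sketch of that analogy in Python (our illustration only; real transcription involves polymerases, promoters and far more machinery than a lookup table):

# Illustrative sketch: DNA -> mRNA transcription treated as symbol-for-symbol
# copying, analogous to machine-to-machine data transfer.
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(dna_template: str) -> str:
    """Return the mRNA copied from a DNA template strand."""
    return "".join(COMPLEMENT[base] for base in dna_template)

print(transcribe("TACGGATGC"))  # hypothetical template -> AUGCCUACG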
In brief: The laws of information exclude macroevolution of the sort envisaged by the general theory of evolution. By contrast, microevolutionary processes (= programmed genetic variation), with their frequently wide-ranging adaptive processes within a kind, are explicable with the help of ingenious programs instituted by the designer.
8. No life from pure matter
Because the distinguishing characteristic of life is a non-material entity (namely information), matter cannot have given rise to it.
From this we conclude that:
There is no process inherent within matter alone that leads from non-living chemicals to life. No purely material processes, whether on the earth or elsewhere in the universe, can give rise to life.
[Applying SLI-1]
Basis for this conclusion
Proponents of evolutionary theory assert that life is a purely material phenomenon, which will arise whenever the right conditions are present. However, the most universal and distinguishing characteristic of life, information, is of a non-material nature. Thus we can apply scientific law SLI-1, which says: a purely material entity cannot generate a non-material entity.

Figure 3 shows an ant with a microchip. Microchips are the storage elements of present-day computers, and they represent matter plus information. The ant contains one material part (matter) and two non-material parts (information and life). We repeatedly hear of the discovery of water somewhere in our planetary system (e.g. on Jupiter's moon Europa), or that carbon-containing substances have been found somewhere in our galaxy. These announcements are promptly followed by speculations that life could have developed there. This repeatedly reinforces the impression that so long as the necessary chemical elements or molecules are present on some astronomical body, and certain astronomical/physical conditions are fulfilled, one can more or less count on life being there. But as we have shown with the help of two laws, this is impossible. Even under the very best chemical conditions, accompanied by optimal physical conditions, there would still be no hope of life developing.

Figure 3. Ant carrying a microchip. Both the ant and the microchip contain information, a non-material entity, that cannot be generated by a material entity and which points to intelligent, creative input. The ant, moreover, contains two non-material parts: information and life. (From: Werkbild Philips, with the kind permission of Valvo Unternehmensbereich Bauelemente of Philips GmbH, Hamburg.)

Since the phenomenon of life ultimately requires something non-material, every kind of living thing required a mind as its ultimate initiator. The four Australian scientists Don Batten, Ken Ham, Jonathan Sarfati and Carl Wieland thus correctly state: 'Without intelligent, creative input, lifeless chemicals cannot form themselves into living things. The idea that they can is the theory of spontaneous generation, disproved by the great creationist founder of microbiology, Louis Pasteur.'9 With this new type of approach, applying the laws of information, Conclusions 7 and 8 have both shown us that we can exclude the spontaneous origin of life in matter.
Conclusion
No one has ever observed water flowing uphill. Why are there no exceptions to this? Because there is a law of nature that
universally excludes this process from happening. Many plausible arguments have been raised against the teachings of
atheism, materialism, evolution and the big bang worldview. But if it is possible to find scientific laws that contradict these
ideas, then, since scientific laws have the highest degree of scientific credibility possible, we will have scientifically falsified
them. We will have done so just as effectively as the way in which perpetual motion machines (those which supposedly run
forever without any energy from outside) have been shown to be impossible through the application of scientific laws.
This is precisely what we have demonstrated in this paper. We have presented four scientific laws about information.10 From these we can generate comprehensive conclusions about the designer, the origin of life, and humanity. With the help of the laws of information we have been able to refute all of the following:
The purely materialistic approach in the natural sciences.
All current notions of evolution (chemical, biological).
Materialism (e.g. man as purely matter plus energy).
The big bang as the cause of this universe.
Atheism.
Variation, information and the created kind
by Dr Carl Wieland
Summary
All observed biological changes involve only conservation or decay of the underlying genetic information. Thus we do not
observe any sort of evolution in the sense in which the word is generally understood. For reasons of logic, practicality and
strategy, it is suggested that we:
Avoid the use of the term microevolution.
Rethink our use of the whole concept of variation within kind.
Avoid taxonomic definitions of the created kind in favour of one which is overtly axiomatic.
Most popular literature on evolution more or less implies that since we see small changes going on today in successive generations of living things, we only have to extend this in time and we will see the types of changes which have caused single-cell-to-man evolution. Creationists are thus seen as drawing some sort of imaginary 'Maginot line' and saying, in effect, 'this much variation we will allow but no more'; call it microevolution or variation within kind. When a creationist says that, after all, mosquitoes are not seen turning into elephants or moths, this is regarded as a simplistic retreat. Such a criticism is not without some justification, because the neo-Darwinist can rightly say that he would not expect to see that sort of change in his lifetime either. The post-neo-Darwinist may say that our sample of geologic time is too small to be sure of seeing a 'hopeful monster' or any sort of significant saltational change.

Another reason why the creationist position often appears as one of weakness is that we are perceived as admitting variation only because of being forced to do so by observation, then simply escaping the implications of variation by saying it does not go far enough. And we appear to redraw our Maginot line depending on how much variation is demonstrated. It will be shown shortly, though, that this is a caricature of the creationist position, and that the limits to variation arise from basic informational considerations at the genetic level.
The created kinds
Observed variation does appear to have limits. It is tempting to use this fact to show that there are created kinds, and that variation is only within the limits of such kinds. However, the argument is circular and thus vulnerable. Since creationists by definition regard all variation as within the limits of the created kind (see for example the statement of belief of the Creation Research Society of the USA), how can we then use observations to prove that variation is within the limits of the kind? To put it another way: of course we have never observed variation across the kind, since whatever two varieties descend from a common source, they are regarded as the same kind. It is no wonder that evolutionists are keen to press us for an exact definition of the created kind, since only then does our claim that variation occurs only within the kind become non-tautologous and scientifically falsifiable.

Circular reasoning does not invalidate the concept of created kinds, however. In the same way, natural selection is also only capable of a circular definition (those who survive are the fittest, and the fittest are the ones who survive), but it is nevertheless a logical, easily observable concept. All we are saying is that arguments which are inherently circular cannot be invoked as independent proof of the kinds.

When I claim that such independent proof may not be possible by the very nature of things, this statement is in no way a cop-out. For instance, let us say we happened upon the remnants of an island which had exploded, leaving behind the debris of rocks, trees, sand, etc. It may be impossible in principle to reconstruct the original positions of the pieces in relation to each other before the explosion. This does not, however, mean that it is not possible to deduce with a great degree of confidence that the current state of the debris is consistent with the sort of explosion recorded for us by eyewitness testimony, rather than arising by some other mechanism.

In like manner, we can show that the observations of the living world are highly consistent with the concept of original created kinds, and inconsistent with the idea of evolution. This is best done by focusing on the underlying genetic/informational basis of all biological change. This is more realistic and more revealing than focusing on the degree or extent of morphological change. The issue is qualitative, not quantitative. It is not that the train has had insufficient time to go far enough; it is heading in the wrong direction. The limits to variation, observed or unobserved, will come about inevitably because gene pools run out of functionally efficient genetic information (or teleonomic information). A full understanding of this eliminates the image of the desperately backpedalling creationist, redrawing his line of last resistance depending on what new observations are made on the appearance of new varieties.

It also defuses the whole issue of 'micro' and 'macro' evolution. I believe it is better for creationists to avoid these confusing and misleading terms altogether. The word evolution generally conveys the meaning of the sort of change which will ultimately be able to convert a protozoon into a man or a reptile into a bird, and so on. I hope to show that in terms of that sort of meaning, we do not see any evolution at all. By saying we accept micro- but not macroevolution, we risk reinforcing the perception that the issue is about the amount of change, which it is not. It is about the type of change. This is not merely petty semantics, but of real psychological and tactical significance. Of course one can say that microevolution occurs when this word is defined in a certain fashion, but the impact of the word, the meaning it conveys, is such as to make it unwise to persevere with this unnecessary concessional statement. Microevolution, that is, a change, no matter how small, which is unequivocally the right sort of change to ultimately cause real, informationally uphill change, has never been observed.

In any case, leading biologists are themselves now coming to the conclusion that macroevolution is not just microevolution [using their terminology] extended over time. In November 1980 a conference of some of the world's leading evolutionary biologists, billed as 'historic', was held at the Chicago Field Museum of Natural History on the topic of macroevolution. Reporting on the conference in the journal Science, Roger Lewin wrote: 'The central question of the Chicago conference was whether the mechanisms underlying microevolution can be extrapolated to explain the phenomena of macroevolution. At the risk of doing violence to the positions of some of the people at the meeting, the answer can be given as a clear, No.'1 Francisco Ayala (Associate Professor of Genetics, University of California) was quoted as saying: '… but I am now convinced from what the paleontologists say that small changes do not accumulate.'2 The fact that this article reaches essentially the same conclusion in the following pages can thus hardly cause it to be regarded as radical. Nevertheless, the vast majority of even well-educated people still persist in ignorance of this. That is, they believe that Big Change = Small Change x Millions of Years.
The concept of information
The letters on this [printed] page, that is, the matter making up the ink and paper, all obey the laws of physics and chemistry, but these laws are not responsible for the information they carry. Information may depend on matter for its storage, transmission and retrieval, but is not a property of it. The ideas expressed in this article, for instance, originated in a mind and were imposed on the matter. Living things also carry tremendous volumes of information on their biological molecules; again, this information is not a property of their chemistry, not a part of matter and the physical laws per se. It results from the order, from the way in which the letters of the cell's genetic alphabet are arranged. This order has to be imposed on these molecules from outside their own properties. Living things pass this information on from generation to generation. The base sequences of the DNA molecule effectively spell out a genetic blueprint which determines the ultimate properties of the organism. In the final analysis, inherited biological variations are expressions of the variations in this information. Genes can be regarded as 'sentences' of hereditary information written in the DNA 'language'.

Imagine now the first population of living things on the evolutionist's primitive earth. This so-called simple cell would, of course, have a lot of genetic information, but vastly less than the information in only one of its present-day descendant gene pools, e.g., man. The evolutionist proposes that this 'telegram' has given rise to 'encyclopedias' of meaningful, useful genetic sentences. (See later for discussion of meaning and usefulness in a biological sense.) Thus he must account for the origin with time of these new and meaningful sentences. His only ultimate source for these is mutation.3

Going back to the analogy of the printed page, the information in a living creature's genes is copied during reproduction, analogous to the way in which an automatic typewriter reproduces information over and over. A mutation is an accident, a mistake, a typing error. Although most such changes are acknowledged to be harmful or meaningless, evolutionists propose that occasionally one is useful in a particular environmental context, and hence its possessor has a better chance of survival/reproduction. By looking now at the informational basis for other mechanisms of biological variation, it will be seen why these are not the source of new sentences, and therefore why the evolutionist generally relies on mutation of one sort or another in his scheme of things.
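The 'typing error' picture above lends itself to a quick demonstration. The sketch below is our own illustration, not the author's: it copies a sentence repeatedly, introducing one random single-character error per copy, and prints how the message degrades:

# Illustrative sketch (our example): random single-letter 'copying errors'
# accumulating in a meaningful sentence over successive copies.
import random
import string

random.seed(1)
ALPHABET = string.ascii_lowercase + " "

def mutate(text: str) -> str:
    """Copy the text with one random single-character substitution."""
    i = random.randrange(len(text))
    return text[:i] + random.choice(ALPHABET) + text[i + 1:]

copy = "the quick brown fox jumps over the lazy dog"
for generation in range(1, 11):
    copy = mutate(copy)
    print(f"gen {generation:2d}: {copy}")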
1. Mendelian variation
This is the mechanism responsible for most of the new varieties which we see from breeding experiments and from reasonable inferences in nature. Sexual reproduction allows packets of information to be combined in many different ways, but will not produce any new packets or sentences. For example, when the many varieties of dog were bred from a mongrel stock, this was achieved by selecting desired traits in successive generations, such that the genes or 'sentences' for these traits became isolated into certain lines. Although some of these sentences may have been hidden from view in the original stock, they were already present in that population. (We are disregarding mutation for the moment, since such new varieties may arise independently of any new mutations in the gene pool. Some dogs undoubtedly have mutant characteristics.)

This sort of variation can only occur if there is a storehouse of such sentences available to choose from. Natural (or artificial) selection can explain the survival of the fittest, but not the arrival of the fittest, which is the real question. These Mendelian variations tell us nothing about how the genetic information in the present stock arose. Hence, it is not the sort of change required to demonstrate 'upward' evolution; there has been no addition of new and useful sentences. And this is in spite of the fact that it is possible to observe many new varieties in this way, even new species. If you define a species as a freely interbreeding natural unit, it is easy to see how new species could arise without any 'uphill' change, that is, without the addition of any new information coding for any new functional complexity. For example, mutation could introduce a defect which served as a genetic barrier, or simple physical differences, such as the sizes of Great Dane and Chihuahua, could make interbreeding impossible in nature.

It is a little surprising to still see the occasional creationist literature clinging to the concept that no new species have ever been observed. Even if this were true, and there is some suggestion that it has actually been observed, there are instances of clines in field observations which make it virtually certain that two now reproductively isolated species have arisen from the same ancestral gene pool. Yet the very same creationists who seem reluctant to make that sort of admission would be quite happy to agree with the rest of us that the various species within what may be regarded as the dog kind, including perhaps wolves, foxes, jackals, coyotes and the domestic dog, have arisen from a single ancestral kind. So why may this no longer be permitted to be happening under present-day observations? It is not only scientifically unnecessary, but it sets up a straw man in the sense that any definite observation of a new species arising is used as a further lever with which to criticize creationists.

What we see in the process of artificial selection or breeding giving rise to new varieties is a thinning-out of the information in the parent stock, a reduction in the genetic potential for further variation. If you try to breed a Chihuahua from a Great Dane population or vice versa, you will find that your population lacks the necessary sentences. This is because, as each variety was selected out, the genes it carried were not representative of the entire gene pool. What appeared to be a dramatic example of change, with the appearance of apparently new traits, thus turns out, when its genetic basis is understood, to be an overall downward movement in informational terms. The number of sentences carried by each subgroup is reduced, thus making it less likely to survive future environmental changes. Extrapolating that sort of process forward in time does not lead to upwards evolution, but ultimately to extinction, with the appearance of ever-more-informationally-depleted populations.
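This thinning-out is easy to caricature in a few lines of Python. The sketch below is our own simplified illustration (a single hypothetical locus with four alleles; 'selection' keeps only carriers of one allele), not a model from the article:

# Illustrative sketch (our own): selecting a variety out of a mixed
# population reduces the number of distinct alleles it carries.
import random

random.seed(0)

FOUNDER_ALLELES = ["a1", "a2", "a3", "a4"]  # hypothetical locus
population = [tuple(random.sample(FOUNDER_ALLELES, 2)) for _ in range(100)]

def distinct_alleles(pop):
    return {allele for genotype in pop for allele in genotype}

print("founding pool carries:", sorted(distinct_alleles(population)))

# Breed only from carriers of a1, then let the line breed within itself.
line = [g for g in population if "a1" in g]
for _ in range(5):
    line = [(random.choice(random.choice(line)),
             random.choice(random.choice(line))) for _ in range(len(line))]

print("selected line carries:", sorted(distinct_alleles(line)))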
2. Polyploidy
Again, no sentences appear which did not previously exist. This is the multiplication (photocopying) of information already
present.
3. Hybridization
Again, no new sentences. This is the mingling of two sets of information already present.
4. Mutation
Since mutations are basically accidents, it is not surprising that they are observed to be largely harmful, lethal or meaningless to the function or survival of an organism. Random changes in a highly ordered code introduce noise and chaos, not meaning, function and complexity, which tend to be lost. However, it is conceivable that in a complex world, occasionally a destructive change will have a limited usefulness. For example, if we knock out a sentence such that there is a decrease in leg length in sheep (and there is such a mutation), this is useful to stop them jumping over the farmer's fence. A beetle on a lonely, wind-swept island may have a mutation which causes it to lose or corrupt the information coding for wing manufacture; hence its wingless successors will not be so easily blown out to sea and will thus have a selective advantage. Eyeless fish in caves, some cases of antibiotic resistance, the handful of cases of mutations which are quite beneficial: these do not involve the sort of increase in functional complexity which evolutionary theory demands. Nor would one expect this to be possible from a random change.
At this point some will argue that the terms 'useful', 'meaningful', 'functional', etc. are misused. They claim that if some change gives survival value then by definition it has biological meaning and usefulness. But this assumes that living systems do nothing but survive, when in fact they and their subsystems carry out projects and have specific functions. That is, they carry teleonomic information. This is one of the essential differences between living objects and non-living ones (apart from machines). These projects do not always give rise to survival/reproductive advantages; in fact, they may have very little to do with survival, but are carried out very efficiently. The Darwinian assumption is always made, of course, that at some time in the organism's evolutionary history, the project had survival/reproductive value. (For example, the archer-fish with its highly skilled 'hobby' of shooting down bugs which it does not require for survival at the present time.) However, since these are non-testable assumptions, it is legitimate to talk about genetic information in a teleonomic sense, in isolation from any possible survival value.

The gene pools of today carry vast quantities of information coding for the performance of projects and functions which do not exist in the theoretical primeval cell. Hence, in order to support protozoon-to-man evolution, one must be able to point to instances where mutation has added a new sentence or gene coding for a new project or function. This is so regardless of one's assumptions on the survival value of any project or function. We do not know of a single mutation giving such an increase in functional complexity. Probabilistic considerations would seem to preclude this in any case, or at least make it an exceedingly rare event, far too rare to salvage evolution even over the assumed multibillion-year time span.

To illustrate further: the molecule haemoglobin in man carries out its project of transporting and delivering oxygen in red cells in a functionally efficient manner. A gene or sentence exists which codes for the production of haemoglobin. There is a known mutation (actually three separate ones, giving the same result) in which only one letter in the sentence has been accidentally replaced by another. If you inherit this change from both parents, you will be seriously ill with a disease called sickle-cell anaemia and will not survive for very long. Yet evolutionists frequently use this as an example of a beneficial mutation. This is because if you inherit it from only one parent, your red cells will be affected, but not seriously enough to affect your survival; just enough to prevent the malaria parasite from using them as an effective host. Hence, you will be more immune to malaria and better able to survive in malaria-infested areas. This shows us how a functionally efficient haemoglobin molecule became a functionally crippled haemoglobin molecule. The mutation-caused gene for this disease is maintained at high levels in malaria-endemic regions by this incidental phenomenon of heterozygote superiority. Its damaging effect in a proportion of offspring is balanced by the protection it gives against malaria. It is decidedly not an upward change. We have not seen a new, efficient oxygen transport mechanism or its beginnings evolve. We have not seen the haemoglobin transport mechanism improved.
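The single-letter change described here is well characterized: the sickle-cell mutation substitutes one base in the sixth codon of the beta-globin gene, changing glutamate (GAG) to valine (GTG). A minimal Python sketch of that substitution (the two-entry codon table and numbering are simplified for illustration):

# The sickle-cell point mutation: one copied letter changes (A -> T),
# turning codon 6 of beta-globin from GAG (Glu) into GTG (Val).
CODON_TABLE = {"GAG": "Glu", "GTG": "Val"}  # only the two codons needed here

normal_codon6 = "GAG"
sickle_codon6 = normal_codon6[0] + "T" + normal_codon6[2]  # A -> T

print(f"normal: {normal_codon6} -> {CODON_TABLE[normal_codon6]}")  # Glu
print(f"mutant: {sickle_codon6} -> {CODON_TABLE[sickle_codon6]}")  # Val
# One letter changed, one amino acid changed: a functionally crippled
# haemoglobin (HbS), not a new oxygen-transport mechanism.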
One more loose but possibly useful analogy: let us say an undercover agent is engaged in sending a daily reassuring telegram from enemy territory. The text says 'the enemy is not attacking today'. One day an accident occurs in transmission and the word 'not' is lost. This is very likely going to be a harmful change, perhaps even triggering a nuclear war by mistake. But perhaps, in a freak situation, it could turn out to be useful (for example, by testing the fail-safe mechanisms involved). But this does not mean that it is the sort of change required to begin to convert the telegram into an encyclopedia.

The very small number of beneficial mutations actually observed are simply the wrong kind of change for evolution; we do not see the addition of new sentences which carry meaning and information. Again surprisingly, one often reads creationist works which insist that there is no such thing as a beneficial mutation. If benefit is defined purely in survival terms, then we would not expect this to be true in all instances, and in fact it is not; that is, there are indeed beneficial mutations in that sense only.

Information depends on order, and since all of our observations and our understanding of entropy tell us that in a natural, spontaneous, unguided and unprogrammed process order will decrease, the same will be true of information. The physicist and communications engineer should not be surprised at the realisation that biological processes involve no increases in useful or functional (teleonomic) information and complexity. In fact, the net result of any biological process involving transmission of information (i.e., all hereditary variation) is conservation or loss of that genetic information.

This points back directly to the creation of the information, supernaturally, in the beginning. It is completely in harmony with the young-age concept of a world made 'very good' as a balanced, functioning whole, with decay only subsequent to the Fall. This is the reason why there are inevitable limits to variation, and why the creationist does not have to worry about how many new species the future may bring: because there is a limit to the amount of functionally efficient genetic information present, and natural processes such as mutation cannot add to this original storehouse.

Notice that since organisms were created to migrate out from a central point at least once and fill empty ecological niches, as well as having to cope with a decaying and changing environment, they would require considerable variation potential. Without this built-in genetic flexibility, most populations would not be present today. Hence the concept of biological change is in a sense predicted by the young-age model, not something forced upon it only because such change has occurred.
The created kind
The originally created information was not in the form of one 'super species' from which all of today's populations have split off by this thinning-out process, but was created as a number of distinct gene pools. Each group of sexually reproducing organisms had at least two members. Thus:
Each original group began with a built-in amount of genetic information, which is the raw material for virtually all subsequent useful variation.
Each original group was presumably genetically and reproductively isolated from other such groups, yet was able to interbreed within its own group. Hence the original kinds would truly have earned the modern biological definition of species.4
We saw in our dog example that such species can split into two or more distinct subgroups which can then diverge (without adding anything new) and can end up with the characteristics of species themselves, that is, reproductively isolated from each other but freely interbreeding among themselves. The more variability in the original gene pool, the more easily can such new groups arise. However, each splitting reduces the potential for further change, and hence even this is limited. All the descendants of such an original kind, which was once a species, may then end up being classified together in a much higher taxonomic category, e.g., family.
Take a hypothetical created kind A, truly a biological species with perhaps a tremendous genetic potential. See Figure 1. Note that A may even continue as an unchanged group, as may any of the subgroups. Splitting off of daughter populations does not necessarily mean extinction of the parent population. In the case of man, the original group has not diverged sufficiently to produce new species. Hence, D1, D2, D3, E1, E2, E3, P1, P2, Q1, Q2, Q3 and Q4 are all different species, reproductively isolated. But all the functionally efficient genetic information they contain was present in A. (They presumably carry some mutational defects as well.)

Let us assume that the original kind A has become extinct, and also the populations X, B, C, D, E, P and Q (but not D1, D2, etc.). If X carried some of the original information in A which is not represented in B or C, then that information is lost forever. Hence, in spite of the fact that there are many new species which were not originally present, we would have witnessed conservation of most of the information, loss of some, and nothing new added apart from mutations (harmful defects or just meaningless noise in the genetic information). All of which is the wrong sort of informational change if one is trying to demonstrate protozoon-to-man evolution.

Figure 1. The splitting off of daughter populations from an original created kind.

Classifications above species are more or less arbitrary groupings of convenience, based generally on similarities and differences of structure. It is conceivable that today, D1, D2 and D3 could be classified as species belonging to one genus, and E1, E2 and E3 as species in another genus, for example. It could also be that the groups B and C were sufficiently different such that their descendants would today be in different families. We begin to see some of the problems facing a creationist who tries to delineate today's representatives of the created kinds. Creatures may be classified in the same family, for example, on the basis of similarities due to common design, while in fact they belong to two totally different created kinds. This should sound a note of caution against using morphology alone, as well as pointing out the potential folly of saying 'in this case, the baramin is the family; in this case, it is the genus', etc. ('Baramin' is an accepted creationist term for created kind.)

There is no easy solution as yet to the problem of establishing each of these genetic relationships; in fact, we will probably never be able to know them all with certainty. Interbreeding, in vitro fertilization experiments, etc. may suggest membership of the same baramin, but lack of such genetic compatibility does not prove that two groups are not in the same kind. (See earlier discussion: genetic barriers could arise via mutational deterioration.) However, newer insights, enabling us to make direct comparisons between species via DNA sequencing, open up an entirely new research horizon. (Although the question of where the funding for such extensive research will come from in an evolution-dominated society remains enigmatic.)

What then do we say to an evolutionist who understandably presses us for a definition of a created kind, or identification of same today? I suggest the following for consideration: groups of living organisms belong in the same created kind if they have descended from the same ancestral gene pool. To talk of 'fixity of kinds' in relation to any present-day variants thus also becomes redundant; no new kinds can appear, by definition. Besides being a simple and obvious definition, it is axiomatic. Thus it is as unashamedly circular as a rolled-up armadillo, and just as impregnable, deflecting attention, quite properly, to the real issue of genetic change. The question is not 'what is a baramin; is it a species, a family or a genus?' Rather, the question is 'which of today's populations are related to each other by this form of common descent, and are thus of the same created kind?' Notice that this is vastly removed from the evolutionist's notion of common descent. As the creationist looks back in time along a line of descent, he sees an expansion of the gene pool. As the evolutionist does likewise, he sees a contraction.

As with all taxonomic questions, common sense will probably continue to play the greatest part. For instance, it is conceivable (though not necessarily so) that crocodiles and alligators both descended from the same ancestral gene pool which contained all their functionally efficient genes, but not really conceivable that crocodiles, alligators and ostriches had a common ancestral pool which carried the genes for all three!
ARE THERE EVOLUTIONARY PROCESSES THAT LEAD TO INFORMATION INCREASE?
Bears across the world
Bears are some of the most amazing creatures!
by Paula Weston and Carl Wieland
From the thick stomach lining of the panda and the partially webbed paws of the polar bear, to the insect-sucking muzzle of the sloth bear, bears provide a fascinating example of the variety of specialized characteristics existing within one family.

The bear family (Ursidae) consists of eight species, four of which are contained in the Ursus group: the brown bear, American black bear, Asiatic black bear and polar bear. Even within this group (known as a genus) the variation is wide. The brown and American black bears are mainly vegetarians, with appropriate dental features for crushing plant material. However, the first has claws suited to digging, while the other has claws more suitable for climbing. The Asiatic black bear, which also has claws for climbing, is an opportunistic omnivorous feeder (eating meat and plants as available).1

The polar bear, however, has some amazing features which allow it to function perfectly in its cold, wet environment. Much heavier than the above bears, it has two distinct hair types, one long and one short, which effectively is like having two coats. By increasing buoyancy, this helps it to swim, as do its long neck and the partial webbing between its toes. Its fur-covered foot pads provide better traction on the ice. Almost exclusively a meat eater (with teeth to suit such a diet), the polar bear also has a large stomach capacity for sporadic (opportunistic) feeding.

The sun bears and sloth bears (also included in the Ursus group by many scientists) also have as many differences as similarities. The sun bear is omnivorous, with sharp, sickle-like claws suited for tree climbing, while the sloth bear (possessing claws for both digging and tree climbing) has an unusual head and dental structure perfect for eating its main food source, termites. The sloth bear's long muzzle has protrusible lips and nostrils which it can close; these two features allow it to create a 'vacuum tube' to suck up the termites.
The giant panda, like the polar bear, has very specialized features necessary for survival, including powerful jaws and special molars for crushing plants, and an oesophagus (gullet) with a tough, horny lining to protect the bear from splinters when it eats bamboo, its primary source of food. The panda's stomach also has a thick, muscular lining to protect it from bamboo fragments.

While both evolutionists and creationists consider these specialized characteristics to be adaptations to the environment through natural selection, the two camps are poles apart as to how most of this variation came about in the first place. Evolutionists believe that the genetic (hereditary) information (which supplies the recipe to construct such specialized features in the developing embryo) all arose by an accumulation of copying errors (mutations). Any 'good' errors which helped the creature to survive were passed on. In this way, they believe that these design features are all the result of these copying mistakes, accumulated by selection over millions of years.

Creationists, however, while accepting that all of today's bears probably descended from a single bear kind,2 do not believe that the information in the recipes for all these design features arose by chance. No one has ever observed any biological process adding information! A better explanation is that virtually all the necessary information was already there in the genetic make-up of the first bears, a population created with vast genetic potential for variation.

This doesn't mean that all of the features of today's bears would have been on obvious display back then. A simple example would be the way in which mongrel dogs obviously had the potential to develop all the different breeds we see today. Thus, there was no actual poodle to be seen among mongrel dogs hundreds of years ago, but by looking closely at many of them, one would have seen at least some of the individual features found in today's poodles popping up here and there. Similarly, it is unlikely that there were polar bears before the Flood; however, since much of the information for their specialized features was already there, some of these features, in lesser form, would have also been apparent in a few individuals from time to time.

It takes selection (natural or artificial) to concentrate and enhance these features; however, this does not create anything really new, no new design information. If there were no genetic potential in the bear family to grow really thick fur, then no bears would ever have inhabited the Arctic. However, it is likely that not all the features of today's bears would have been coded for directly in the genes of the original bear kind. Mutations, genetic copying mistakes which cause defects, may on rare occasions be helpful, even though they are still defects, corruptions or losses of information. Thus, the polar bear's partly webbed feet may have come from a mutation which prevented the toes from dividing properly during its embryonic development. This defect would give it an advantage in swimming, which would make it easier to survive as a hunter of seals among ice floes. Thus, bears carrying this defect would be more likely to pass it on to their offspring, but only in that environment. However, since mutations are always informationally 'downhill', there is a limit to the ability of this mechanism to cause adaptive features to arise. It will never turn fur into feathers, for example.3

After the Flood, when dramatic climate and environment changes occurred, there was suddenly a large number of empty niches, and as the first pair multiplied, groups of their descendants found new habitats. Only those whose predominant characteristics were suitable for that environment thrived and bred.4 In this way, it would not need millions of years for a new variety (even a new species) to arise. For example, of the first bears forced to exist on bamboo, only those exhibiting the genetic information for a stronger oesophagus and stomach lining would have survived in each generation. Animals without these features would not have lived to produce offspring, thus reducing the gene pool as only the surviving animals interbred. Thus these characteristics became more prominent in that group. This is more reasonable than assuming that this group had to wait for the right mutations to come along, over thousands or millions of years, to provide those vital features.
Notice how such new species will
be more specialised;
be better adapted to a particular habitat; and
have less genetic information than the original group.
(See the box (below) for a simple example of how information is lost as creatures adapt).
It makes a great deal of sense for the original kinds of creatures to have been created as very robust groups, possessing the ability to vary and adapt to changing environments.
Summary
Creationists accept that the design features we see in modern animals are largely the result of original created design,
expressed and fine-tuned to fit the environment by subsequent adaptation, through natural selection in a fallen world of
death and struggle. If, as seems probable from fossil evidence, there were no ice-caps before the Flood, there would have
been no polar bears at that time. The wisdom of the designer is revealed in providing the original organisms with the
potential to adapt so as to be fit for a wide range of habitats and lifestyles.
The bear family, with its incredible variation, provides clear evidence of an intelligent designer.
How information is lost when creatures adapt to their environment
In this example (simplified for illustration), a single gene pair is shown under each bear as coming in two possible forms. One form of the gene (L) carries instructions for long fur, the other (S) for short fur.
In row 1, we start with medium-furred animals (LS) interbreeding. Each of the offspring of these bears can get one of either gene from each parent to make up their two genes.
In row 2, we see that the resultant offspring can have either short (SS), medium (LS) or long (LL) fur. Now imagine the climate cooling drastically (as in the post-Flood ice age). Only those with long fur survive to give rise to the next generation (row 3). So from then on, all the bears will be a new, long-furred variety. Note that:
They are now adapted to their environment.
They are now more specialized than their ancestors in row 1.
This has occurred through natural selection.
There have been no new genes added.
In fact, genes have been lost from the population, i.e. there has been a loss of genetic information, the opposite of what microbe-to-man evolution needs in order to be credible.
Now the population is less able to adapt to future environmental changes: were the climate to become hot, there is no genetic information for short fur, so the bears would probably overheat.
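The box's one-gene example can be worked through mechanically. The following sketch is our own illustration of the same scenario (one locus, alleles L and S, with selection removing every genotype except LL):

# Illustrative sketch of the box's fur-length example: one gene, two
# alleles (L = long fur, S = short fur), then selection by a cold climate.
def punnett(parent1, parent2):
    """All equally likely offspring genotypes from two parents."""
    return sorted(a + b for a in parent1 for b in parent2)

# Row 1: two medium-furred (LS) bears interbreed.
row2 = punnett("LS", "LS")
print("row 2:", row2)  # ['LL', 'LS', 'SL', 'SS']

# Climate cools: only long-furred (LL) bears survive to breed (row 3).
row3 = [g for g in row2 if g == "LL"]
print("row 3:", row3)  # ['LL']

# The S allele is gone; every later generation is LL, long-furred only.
print("alleles remaining:", set("".join(row3)))  # {'L'}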
Polar bears: correcting past blunders
In 1979, this magazine, then called Ex Nihilo, reported (2(2):18) that the hairs of polar bears were transparent and, like fibre-optic cables, 'piped' light energy down to the bear's skin to keep it warm. The information was from a secular source, and of course we had no polar bear hairs to test. Now a recent author who has tested their hair points out that this idea, which has been repeated over and over in secular science journals and reports, is actually a myth.6 The polar bear's hairs are not some unique fibre-optic substance (which, come to think of it, would actually have been tough to explain if all bears came recently from one kind, as most creationists currently think), but are made of ordinary keratin, just like the hair of all other mammals. This emphasizes the fact that all scientific claims are tentative and fallible, no matter who makes them.

Another erroneous statement about polar bears, which has appeared in some anti-Darwinian literature, is that natural selection can have nothing to do with the polar bear's white coat, since the bear has no predators. However, this is not the case, as it is obvious that, of the first bears to reach the snowbound regions, those with lighter coats would have had an advantage. By being camouflaged against the snow, they would have had more chance of being able to sneak up on their prey undetected. Thus, especially where food was scarce, whiter bears would have been more likely to survive and pass on their genes.
Was Dawkins Stumped?
Frog to a Prince critics refuted again
Published: 12 April 2008 (GMT+10)
This week we feature a critical feedback from JW, whose complaint relates to CMI's video clip of Richard Dawkins being stumped by a question about genetic information, which features on our DVD From a Frog to a Prince (see raw footage with subtitles, right). We like to publish critical feedback regularly, and our policy is to choose the best-articulated and most well-reasoned critical feedback available. But regrettably, few of the multitudinous critical emails we receive are cogent and logical, with most tending toward the 'mangled diatribe' end of the literary spectrum. JW's email below is mildly representative in this regard. Although JW's contribution breaks our feedback rules against unsubstantiated allegations, etc., her comments give us the opportunity to confront the prolific accusations made against us in relation to our Richard Dawkins interview, and also to address some common mistaken thinking regarding religion. We have answered previous critics in Skeptics choke on Frog: Was Dawkins caught on the hop? But we still receive a lot of abuse and slander in relation to our Dawkins interview, much of it revolving around incorrect accounts of the sequence of events that occurred during the interview. For a precise analysis and timeline of exactly what took place, see our Dawkins Interview Timeline (below).
J.W. writes:
I've seen the raw footage of the so called 'stumping Richard Hawkins' [sic] fiasco. You are liars! You didn't stump him. He gave a brilliant and true answer!!! Must you really lie to try to satisfy your followers???
Andrew Lamb responds:
I've seen the raw footage of the so called 'stumping Richard Hawkins' fiasco.
An excerpt of CMI's raw footage of our Richard Dawkins interview was posted online in April 2007 on a secular website. The excerpt is that in which Richard Dawkins responds to the question, 'Can you give an example of a genetic mutation or an evolutionary process which can be seen to increase the information in the genome?', a question that he was asked on two separate occasions on the day. Encouragingly, in the year that has passed since then there have been almost half a million viewings of this video, by people from around the world. That's several hundred thousand people who have witnessed for themselves the utter inability of evolution's leading apologist to account for genetic information. A shortened version of this appears in our popular Frog to a Prince DVD, which incidentally now has subtitles in ten languages.
You are liars! You didn't stump him. He gave a brilliant and true answer!!!
We did not lie. The 'Richard Dawkins Stumped' title given to our raw footage clip on that secular website is accurate. Dawkins was stumped, as shown by the fact that he tried to think of an answer, but eventually responded with comments that did not address the question. A few of the things Dawkins said were true, e.g. fish are modern animals. But even then, they don't qualify as true answers, since they had nothing to do with the question asked. Also, much of what he said was not true, e.g. his comment that 'They [fish] are descended from ancestors which were descended from …'. From the true eyewitness account of history we know that humans have always been humans, and did not descend from some other kind of creature, and there are no facts of science to demonstrate that they have, only the fanciful story-telling of evolution theorists.
Must you really lie to try to satisfy your followers???
Your comment here implies that you think there is something wrong with lying. But if evolution were true, it would not be possible to show logically that lying is bad. Rather, 'good' and 'bad' would just be matters of opinion, not matters of objective reality. Evolutionary beliefs provide no objective basis to justify traits like honesty.
Dawkins Interview Timeline
The above timeline is of the Richard Dawkins interview that formed the basis of CMI's video From a Frog to a Prince.
From a Frog to a Prince recording timeline resolves questions
This timeline, based on the main camera sound track of the interview, reconciles the three accounts of the interview, i.e. the published accounts of Richard Dawkins and Gillian Brown, and the unpublished account of Philip Hohnen, given in personal correspondence with CMI during 2001 to 2003. There seemed to be discrepancies between the three accounts, but our timeline is consistent with all three accounts and with the audio tape. The key to resolving the apparent inconsistencies is the realization that Dawkins was questioned about information twice: first by Hohnen (A on timeline), after which the interview was interrupted, with Dawkins upset, and later by Brown (K), from behind the camera, when Dawkins had no ready answer. Dawkins' anger erupted on the first occasion, when he suspected he might be speaking to creationists. This is what Dawkins recalled and gave as an excuse for his silence following the question on the video, which was asked some time later, when Dawkins was already aware that he was speaking with creationists. In his recollection, Professor Dawkins conflated these two events.

After Philip Hohnen had been on a tour of the house with Mrs Dawkins (Lalla Ward) (section D on timeline), and had then negotiated with Richard Dawkins (E), the latter agreed to make a statement for recording. In his statement (G, J) Dawkins candidly admitted that evolution had to explain the information in living things, and he claimed that mutations, aided by natural selection, created all the information. These very pro-evolution statements are on the video, just as Dawkins had wanted. After these confident assertions, Gillian Brown, from her position behind the camera, slipped in the question asking for an actual example of an evolutionary process that can be observed to increase the information in the genome (K). It would have been churlish of Dawkins not to try to answer this, in the light of the confident spiel he had just given. His look (on the video) of puzzlement, even consternation, had nothing to do with discovering the nature of the interview (this discovery happened much earlier). The fact that he failed to answer the question, even given time to think, should have been sufficient for any fair-minded observer to see that the silence (L) following the asking of the question revealed a lack of an answer, not a rising tide of anger, etc., as claimed by Dawkins.

There was a period (D to E on the timeline) which was perceived differently by the three participants, in part because they were actually doing different things at the time (e.g. Philip Hohnen was being given a guided tour of the house by Mrs Dawkins). When Hohnen returned from the tour, he did not see any evidence of a rapprochement between Dawkins and Brown. Hohnen then negotiated with Dawkins for a continuation of the videoing, with Dawkins agreeing to give a statement.

This timeline harmonizes the recollections of all three persons and shows that the video producer did not manufacture Dawkins' silence, nor was Dawkins' silence due to a rising tide of anger over discovering that he was being interviewed by creationists (this had happened earlier). Hohnen recalls that they parted in good humour. The segment where Dawkins fails to answer the information question is fair (in fact the period of silent puzzlement was considerably shortened on the Frog to a Prince video). It may be argued that Brown pushed the boundaries by asking the question at all when she had agreed for Dawkins to make a statement. However, it was a question begging to be asked after Dawkins' confident speech about the adequacy of natural processes in creating new information.

Philip Hohnen has checked the timeline, and vouches for its accuracy.
Explanatory notes to timeline
Creation Ministries International has an audio tape (we may also have the actual video recording, but it is not currently locatable) of the latter part of the interview, starting from the point at which Dawkins expressed his suspicions (B). This audio tape comprises the sound track from the main video camera. (Another video camera was also running during much of the interview.) A copy of this same audio tape was sent to the anticreationist Glenn Morton, who had previously been sceptical of our account. After hearing this copy, Morton declared, 'I will state categorically that the audio tape of the interview 100% supports Gillian Brown's contention that Dawkins couldn't answer the question.'

The green (lightly shaded) segments of the timeline above represent periods covered on the audio tape. The red (darkly shaded) segments represent periods not covered on the audio tape. The two occurrences of double slashes (//) in the timeline's text boxes represent breaks in recording.

Dawkins' oft-discussed 11-second pause is represented in this timeline by segments L and M, and is referred to on this chart by the term 'silence', rather than the usual term 'pause', in order to differentiate between this and the recording pauses. It is 11 seconds from the end of GB's question until RD's audible intake of breath, and 19 seconds in total from the end of GB's question until the pause in recording (O). That is, L, M and N together comprise 19 seconds. There is approximately seven seconds of silence between RD's audible intake of breath and his request to stop.

In period O on this chart, i.e. after GB asked the information question and RD requested a stop, there was no speaking by anybody until GB said 'Now recording' and RD began speaking again with 'OK. There's a popular misunderstanding ...'.
The adaptation of bacteria to feeding on nylon waste
by Don Batten
In 1975, Japanese scientists discovered bacteria that could live on the waste products of nylon manufacture as their only source of carbon and nitrogen.1 Two species, Flavobacterium sp. K172 and Pseudomonas sp. NK87, were identified that degrade nylon compounds. Much research has flowed from this discovery to elucidate the mechanism for the apparently novel ability of these bacteria.2 Three enzymes are involved in Flavobacterium K172: F-EI, F-EII and F-EIII, and two in Pseudomonas NK87: P-EI and P-EII. None of these has been found to have any catalytic activity towards naturally occurring amide compounds, suggesting that the enzymes are completely new, not just modified existing enzymes. Indeed, no homology has been found with known enzymes. The genes for these enzymes are located on plasmids:3 on plasmid pOAD2 in Flavobacterium, and on two plasmids, pNAD2 and pNAD6, in Pseudomonas.

Apologists for materialism latched onto these findings as an example of evolution of new information by random mutations and natural selection; for example, Thwaites in 1985.4 Thwaites' claims have been repeated by many since, without updating or critical evaluation.
Is the evidence consistent with random mutations generating the new genes?
Thwaites claimed that the new enzyme arose through a frame-shift mutation. He based this on a research paper published the previous year where this was suggested.5 If this were the case, the production of an enzyme would indeed be a fortuitous result, attributable to pure chance. However, there are good reasons to doubt the claim that this is an example of random mutations and natural selection generating new enzymes, quite aside from the extreme improbability of such coming about by chance.6

Evidence against the evolutionary explanation includes the following points:

1. There are five transposable elements on the pOAD2 plasmid. When activated, transposase enzymes coded therein cause genetic recombination. Externally imposed stress such as high temperature, exposure to a poison, or starvation can activate transposases. The presence of the transposases in such numbers on the plasmid suggests that the plasmid is designed to adapt when the bacterium is under stress.

2. All five transposable elements are identical, with 764 base pairs (bp) each. This comprises over eight percent of the plasmid. How could random mutations produce three new catalytic/degradative genes (coding for EI, EII and EIII) without at least some changes being made to the transposable elements? Negoro speculated that the transposable elements must have been a late addition to the plasmids, to not have changed. But there is no evidence for this, other than the circular reasoning that supposedly random mutations generated the three enzymes and so would have changed the transposase genes if they had been in the plasmid all along. Furthermore, the adaptation to nylon digestion does not take very long (see point 5 below), so the addition of the transposable elements afterwards cannot be seriously entertained.

3. All three types of nylon-degrading genes appear on plasmids and only on plasmids. None appears on the main bacterial chromosomes of either Flavobacterium or Pseudomonas. This does not look like some random origin of these genes; the chance of this happening is low. If the genome of Flavobacterium is about two million bp,7 and the pOAD2 plasmid comprises 45,519 bp, and if there were, say, 5 pOAD2 plasmids per cell (~10% of the total chromosomal DNA), then the chance of getting all three of the genes on the pOAD2 plasmid would be about 0.0015. If we add the probability of the nylon-degrading genes of Pseudomonas also only being on plasmids, the probability falls to 2.3 × 10⁻⁶. (A rough reproduction of this arithmetic is sketched at the end of this list.) If the enzymes developed in the independent laboratory-controlled adaptation experiments (see point 5 below) also resulted in enzyme activity on plasmids (almost certainly, but not yet determined), then attributing the development of the adaptive enzymes purely to chance mutations becomes even more implausible.

4. The antisense DNA strand of the four nylon genes investigated in Flavobacterium and Pseudomonas lacks any stop codons.8 This is most remarkable in a total of 1,535 bases. The probability of this happening by chance in all four antisense sequences is about 1 in 10¹². Furthermore, the EII gene in Pseudomonas is clearly not phylogenetically related to the EII genes of Flavobacterium, so the lack of stop codons in the antisense strands of all the genes cannot be due to any commonality in the genes themselves (or in their ancestry). Also, the wild-type pOAD2 plasmid is not necessary for the normal growth of Flavobacterium, so functionality in the wild-type parent DNA sequences would appear not to be a factor in keeping the reading frames open in the genes themselves, let alone the antisense strands.
Some statements by Yomo et al. express their consternation:

'These results imply that there may be some unknown mechanism behind the evolution of these genes for nylon oligomer-degrading enzymes.'

'The presence of a long NSF (non-stop frame) in the antisense strand seems to be a rare case, but it may be due to the unusual characteristics of the genes or plasmids for nylon oligomer degradation.'

'Accordingly, the actual existence of these NSFs leads us to speculate that some special mechanism exists in the regions of these genes.'
It looks like recombination of codons (base pair triplets), not single base pairs, has occurred between the start and stop codons for each sequence. This would be about the simplest way that the antisense strand could be protected from stop codon generation. The mechanism for such a recombination is unknown, but it is highly likely that the transposase genes are involved. Interestingly, Yomo et al. also show that it is highly unlikely that any of these genes arose through a frame-shift mutation, because such mutations (forward or reverse) would have generated lots of stop codons. This nullifies the claim of Thwaites that a functional gene arose from a purely random process (an 'accident').

5. The Japanese researchers demonstrated that nylon-degrading ability can be obtained de novo in laboratory cultures of Pseudomonas aeruginosa [strain] POA, which initially had no enzymes capable of degrading nylon oligomers.9 This was achieved in a mere nine days! The rapidity of this adaptation suggests a special mechanism for such adaptation, not something as haphazard as random mutations and selection.

6. The researchers have not been able to ascertain any putative ancestral gene to the nylon-degrading genes. They represent a new gene family. This seems to rule out gene duplications as a source of the raw material for the new genes.8

7. P. aeruginosa is renowned for its ability to adapt to unusual food sources, such as toluene, naphthalene, camphor, salicylates and alkanes. These abilities reside on plasmids known as TOL, NAH, CAM, SAL and OCT respectively.2 Significantly, they do not reside on the chromosome (many examples of antibiotic resistance also reside on plasmids). The chromosome of P. aeruginosa has 6.3 million base pairs, which makes it one of the largest bacterial genomes sequenced. Being a large genome means that only a relatively low mutation rate can be tolerated within the actual chromosome, otherwise error catastrophe would result. There is no way that normal mutations in the chromosome could generate a new enzyme in nine days, and hypermutation of the chromosome itself would result in non-viable bacteria. Plasmids seem to be adaptive elements designed to make bacteria capable of adaptation to new situations while maintaining the integrity of the main chromosome.
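The two probability figures quoted in points 3 and 4 can be roughly reproduced with back-of-envelope arithmetic. The sketch below is a minimal reconstruction, not the published derivation: it assumes each gene lands at a random position in the cell's DNA, estimates the plasmid fraction from the figures quoted above, and models the antisense strand with uniformly random bases (61 of the 64 codons are not stop codons).

```python
# --- Point 3: all three Flavobacterium genes on the pOAD2 plasmid ---
chromosome_bp = 2_000_000      # approximate Flavobacterium genome size
plasmid_bp = 45_519            # size of the pOAD2 plasmid
copies = 5                     # assumed plasmid copies per cell (~10%)

plasmid_fraction = (copies * plasmid_bp) / chromosome_bp
print(f"P(all 3 genes on pOAD2) ~ {plasmid_fraction ** 3:.4f}")   # ~0.0015

# --- Point 4: no stop codons in 1,535 bases of antisense strand ---
codons = 1_535 // 3
p_no_stops = (61 / 64) ** codons
print(f"P(no stop codons in {codons} codons) ~ {p_no_stops:.1e}")
# ~1e-11 on this crude uniform-base model; the article's 'about
# 1 in 10^12' presumably reflects the genes' actual base composition.
```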
Stasis in bacteria
P. aeruginosa was first named by Schroeter in 1872.10 It still has the same features that identify it as such. So, despite being so ubiquitous, so prolific and so rapidly adaptable, this bacterium has not evolved into a different type of bacterium. Note that the number of bacterial generations possible in over 130 years is huge: equivalent to tens of millions of years of human generations, encompassing the origin of the putative common ancestor of ape and man, according to the evolutionary story, indeed perhaps even all primates. And yet the bacterium shows no evidence of directional change; stasis rules, not progressive evolution. This alone should cast doubt on the evolutionary paradigm. Flavobacterium was first named in 1889, and it likewise still has the same characteristics as originally described.

It seems clear that plasmids are designed features of bacteria that enable adaptation to new food sources or the degradation of toxins. The details of just how they do this remain to be elucidated. The results so far clearly suggest that these adaptations did not come about by chance mutations, but by some designed mechanism. This mechanism might be analogous to the way that vertebrates rapidly generate novel effective antibodies with hypermutation in B-cell maturation, which does not lend credibility to the grand scheme of neo-Darwinian evolution.11 Further research will, I expect, show that there is a sophisticated, irreducibly complex, molecular system involved in plasmid-based adaptation; the evidence strongly suggests that such a system exists. This system will once again, as the black box becomes illuminated, speak of intelligent creation, not chance. Understanding this adaptation system could well lead to a breakthrough in disease control, because specific inhibitors of the adaptation machinery could protect antibiotics from the development of plasmid-based resistance in the target pathogenic microbes.
New plant colours: is this new information?
CMI scientist answers a skeptic
11 July 2000
One skeptic believes that he has found an example of new information arising by mutations and natural selection. Could he
be correct?
Question/statements from skeptic
Since I have some background in genetics and plant breeding, I can tell you that the entire field of plant breeding is based on new information arising from random mutations. New traits do appear, at the molecular and morphological level: new proteins, new pigments, etc. These are novelties.

Two parents with blue eyes will generally produce children with blue eyes, and likewise two plants with white flowers will generally produce new plants with white flowers, but sometimes a seedling with red or purple flowers turns up, not because a recessive allele has been revealed, but because a mutation has altered an existing pigment or biochemical pathway to produce something entirely new, that has never existed before. This is NEW INFORMATION.

As an example, there is nothing like an ear of corn in any other species of grass. It seems to be entirely unique in the plant kingdom. And yet there are three or four species of grass, very similar to corn in their overall growth, but with typical grass-like reproductive organs. The funny thing is, they will breed with corn to produce fully fertile offspring. It is clear that a combination of mutation and selection has produced in corn an unusual and entirely novel structure from a very typical grass: in other words, NEW INFORMATION.
Response by Don Batten, Ph.D.
The question comes from someone who does not understand the concept of information. The appearance of a new trait does not have to involve the addition of information via the DNA coding. In fact, as bioinformatics expert Dr Lee Spetner has demonstrated (in his book Not by Chance, Judaica Press), such an addition is so unlikely that it could never be the basis for the increased information needed for molecules-to-man evolution. Information content is measured not by the number of traits, but by what is called the specified complexity of a base sequence or protein amino acid sequence. A mutation, being a random change in highly specified information contained in the nucleic acid base sequence, could almost never do anything but scramble the information; that is, reduce the information.

Now sometimes such a loss of information results in a new trait: for example, purple or red flowers where there were only blue ones before. This would have to be studied at the DNA base sequence level (or amino acid sequence in the enzyme producing the pigment, or the pigment itself) to show this. For example, a blue pigment could be changed into a red or purple pigment by loss of a side-chain from the basic pigment molecule. Such a change would involve a loss of specified complexity and therefore a loss of information. Even an informationally neutral change could be responsible; this is not to be confused with Kimura's 'neutral mutation', which has nothing to do with the concept of information, only the effect on survival. Even a change of one amino acid in a protein, not altering information content, can alter energy levels in such a way as to change the visible absorption spectrum, e.g. by reducing the number of consecutive conjugated bonds. And a small change in pH can have a large effect on color (this effect was overlooked by a group of molecular biologists who managed to get the gene for the blue pigment in hydrangeas into a rose; the rose was not blue, although the pigment was manufactured, because the cell pH was not the same as a hydrangea's!).

Of the many hundreds of antibiotic, herbicide and insecticide resistance mechanisms studied at a biochemical level, none involves addition of specified complexity to the DNA. Although some are new traits due to mutations, all involve loss of information. An example is the loss of control over the production of an enzyme that happens to break down penicillin in Staphylococcus aureus, resulting in the production of greatly increased amounts of the enzyme and thus conferring resistance to penicillin. Another mode of antibiotic resistance due to mutation is decreased effectiveness of a membrane transport protein, so that the antibiotic is no longer taken up by the cell (but the normal function of the transporter is also impaired, and the bacterium is less fit to survive in the wild). However, much antibiotic resistance seems to be acquired by the transfer of plasmids from other species of bacteria via conjugation, which of course does not explain the ultimate origin of the information.

What about the corn story? The questioner is probably correct about the species of grass and the origin of corn. I have no problem with that. Creationists would say that the species that interbreed with corn (maize) are of the same created kind (see Ligers and wholphins? What next?, Q&A: Speciation). However, until the biochemical/genetic basis of the difference between maize and its wild relatives is determined, it cannot be said that the maize inflorescence is due to new information. Loss of information in some base sequences responsible for early steps in inflorescence development could easily account for such seemingly large differences.

It must be noted (again) that creationists do not say that mutations are always harmful, just that they are almost invariably a loss of information (i.e. specified complexity). Sometimes a loss of information can be beneficial, but it is still a loss of information. For example, loss of function of wings in the flightless cormorant of the Galápagos Islands, which can now dive better than its flying cousins, or flightless beetles on a windswept island that are better off because they are less likely to be blown into the sea (see Beetle bloopers).

Evolution needs swags of new information, if a microbe really did change into a man over several billion years. The additional new information would take nearly a thousand books of 500 pages each to print the sequence. Random changes cannot account for a page, or even a sentence, of this, let alone all of it. The evolutionist has an incredible faith!
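The 'nearly a thousand books' figure is easy to sanity-check. The one-line sketch below assumes a dense page holds roughly 6,000 letters; that page capacity is my assumption for illustration, not a figure from the article.

```python
# Rough check of the 'nearly a thousand books' claim: the human genome
# has ~3 billion DNA letters; assume a 500-page book with ~6,000
# letters per page (an assumed page capacity, for illustration only).
genome_letters = 3_000_000_000
letters_per_book = 500 * 6_000
print(f"books needed: {genome_letters / letters_per_book:,.0f}")  # ~1,000
```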
Further reading: In the Beginning Was Information by Dr Werner Gitt (an information scientist in Germany). The Mystery of Life's Origin by Thaxton, Bradley and Olsen; these are thermodynamics experts, and they deal with the origin of information from a thermodynamics point of view, showing the impossibility of natural processes creating the information in living things. See also Q&A: Information Theory.
Is antibiotic resistance really due to increase in information?
22 October 2001; reposted and updated 11 November 2006
In 2001, the responses by Dr Jonathan Sarfati to the PBS Evolution propaganda series induced mainly favorable responses [and were later incorporated into the book Refuting Evolution 2].

This feedback, from Mikko Ilmari N. of Finland, criticises the responses to the PBS series on ostensibly scientific grounds. He accuses CMI of bias, falsehood, and misinformation, but fails to back up his points. The only issue he does attempt to back up is a claim of an information increase that caused increased resistance to antibiotics. But this once again fails to understand the key relationship between information and specified complexity. Once again, supposed evidence for evolution turns out to be better explained by the Creation/Fall model. His letter is printed with point-by-point responses by Dr Jonathan Sarfati (the author of the PBS responses) interspersed as per normal email fashion. MIN's letter includes quotes from the PBS rebuttal, which are double-indented. Ellipses (…) at the end of one of the paragraphs signal that a mid-sentence comment follows, not an omission.
Just noticed that your ministries have, by assistance of Australian creationist Jonathan Sarfati, responded to the PBS-TV series Evolution.

The 'Australian creationist Jonathan Sarfati' was not just offering assistance; it was part of his (my) job, since I'm part of Creation Ministries International.
The responses have got multiple omissions and scientific errors,
Really? Let's see if that claim stands up to scrutiny.
No special problems. I just noticed that [CMI] has provided misinformation in its PBS rebuttals, and I suspect that the reason for doing so is [CMI's] fundamentalist-Christian bias, strictly requiring separately created species, a young earth and many other features with which [CMI's] personnel sure is also familiar with.

As covered above, CMI staff are not the only ones with biases. It's just a convenient excuse to avoid having to actually refute the scientific evidence for a young Earth. Note also, as shown in Q&A: Speciation, we do not believe that every one of today's species was separately created. Rather, we predict rapid speciation within a created kind, not requiring any new genetic information but instead recombinations of already existing information and information-losing mutations.
I'll take Sarfati's writings on poison newts as an example: (from [this page], actually)
Poison newt
The program moves to Oregon, where there were mysterious deaths of campers, with newts found in their sleeping bags. It
turns out that these Rough-skinned Newts (Taricha granulosa) secrete a deadly toxin from their skin glands, so powerful that
even a pinhead can kill an adult human. They are the deadliest salamanders on Earth. So scientists investigated why this
newt should have such a deadly toxin.
Up to this point, still OK.
They theorized that a predator was driving this evolution, and they found that the Common Garter Snake (Thamnophis sirtalis) was the newt's only predator. Most snakes will be killed, but the Common Garter Snake just loses muscle control for a few hours, which could of course have serious consequences.
Here the Evolution-program makes a good point how scientific research of evolution can be done, with satisfying results,
indeed.
And as I pointed out, the assumption of goo-to-you evolution was unnecessary and in fact irrelevant; this is perfectly well explained by the Creation/Fall model. Note that the creationist Edward Blyth talked about natural selection 25 years before Darwin wrote Origin.
But the newts were also driving the evolution of the snakes: they also had various degrees of resistance to the newt toxin.
Are their conclusions correct? Yes, they are probably correct that the predators and prey are driving each other's changes, and that these are the result of mutations and natural selection. Although this might surprise the ill-informed anti-creationist, it shouldn't be so surprising to anyone who understands the young age model.
Why should the involvement of mutations and natural selection surprise so-called 'ill-informed anti-creationists'?

Because they present a caricature of creationism that pretends that we believe in fixity of species.
So is this proof of particles-to-people evolution? Not at all. There is no proof that the changes increase genetic information.
In fact, the reverse seems to be true.
[CMI's] text slips into obvious falsehoods. The main point of the Evolution program here is that (a) other species form a large part of the environment of one species,

Since when did we deny this? The problem is, this has nothing to do with particles-to-people evolution. So where is the 'obvious falsehood'?

(b) mutations, recombinations and natural selection is the clue how the species absorbs information from it's environment
thru generations following each other.
This is gobbledygook. There is no information from the environment to 'absorb'! One wonders what meaning you assign to the term 'information'. I have a fair idea where you picked up this nonsense, and the source of this misinformation is refuted in detail by the article The Problem of Information for the Theory of Evolution <www.trueorigin.org/dawkinfo.asp>.
(As a side point, it is very important to notice that this increase of information in a species does not conflict the Second Law
of Thermodynamics, as life on planet Earth is energetically open system.)
As a Ph.D. physical chemist, needing no instruction in thermodynamics, I'm always amused by anticreationists, mainly biologists and geologists, who think they know something about this topic when they obviously don't. As I point out in The Second Law of Thermodynamics: Answers to Critics, an open system is necessary but not sufficient for an increase in information content.
Since the PBS episode provides no explanation of the poison's activity, it's fair to propose a possible scenario (it would be hypocritical to object, since evolutionists often produce far more hypothetical 'just-so' stories): suppose the poison normally reacts with a particular neurotransmitter to produce something that halts all nerve impulses, resulting in death. But if the snake had a mutation reducing production of this neurotransmitter, the poison would have fewer targets to act upon. Another possibility is a mutation altering the neurotransmitter's precise structure so that its shape no longer matches the protein.

Either way, the poison would be less effective. But both reduced production of the neurotransmitter and a less precise shape would slow nerve impulses, meaning that muscle movement is slower.
Rather than producing these just-so stories and trying to dig up an excuse to do so from evolutionists,

This was perfectly legitimate given the available information. It is far more legitimate than the evolutionary just-so stories that you evidently tolerate, because my explanation of an information-losing mutation is based on the observed fact that the more resistant snakes suffer from a disability.
could [CMI] PLEASE make a note that protein structures can be examined to see if these more-effective poisons actually
show loss of information or less-complicated structures than less-effective poisons.
Indeed they can be, and in every case they have shown a reduced specificity, which may be beneficial. So please provide actual evidence; patronising assertions are unimpressive.
Indeed, I have come across a similar kind of (creationist) claim to yours about antibiotic resistance earlier. Finnish creationist Dr Pekka Reinikainen claimed that bacteria with better antibiotic resistance always show having less information or a simpler structure.
I haven't heard of Dr Reinikainen, but from the limited amount of information you provide, he seems to know what he's talking about.
An article from a popular science magazine, concerning this antibiotic resistance, showed that Reinikainen had it wrong. Increase of information and new structural complexity has been observed in not just some, but in fact many, cases.
As will be shown, you have failed to demonstrate this in even one case! You would benefit by reading Dr Spetner's book (Not by Chance!) and his more detailed explanations of information in terms of specified complexity (Part 1 <www.trueorigin.org/spetner1.asp> and Part 2 <www.trueorigin.org/spetner2.asp>) on the True Origins site, also hyperlinked on Q&A: Information.
The original magazine (which is not at hand now) was a Finnish popular scientific magazine, Tiede 2000, i.e. 'Science 2000'. It had some non-technical examples of antibiotic resistance which, however, showed clearly that in many cases we cannot honestly call the evolution of antibiotic resistance 'a loss of information'. Instead, I have put (as an attachment) an article by Petrosino, Cantu and Palzkill, titled 'β-Lactamases: protein evolution in real time'.
This was Trends in Microbiology 6(8):323–327, August 1998. Some bacteria produce β-lactamases to destroy β-lactam antibiotics, which include penicillin.
You may judge it and check if it's always about 'loss of information', as frequently claimed by some creationists. (Or maybe you accept increased information by evolution in this case without any further problems; your original article was about poisonous newts, indeed.)
Right, I read this paper as you requested. But despite its title, it does not support your points, but ours! For example, one mechanism featured in the article was acquisition of genes from other bacteria, i.e. the genes already existed; hopefully it should be obvious that this is irrelevant to the origin of these genes in the first place, which is what goo-to-you evolution is supposed to explain! The other clue is the statement 'many of the mutations located around the active site pocket result in increased catalytic activity for hydrolysis of extended-spectrum substrates. Mutations far from the active site also increase extended spectrum catalysis.' This provided an advantage to the bacteria containing these mutations, because they could destroy more types of antibiotics. But here was yet another example of an information loss conferring an advantage.
To understand this properly, it's necessary to realize that enzymes are usually tuned very precisely to only one type of molecule (the substrate), and this fine-tuning is necessary for living cells to function. Mutations reduce specificity and hence would reduce the effectiveness of the enzyme's primary function, but would enable it to degrade other substrates too. But this loss of specificity means loss of information content. Dr Spetner analyzes this with rigorous mathematics using standard definitions of information (see the sketch at the end of this article). He presents the two extremes: in the first, an enzyme has activity for only one substrate out of n possible ones and zero for the others; here the information gain is log₂n. The second is where there is no discrimination between any of the substrates; here the information gain is zero. Real enzymes are somewhere in between, and Dr Spetner shows how to calculate their information. As explained above, living organisms require enzymes to do a specific job, so their information content is very close to the maximum in case 1. Quite close to the other extreme are ordinary acids or alkalis, which hydrolyse many compounds. These have wonderful extended-spectrum catalytic activity, but are not specific, so have low information content, and so would be useless for the precise control required for biological reactions.

[Figure: comparison of ribitol, xylitol and arabitol activities of wild and mutant ribitol dehydrogenase (from Lee Spetner, True Origins website).]

All observed mutations reduce the specificity and trend towards the second extreme case. The trend described in the β-lactamases is just the same as that described in ribitol dehydrogenase, the enzyme some bacteria use to metabolize ribitol, a derivative of a type of sugar (see figure). That is, the mutant acquired the new ability to metabolize xylitol, so it was thought to be an example of new information arising, and that it could trend towards a highly specific xylitol dehydrogenase. But on further inspection, it turned out not only to reduce its ability to perform its original specific function of metabolizing ribitol, but also to increase the ability to synthesize lots of other things, including arabitol. The trend is towards loss of specificity and producing an ordinary broad-spectrum catalyst, i.e. from case 1 to case 2. A graph of wild v. mutant β-lactamase activity on various antibiotics would be essentially the same as this graph of wild v. mutant ribitol dehydrogenase activity on the different types of sugars.

In conclusion, there is nothing to support any information gain at all. But evolution posits that the information content of the simplest living organisms, the mycoplasma with 580,000 letters (482 genes), was increased to, say, the 3 billion letters equivalent in man. If this were so, we should be able to observe plenty of examples of information gain without intelligent input. But we have yet to observe even one, including the example you cited.
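Dr Spetner's measure, described above, can be illustrated with a few lines of code. The sketch below is a minimal illustration, not Spetner's published calculation: it assumes his scheme can be summarised as normalising an enzyme's activities over n candidate substrates into a probability-like profile and taking the information as log₂n minus the Shannon entropy of that profile, so that a perfectly specific enzyme scores log₂n bits and a completely indiscriminate one scores zero. The activity numbers are invented for illustration, not measured values.

```python
import math

def enzyme_information(activities):
    """Bits of specificity: log2(n) minus the Shannon entropy of the
    normalised activity profile over the n candidate substrates."""
    n = len(activities)
    total = sum(activities)
    probs = [a / total for a in activities if a > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return math.log2(n) - entropy

# Hypothetical activity profiles over (ribitol, xylitol, arabitol);
# the numbers are made up for illustration, not measured values.
wild_type = [100, 0, 0]        # fully specific: log2(3) ~ 1.58 bits
mutant = [40, 35, 25]          # broadened specificity: close to 0 bits
indiscriminate = [1, 1, 1]     # Spetner's case 2: exactly 0 bits

for name, profile in [("wild type", wild_type), ("mutant", mutant),
                      ("indiscriminate", indiscriminate)]:
    print(f"{name:>14}: {enzyme_information(profile):.2f} bits")
```

On this toy measure the broadened mutant retains almost none of the wild type's specificity, which is the point being made about the ribitol dehydrogenase and β-lactamase mutants: the 'new' abilities come at the cost of the information that made the enzyme specific.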
WHAT IS THE DIFFERENCE BETWEEN ORDER AND COMPLEXITY?
The treasures of the snow
Do pretty crystals prove that organization can arise spontaneously?
by Martin Tampier
Snow crystals are some of the most beautiful shapes
that nature has to offer, and no two flakes are alike.
Many evolutionists have tried to claim the order of a
crystal forming due to atomic structures as proof for
something coming out of nothing, due simply to
natural laws. But closer examination of this argument
shows it does not hold up to scientific scrutiny.
Modern snowflake research
Several scientists are trying to grow their own crystals
to understand and direct their development.
Applications of this research reach way beyond
meteorology, with the aim of controlling the growth of other crystals, such as silicon structures, for the semiconductor
industry.
So why do snow crystals form this shape? Does it require special design? No, their shape is due to the properties of their building blocks, the water molecules (H₂O). These are bent and polar (i.e. with positively and negatively charged ends). When they come together in solid form, they tend to form the lowest-energy structure they can,1 which is crystals with hexagonal (six-fold) symmetry.2 By contrast, carbon dioxide (CO₂), a linear and more symmetrical molecule, forms cubic crystals in its solid form (dry ice).

We now know that not only temperature but also humidity influences crystal formation and shape. The beautiful six-legged star-like crystals grow in air warmer than −3°C. Between −3°C and −10°C, snow falls as little prisms. Between −10°C and −22°C, it is little stars again, and below that, prisms once more. Nevertheless, scientists still cannot tell exactly why snow crystal shapes change so much with temperature. These shapes depend on how water vapour molecules are incorporated into the growing ice crystal, and the physical processes governing crystal growth are complex and not well understood yet.3
Snowflakes: proof of evolution?4
Photo by Martin Tampier
Sometimes evolutionists claim that snowflakes show that order can arise from disorder, and more complex structures from simple ones, based purely on the inherent physical properties of matter. Therefore, the reasoning goes, life could have arisen from simple molecules that organize themselves in a way that ultimately leads to more complex structures, and eventually the first living cell.5 But crystals are nothing like a living cell. Formed by the withdrawal of heat from water, they are dead structures that contain no more information than is in their component parts, the water molecules. Life forms, on the other hand, came into existence, evolutionists believe, through the addition of heat energy to some postulated primordial soup. Not only are these processes very different, but life requires the emergence of new information (a code) in order to take over the functions of organization and reproduction of a cell. There is therefore no analogy between snow crystals and the far, far greater complexity of living organisms.

Fun stuff
An excellent snowflake website is www.snowcrystals.com. You can download and use many snowflake photos to create your own calendar, greeting card or other present. Apart from beautiful photos, the site will tell you just about everything you ever wanted to know about snowflakes.

More importantly, the organization in proteins and DNA is not caused by the properties of the constituent amino acids and nucleotides themselves, any more than forces between ink molecules make them join up into letters and words. Michael Polanyi (1891–1976), a former chairman of physical chemistry at the University of Manchester (UK) who turned to philosophy, confirmed this:

'As the arrangement of a printed page is extraneous to the chemistry of the printed page, so is the base sequence in a DNA molecule extraneous to the chemical forces at work in the DNA molecule. It is this physical indeterminacy of the sequence that produces the improbability of occurrence of any particular sequence and thereby enables it to have a meaning – a meaning that has a mathematically determinate information content.'6

Snow crystals are not direct evidence for creation, either. Nevertheless, the philosophical argument can be made that a universe without a designer cannot logically be expected to create such order out of disorder.7 So when we observe order and design in the universe, as exemplified by the six-cornered snowflake, doesn't this demand a designer who supplies this order and design?8 Of course, the physical properties of water are known to be necessary preconditions for life to exist on Earth, which testifies to a designer who conceived the universe and its physical laws as conducive to life.9 For example, snow forms an insulating layer on the ground that protects plants and animals below it from the much harsher temperatures above. But whereas this could have been achieved with very simple shapes, such as round or square disks, the lavish beauty and variety in snow crystals shows the designer's loving creativity in making snow not only very useful, but also wonderful to look at! As even evolutionists admit, 'One could almost convince oneself that snowflakes constitute a demonstration of supernatural power.'5
No two alike?
Actually, smaller snowflakes that take the shape of hexagonal prisms look pretty much the same. On the other hand, larger, star-shaped crystals are all different. To understand why, think of how many different ways 15 books can be arranged on a bookshelf. You have 15 choices for the first book, 14 for the second, 13 for the third, etc. The total number of possibilities is thus 15 × 14 × 13 × … × 1 (i.e. 15!), or over a trillion ways to arrange those books. Crystals can easily have 100 or more features that can be recombined in different ways, leading to at least a staggering 10¹⁵⁸ different possibilities. This is 10⁷⁰ times the number of atoms in the entire universe!1
Adapted from www.its.caltech.edu/~atomic/snowcrystals/alike/alike.htm.
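The bookshelf arithmetic in the box is easy to verify; the sketch below simply computes the factorials involved (the '100 or more features' figure is taken from the Caltech page cited above).

```python
import math

# 15 books on a shelf can be ordered in 15! ways:
print(f"15! = {math.factorial(15):,}")  # 1,307,674,368,000 (over a trillion)

# A crystal with ~100 distinguishable, rearrangeable features:
print(f"100! ~ 10^{math.log10(math.factorial(100)):.0f}")  # ~10^158
```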

The Snowflake Man from Vermont


Astronomer Johannes Kepler seems to have been the first scientist to examine snow crystals. He wrote a booklet on the subject in 1611.1 But the real 'Snowflake Man' was Wilson Alwyn Bentley, born 1865 in Vermont, USA. Bentley was the first to photograph snowflakes.2 He published more than 5,000 photographs, and wrote numerous articles on snow, rain, dew and other natural phenomena related to water and precipitation. He dedicated his life to studying snow, dew and rain, and although he was a farmer without formal scientific training, he was years ahead of his time with his meteorological hypotheses.

Bentley relates that it was his mother who instilled the love of scientific investigation into him: he was home schooled until he was 14 years old, and in his quest for learning he even read an encyclopedia! 'It was my mother that made it possible for me, at fifteen, to begin the work to which I have devoted my life. She had a small microscope, which she had used in her school teaching. When the other boys of my age were playing with popguns and sling-shots, I was absorbed in studying things under this microscope: drops of water, tiny fragments of stone, a feather dropped from a bird's wing, a delicately veined petal from some flower. But always, from the very beginning, it was snowflakes that fascinated me most.'

Bentley knew nothing about photography and for the longest time could not manage to take pictures of snowflakes. But through persistence and learning by trial and error, he learned how to work rapidly before the ice crystal changed shape, how to use transmitted light by pointing the camera to the sky, and how to get sharpness of detail on the crystal by using a large f-stop. Finally, during a January snowstorm in 1885, he obtained the first photomicrographs ever taken of an ice crystal.

He kept detailed meteorological records, and pondered over the meaning of the shapes and sizes of the crystals and why they often varied from one storm to the next. Starting in 1898, he published his findings in scientific journals. Bentley greatly contributed to what is today common knowledge, i.e. that temperature changes and movements in the storm clouds impact on the form and type of the crystals formed. With his research, he was years ahead of the meteorological thinking of his time. Bentley loved people, but was misunderstood by them, and the scientific world appreciated (or caught up with) the value of his work only much later. When he convened a meeting in his hometown to present his work, only six people attended.

One of his National Geographic (Jan. 1923) articles, 'The magic beauty of snow and dew', is accompanied by over 100 photomicrographs of ice crystals, frost patterns, and dew.3 Although his photos were sold for jewellery and other purposes, Bentley did not become rich through his work. But he said that he would not change places with Ford or Rockefeller: he felt he was serving the Great Designer, capturing the evanescent loveliness which, but for him, would be unappreciated, even unseen, by most of his fellow men. And with that role he was content. When he died of pneumonia in 1931, his obituary read, 'Truly, greatness blooms in quiet corners and flourishes under strange circumstances. For Wilson Bentley was a greater man than many a millionaire who lives in luxury of which the Snowflake Man never dreamed.'
Creation question: Snowflakes
Editor's note: As Creation magazine has been continuously published since 1978, we are publishing some of the articles from the archives for historical interest, such as this. For teaching and sharing purposes, readers are advised to supplement these historic articles with more up-to-date ones available by searching creation.com.

Chilling facts

Snow covers about 23 per cent of the Earth's surface, permanently or temporarily.

The lowest air temperature ever recorded was at Vostok II in Antarctica, 3,420 metres (11,218 feet) above sea level. The temperature dropped to −88.3 degrees Celsius (−127 degrees Fahrenheit).

The size and shape of snow crystals depend mainly on the temperature of their formation and the amount of water vapour available at deposition. At temperatures between 0 and −3 degrees Celsius, thin hexagonal plates form. Between −3 and −5 degrees, needles form. At −25 to −30 degrees, the crystal shape is the hollow prism.

Q: Snowflakes show beautiful design patterns, which appear highly ordered, and which arise by themselves under simple freezing conditions. Since this shows order arising from disorder, doesn't this mean that the ordered patterns of complex life could arise from simpler chemicals?

A: In fact, there is no parallel between the two issues at all. To put it simply, water forming snowflakes is 'doing what comes naturally', given the properties of the system. There is no need for any external information or programming to be added to the system; the existing properties of the water molecule and the atmospheric conditions are enough to give rise inevitably to snowflake-type patterns. However, there is no tendency for simple organic molecules to form themselves into the precise sequences needed to form the long-chain information-bearing molecules found in living systems. That is because the properties of the 'finished product' are not programmed in the components of the system. It takes the addition of some extra information, either by an intelligent mind at work or a programmed machine.

What would be analogous is if you saw a doily crocheted into the pattern of a snowflake. There is no natural, spontaneous tendency for the components of the system (for example, wool or cotton fibres) to assume that shape. The pattern has to be imposed by external information, either by the operation of a mind or a programmed machine. So whenever you see a snowflake doily, you instinctively recognize this fact and see it as the result of creation, as you should when you contemplate a section of a chromosome: the raw ingredients are not sufficient without a source of information. In living things, that information has come from the parent organism (a programmed mechanism) which arose from its parent which arose.... You might find that the doily has been crocheted by a programmed machine in a factory, which might itself have been built by another machine, but eventually that information had to arise in a mind. A snowflake pattern as water freezes may appear beautiful, but it is not the same thing at all, because no external programming or information has to be applied.

A similar issue (sometimes raised by evolutionists who should know better) is that of salt crystal formation as a warm, saturated salt solution cools. Consider these two types of sequence:
1. ABCABCABCABCABCABCABC
2. A CAT SAT ON THE MAT
Both are 'ordered', but only type 2 resembles the ordering in, say, a protein molecule. Chop the first sequence in half, and the two halves are essentially the same. Break a crystal of salt in two, and you see the same effect. Chop a protein (for example, haemoglobin) molecule in half and you no longer have haemoglobin; the two halves don't resemble one another. That is because the ordering is like that in the type 2 example above: chop that sentence in half and it loses all its meaning. To put it another way, as a salt crystal grows and grows, it is like continuing the type 1 sequence above. The sequence gets longer, the crystal gets bigger (simply more of the same), but not more complex. For simple organisms to become more complex (or simple chemicals to become a living thing) would be like the type 2 sentence becoming a whole story about cats, for example.
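One rough way to visualise the difference between these two types of 'order' is with a general-purpose compressor: a periodic sequence is almost entirely redundant and compresses to nearly nothing, while specified text resists compression far more. This is only an illustrative proxy for the distinction drawn above (compressibility measures repetitiveness, not meaning), not a formal measure of specified complexity.

```python
import zlib

periodic = b"ABC" * 100   # type 1: simple repetition, 300 bytes
sentence = (b"A CAT SAT ON THE MAT AND WATCHED THE BIRDS WHILE "
            b"THE DOG SLEPT BY THE FIRE AND THE RAIN FELL ON THE ROOF")

for label, data in [("periodic", periodic), ("sentence", sentence)]:
    compressed = len(zlib.compress(data, 9))
    print(f"{label:>8}: {len(data)} bytes -> {compressed} bytes compressed")
```

The repetitive 'crystal-like' sequence collapses to a handful of bytes (more of the same adds nothing), whereas the sentence, like a protein sequence, carries specified information that cannot be regenerated from a simple repeating rule.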
CONCLUSION
To compare snowflake or salt crystal formation to any assumed evolutionary growth in complexity is like comparing chalk with cheese. Examining the two simply highlights the need for external information before biological order will arise, which is a strong argument for creation.
HOW DOES INFORMATION THEORY SUPPORT CREATION?
Information, science and biology
by Werner Gitt
Summary
Energy and matter are considered to be basic universal quantities. However, the concept of information has become just as
fundamental and far-reaching, justifying its categorisation as the third fundamental quantity. One of the intrinsic
characteristics of life is information. A rigorous analysis of the characteristics of information demonstrates that living things
intrinsically reflect both the mind and will of their designer.
Information confronts us at every turn, both in technological and in natural systems: in data processing, in communications engineering, in control engineering, in the natural languages, in biological communications systems, and in information processes in living cells. Thus, information has rightly become known as the third fundamental, universal quantity. Hand in hand with the rapid developments in computer technology, a new field of study, that of information science, has attained a significance that could hardly have been foreseen only two or three decades ago. In addition, information has become an interdisciplinary concept of undisputed central importance to fields such as technology, biology and linguistics. The concept of information therefore requires a thorough discussion, particularly with regard to its definition, the understanding of its basic characteristic features and the establishment of empirical principles. This paper is intended to make a contribution to such a discussion.
Information: a statistical study
With his 1948 paper entitled 'A Mathematical Theory of Communication', Claude E. Shannon was the first to devise a mathematical definition of the concept of information. His measure of information, which is given in bits (binary digits), possessed the advantage of allowing quantitative statements to be made about relationships that had previously defied precise mathematical description. This method has an evident drawback, however: information according to Shannon does not relate to the qualitative nature of the data, but confines itself to one particular aspect that is of special significance for its technological transmission and storage. Shannon completely ignores whether a text is meaningful, comprehensible, correct, incorrect or meaningless. Equally excluded are the important questions as to where the information comes from (transmitter) and for whom it is intended (receiver). As far as Shannon's concept of information is concerned, it is entirely irrelevant whether a series of letters represents an exceptionally significant and meaningful text or whether it has come about by throwing dice. Yes, paradoxical though it may sound, considered from the point of view of information theory, a random sequence of letters possesses the maximum information content, whereas a text of equal length, although linguistically meaningful, is assigned a lower value.

The definition of information according to Shannon is limited to just one aspect of information, namely its property of expressing something new: information content is defined in terms of newness. This does not mean a new idea, a new thought or a new item of information (that would involve a semantic aspect) but relates merely to the greater surprise effect that is caused by a less common symbol. Information thus becomes a measure of the improbability of an event. A very improbable symbol is therefore assigned a correspondingly high information content.

Before a source of symbols (not a source of information!) generates a symbol, uncertainty exists as to which particular symbol will emerge from the available supply of symbols (for example, an alphabet). Only after the symbol has been generated is the uncertainty eliminated. According to Shannon, therefore, the following applies: information is the uncertainty that is eliminated by the appearance of the symbol in question. Since Shannon is interested only in the probability of occurrence of the symbols, he addresses himself merely to the statistical dimension of information. His concept of information is thus confined to a non-semantic aspect. According to Shannon, information content is defined such that three conditions must be fulfilled:
Summation condition: The information contents of mutually independent symbols (or chains of symbols) should be capable of addition. The summation condition views information as something quantitative.

Probability condition: The information content to be ascribed to a symbol (or to a chain of symbols) should rise as the level of surprise increases. The surprise effect of the less common 'z' (low probability) is greater than that of the more frequent 'e' (high probability). It follows from this that the information content of a symbol should increase as its probability decreases.

The bit as a unit of information: In the simplest case, when the supply of symbols consists of just two symbols which occur with equal frequency, the information content of one of these symbols should be assigned precisely 1 bit. The following empirical principle can be derived from this:
Theorem 1: The statistical information content of a chain of symbols is a quantitative concept. It is given in bits (binary digits).

According to Shannon's definition, the information content of a single item of information (an item of information in this context merely means a symbol, character, syllable, or word) is a measure of the uncertainty existing prior to its reception. Since the probability of its occurrence may only assume values between 0 and 1, the numerical value of the information content is always positive. The information content of a plurality of items of information (for example, characters) results (according to the summation condition) from the summation of the values of the individual items of information. This yields an important characteristic of information according to Shannon:
Theorem 2: According to Shannon's theory, a disturbed signal generally contains more information than an undisturbed signal, because, in comparison with the undisturbed transmission, it originates from a larger quantity of possible alternatives.

Shannon's theory also states that information content increases directly with the number of symbols. How inappropriately such a relationship describes actual information content becomes apparent from the following situation: if someone uses many words to say virtually nothing, then, according to Shannon, in accordance with the large number of letters, this utterance is assigned a very high information content, whereas the utterance of another person, who is skilled in expressing succinctly that which is essential, is ascribed only a very low information content.

Furthermore, in its equation of information content, Shannon's theory uses the factor of entropy to take account of the different frequency distributions of the letters. Entropy thus represents a generalised but specific feature of the language used. Given an equal number of symbols (for example, languages that use the Latin alphabet), one language will have a higher entropy value than another language if its frequency distribution is closer to a uniform distribution. Entropy assumes its maximum value in the extreme case of uniform distribution.
Symbols: a look at their average information content
If the individual symbols of a long sequence of symbols are not equally probable (for example, text), what is of interest is the
average information content for each symbol in this sequence as well as the average value over the entire language. When
this theory is applied to the various code systems, the average information content for one symbol results as follows:
In the German language: I = 4.113 bits/letter
In the English language: I = 4.046 bits/letter
In the dual system: I = 1 bit/digit
In the decimal system: I = 3.32 bits/digit
In the DNA molecule: I = 2 bits/nucleotide
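These averages are Shannon entropies: each symbol of probability p carries I = log₂(1/p) bits, and the average over a source is H = Σ p·log₂(1/p). A minimal sketch follows; for equiprobable symbols this reduces to log₂n, which reproduces the binary, decimal and DNA figures exactly, while the skewed 26-letter distribution is an invented illustration, not a measured frequency table for English or German.

```python
import math

def entropy(probabilities):
    """Average information content per symbol, H = sum p*log2(1/p), in bits."""
    return sum(p * math.log2(1 / p) for p in probabilities if p > 0)

def uniform(n):
    return [1 / n] * n

print(f"binary digit : {entropy(uniform(2)):.2f} bits")    # 1.00
print(f"decimal digit: {entropy(uniform(10)):.2f} bits")   # 3.32
print(f"nucleotide   : {entropy(uniform(4)):.2f} bits")    # 2.00

# Natural languages fall below log2(26) ~ 4.70 bits/letter because
# letters are not equally frequent (illustrative distribution only):
skewed = [0.12, 0.09, 0.08, 0.075, 0.07, 0.065, 0.06, 0.06, 0.055,
          0.05, 0.04, 0.035, 0.03, 0.03, 0.025, 0.02, 0.02, 0.015,
          0.015, 0.01, 0.01, 0.01, 0.005, 0.005, 0.003, 0.002]
print(f"skewed alphabet: {entropy(skewed):.2f} bits/letter")  # ~4.2
```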
The highest information density
The highest information density known to us is that of the DNA (deoxyribonucleic acid) molecules of living cells. This chemical storage medium is 2 nm in diameter and has a 3.4 nm helix pitch (see Figure 1). This results in a volume of 10.68 × 10⁻²¹ cm³ per spiral. Each spiral contains ten chemical letters (nucleotides), resulting in a volumetric information density of 0.94 × 10²¹ letters/cm³. In the genetic alphabet, the DNA molecules contain only the four nucleotide bases, that is, adenine, thymine, guanine and cytosine. The information content of such a letter is 2 bits/nucleotide. Thus, the statistical information density is 1.88 × 10²¹ bits/cm³.

Proteins are the basic substances that compose living organisms and include, inter alia, such important compounds as enzymes, antibodies, haemoglobins and hormones. These important substances are both organ- and species-specific. In the human body alone, there are at least 50,000 different proteins performing important functions. Their structures must be coded just as effectively as the chemical processes in the cells, in which synthesis must take place with the required dosage in accordance with an optimised technology. It is known that all the proteins occurring in living organisms are composed of a total of just 20 different chemical building blocks (amino acids). The precise sequence of these individual building blocks is of exceptional significance for life and must therefore be carefully defined. This is done with the aid of the genetic code. Shannon's information theory makes it possible to determine the smallest number of letters that must be combined to form a word in order to allow unambiguous identification of all amino acids. With 20 amino acids, the average information content is 4.32 bits/amino acid. If words are made up of two letters (doublets), with 4 bits/word, these contain too little information. Quartets would have 8 bits/word and would be too complex. According to information theory, words of three letters (triplets), having 6 bits/word, are sufficient and are therefore the most economical method of coding. Binary coding with two chemical letters is also, in principle, conceivable. This, however, would require a quintet to represent each amino acid and would be 67 per cent less economical than the use of triplets.
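These figures follow from simple geometry and logarithms. The sketch below re-derives them, treating one helical turn as a cylinder of 1 nm radius and 3.4 nm height, as the text does:

```python
import math

# Volume of one helical turn: a cylinder of radius 1 nm, height 3.4 nm
radius_cm = 1e-7                 # 1 nm expressed in cm
pitch_cm = 3.4e-7                # 3.4 nm helix pitch in cm
volume = math.pi * radius_cm**2 * pitch_cm
print(f"volume per turn: {volume:.3e} cm^3")              # ~1.068e-20

letters_per_cm3 = 10 / volume    # ten nucleotides per turn
print(f"letters/cm^3   : {letters_per_cm3:.2e}")          # ~0.94e21
print(f"bits/cm^3      : {2 * letters_per_cm3:.2e}")      # ~1.88e21

# Coding 20 amino acids with a 4-letter (2 bits/letter) alphabet:
print(f"bits per amino acid  : {math.log2(20):.2f}")      # 4.32
# Doublets carry 4 bits (too few), triplets 6 bits (sufficient);
# a binary alphabet (1 bit/letter) would need 5 letters per word:
print(f"binary letters needed: {math.ceil(math.log2(20))}")
print(f"quintets vs triplets : {5 / 3 - 1:.0%} more letters")  # ~67%
```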

Figure 1. The DNA moleculethe universal storage medium of natural systems. A short
section of a strand of the double helix with sugar-phosphate chain reveals its chemical
structure (left). The schematic representation of the double helix (right) shows the base pairs
coupled by hydrogen bridges (in a plane perpendicular to the helical axis).
Computer chips and natural storage media
Figures 1, 2 and 3 show three different storage technologies: the DNA molecule, the core memory, and the microchip. Let's
take a look at these.
Core memory: Earlier core memories were capable of storing 4,096 bits in an area of 6,400 mm² (see Figure 2). This
corresponds to an area storage density of 0.64 bits/mm². With a core diameter of 1.24 mm (storage volume 7,936 mm³), a
volumetric storage density of 0.52 bits/mm³ is obtained.

Figure 2. Detail of the TR440 computer's core-memory matrix (Manufacturer: Computer Gesellschaft Konstanz).
1-Mbit DRAM: The innovative leap from the core memory to the semiconductor memory is expressed in striking figures in
terms of storage density; present-day 1-Mbit DRAMs (see Figure 3) permit the storage of 1,048,576 bits in an area of
approximately 50 mm², corresponding to an area storage density of 21,000 bits/mm². With a thickness of approximately 0.5
mm, we thus obtain a volumetric storage density of 42,000 bits/mm³. The megachip surpasses the core memory in terms of
area storage density by a factor of 32,800 and in terms of volumetric storage density by a factor of 81,000.

Figure 3. The 1-Mbit DRAM: a dynamic random-access memory for 1,048,576 bits.
DNA molecule: The carriers of genetic information, which perform their biological functions throughout an entire life, are
nucleic acids. All cellular organisms and many viruses employ DNAs that are twisted in an identical manner to form double
helices; the remaining viruses employ single-stranded ribonucleic acids (RNA). The figures obtained from a comparison with
man-made storage devices are nothing short of astronomical if one includes the DNA molecule (see Figure 1). In this super
storage device, the storage density is exploited to the physico-chemical limit: its value for the DNA molecule is 45 × 10¹²
times that of the megachip. What is the explanation for this immense difference of 45 trillion between VLSI technology
and natural systems? There are three decisive reasons:
1. The DNA molecule uses genuine volumetric storage technology, whereas storage in computer devices is area-oriented.
Even though the structures of the chips comprise several layers, their storage elements only have a two-dimensional
orientation.
2. Theoretically, one single molecule is sufficient to represent an information unit. This most economical of technologies has
been implemented in the design of the DNA molecule. In spite of all research efforts on miniaturisation, industrial technology
is still within the macroscopic range.
3. Only two circuit states are possible in chips; this leads to exclusively binary codes. In the DNA molecule, there are four
chemical symbols (see Figure 1); this permits a quaternary code in which one state already represents 2 bits.
The knowledge currently stored in the libraries of the world is estimated at 10¹⁸ bits. If it were possible for this information to
be stored in DNA molecules, 1 per cent of the volume of a pinhead would be sufficient for this purpose. If, on the other hand,
this information were to be stored with the aid of megachips, we would need a pile higher than the distance between the
earth and the moon.
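The factors quoted in this comparison can be re-derived from the raw figures already given; the following sketch merely repeats the arithmetic and unit conversions in Python, with no new data:

```python
core_area   = 4096 / 6400           # 0.64 bits/mm^2 (core memory)
core_volume = 4096 / 7936           # ~0.52 bits/mm^3
dram_area   = 1_048_576 / 50        # ~21,000 bits/mm^2 (1-Mbit DRAM)
dram_volume = dram_area / 0.5       # ~42,000 bits/mm^3 (0.5 mm thickness)

print(dram_area / core_area)        # ~32,800: area storage density factor
print(dram_volume / core_volume)    # ~81,000: volumetric density factor

# DNA: 1.88e21 bits/cm^3 = 1.88e18 bits/mm^3 (1 cm^3 = 1,000 mm^3)
dna_volume = 1.88e21 / 1000
print(dna_volume / dram_volume)     # ~4.5e13, i.e. the 45-trillion factor
```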
The five levels of information
Shannon's concept of information is adequate to deal with the storage and transmission of data, but it fails when trying to
understand the qualitative nature of information.
Theorem 3: Since Shannon's definition of information relates exclusively to the statistical relationship of chains of symbols
and completely ignores their semantic aspect, this concept of information is wholly unsuitable for the evaluation of chains of
symbols conveying a meaning.
In order to be able adequately to evaluate information and its processing in different systems, both animate and inanimate,
we need to widen the concept of information considerably beyond the bounds of Shannon's theory. Figure 4 illustrates how
information can be represented as well as the five levels that are necessary for understanding its qualitative nature.
Level 1: statistics
Shannon's information theory is well suited to an understanding of the statistical aspect of information. This theory makes it
possible to give a quantitative description of those characteristics of languages that are based intrinsically on frequencies.
However, whether a chain of symbols has a meaning is not taken into consideration. Also, the question of grammatical
correctness is completely excluded at this level.
Level 2: syntax
In chains of symbols conveying information, the stringing-together of symbols to form words as well as the joining of words
to form sentences are subject to specific rules, which, for each language, are based on consciously established
conventions. At the syntactical level, we require a supply of symbols (code system) in order to represent the information.
Most written languages employ letters; however, an extremely wide range of conventions is in use for various purposes:
Morse code, hieroglyphics, semaphore, musical notes, computer codes, genetic codes, figures in the dance of foraging
bees, odour symbols in the pheromone languages of insects, and hand movements in sign language.
The field of syntax involves the following questions:
Which symbol combinations are defined characters of the language (code)?
Which symbol combinations are defined words of the particular language (lexicon, spelling)?
How should the words be positioned with respect to one another (sentence formation, word order, style)? How should they
be joined together? And how can they be altered within the structure of a sentence (grammar)?

Figure 4. The five mandatory levels of information (middle) begin with statistics (at the lowest
level). At the highest level is apobetics (purpose).
The syntax of a language, therefore, comprises all the rules by which individual elements of language can or must be
combined. The syntax of natural languages is of a much more complex structure than that of formalised or artificial
languages. Syntactical rules in formalised languages must be complete and unambiguous, since, for example, a compiler
has no way of referring back to the programmer's semantic considerations. At the syntactical level of information, we can
formulate several theorems to express empirical principles:
Theorem 4: A code is an absolutely necessary condition for the representation of information.
Theorem 5: The assignment of the symbol set is based on convention and constitutes a mental process.
Theorem 6: Once the code has been freely defined by convention, this definition must be strictly observed thereafter.
Theorem 7: The code used must be known both to the transmitter and receiver if the information is to be understood.
Theorem 8: Only those structures that are based on a code can represent information (because of Theorem 4). This is a
necessary, but still inadequate, condition for the existence of information.
These theorems already allow fundamental statements to be made at the level of the code. If, for example, a basic code is
found in any system, it can be concluded that the system originates from a mental concept.
Level 3: semantics
Chains of symbols and syntactical rules form the necessary precondition for the representation of information. The decisive
aspect of a transmitted item of information, however, is not the selected code, the size, number or form of the letters, or the
method of transmission (script, optical, acoustic, electrical, tactile or olfactory signals), but the message it contains, what it
says and what it means (semantics). This central aspect of information plays no part in its storage and transmission. The
price of a telegram depends not on the importance of its contents but merely on the number of words. What is of prime
interest to both sender and recipient, however, is the meaning; indeed, it is the meaning that turns a chain of symbols into
an item of information. It is in the nature of every item of information that it is emitted by someone and directed at someone.
Wherever information occurs, there is always a transmitter and a receiver. Since no information can exist without semantics,
we can state:
Theorem 9: Only that which contains semantics is information.
According to a much-quoted statement by Norbert Wiener, the founder of cybernetics and information theory, information
cannot be of a physical nature: 'Information is information, neither matter nor energy. No materialism that fails to take
account of this can survive the present day.' The Dortmund information scientist Werner Strombach emphasises the
non-material nature of information when he defines it as 'an appearance of order at the level of reflective consciousness'.
Semantic information, therefore, defies a mechanistic approach. Accordingly, a computer is only a syntactical device
(Zemanek) which knows no semantic categories. Consequently, we must distinguish between data and knowledge, between
algorithmically conditioned branches in a programme and deliberate decisions, between comparative extraction and

association, between determination of values and understanding of meanings, between formal processes in a decision tree
and individual selection, between consequences of operations in a computer and creative thought processes, between
accumulation of data and learning processes. A computer can do the former; this is where its strengths, its application areas,
but also its limits lie. Meanings always represent mental concepts; we can therefore further state:
Theorem 10: Each item of information needs, if it is traced back to the beginning of the transmission chain, a mental source
(transmitter).
Theorems 9 and 10 basically link information to a transmitter (intelligent information source). Whether the information is
understood by a receiver or not does nothing to change its existence. Even before they were deciphered, the inscriptions on
Egyptian obelisks were clearly regarded as information, since they obviously did not originate from a random process.
Before the discovery of the Rosetta Stone (1799), the semantics of these hieroglyphics was beyond the comprehension of
any contemporary person (receiver); nevertheless, these symbols still represented information.
All suitable
formant devices (linguistic configurations) that are capable of expressing meanings (mental substrates, thoughts, contents of
consciousness) are termed languages. It is only by means of language that information may be transmitted and stored on
physical carriers. The information itself is entirely invariant, both with regard to change of transmission system (acoustic,
optical, electrical) and also of storage system (brain, book, computer system, magnetic tape). The reason for this invariance
lies in its non-material nature. We distinguish between different kinds of languages:
Natural languages: at present, there are approximately 5,100 living languages on earth.
Artificial or sign languages: Esperanto, sign language, semaphore, traffic signs.
Artificial (formal) languages: logical and mathematical calculations, chemical symbols, shorthand, algorithmic languages,
programming languages.
Specialist languages in engineering: building plans, design plans, block diagrams, bonding diagrams, circuit diagrams in
electrical engineering, hydraulics, pneumatics.
Special languages in the living world: genetic language, the foraging-bee dance, pheromone languages, hormone language,
the signal system in a spider's web, dolphin language, instincts (for example, flight of birds, migration of salmon).
Common to all languages is that these formant devices use defined systems of symbols whose individual elements operate
with fixed, uniquely agreed rules and semantic correspondences. Every language has units (for example, morphemes,
lexemes, phrases and whole sentences in natural languages) that act as semantic elements (formatives). Meanings are
correspondences between the formatives, within a language, and imply a unique semantic assignment between transmitter
and receiver.
Any communication process between transmitter and receiver consists of the formulation and comprehension of the
sememes (sema = sign) in one and the same language. In the formulation process, the thoughts of the transmitter generate
the transmissible information by means of a formant device (language). In the comprehension process, the combination of
symbols is analysed and imaged as corresponding thoughts in the receiver.
Level 4: pragmatics
Up to the level of semantics, the question of the objective pursued by the transmitter in sending information is not relevant.
Every transfer of information is, however, performed with the intention of producing a particular result in the receiver. To
achieve the intended result, the transmitter considers how the receiver can be made to satisfy his planned objective. This
intentional aspect is expressed by the term pragmatics. In language, sentences are not simply strung together; rather, they
represent a formulation of requests, complaints, questions, inquiries, instructions, exhortations, threats and commands,
which are intended to trigger a specific action in the receiver. Strombach defines information as a structure that produces a
change in a receiving system. By this, he stresses the important aspect of action. In order to cover the wide variety of types
of action, we may differentiate between:
Modes of action without any degree of freedom (rigid, indispensable, unambiguous, program-controlled), such as program
runs in computers, machine translation of natural languages, mechanised manufacturing operations, the development of
biological cells, the functions of organs;
Modes of action with a limited degree of freedom, such as the translation of natural languages by humans and instinctive
actions (patterns of behaviour in the animal kingdom);
Modes of action with the maximum degree of freedom (flexible, creative, original; only in humans), for example, acquired
behaviour (social deportment, activities involving manual skills), reasoned actions, intuitive actions and intelligent actions
based on free will.
All these modes of action on the part of the receiver are invariably based on information that has been previously designed
by the transmitter for the intended purpose.
Level 5: apobetics
The final and highest level of information is purpose. The concept of apobetics has been introduced for this reason by
linguistic analogy with the previous definitions. The result at the receiving end is based at the transmitting end on the
purpose, the objective, the plan, or the design. The apobetic aspect of information is the most important one, because it
inquires into the objective pursued by the transmitter. The following question can be asked with regard to all items of
information: Why is the transmitter transmitting this information at all? What result does he/she/it wish to achieve in the
receiver? The following examples are intended to deal somewhat more fully with this aspect:
Computer programmes are target-oriented in their design (for example, the solving of a system of equations, the inversion
of matrices, system tools).
With its song, the male bird would like to gain the attention of the female or to lay claim to a particular territory.
With the advertising slogan for a detergent, the manufacturer would like to persuade the receiver to decide in favour of its
product.
Humans are endowed with the gift of natural language; they can thus enter into communication and can formulate
objectives.
We can now formulate some further theorems:
Theorem 11: The apobetic aspect of information is the most important, because it embraces the objective of the transmitter.
The entire effort involved in the four lower levels is necessary only as a means to an end in order to achieve this objective.
Theorem 12: The five aspects of information apply both at the transmitter and receiver ends. They always involve an
interaction between transmitter and receiver (see Figure 4).
Theorem 13: The individual aspects of information are linked to one another in such a manner that the lower levels are
always a prerequisite for the realisation of higher levels.
Theorem 14: The apobetic aspect may sometimes largely coincide with the pragmatic aspect. It is, however, possible in
principle to separate the two.
Having completed these considerations, we are in a position to formulate conditions that allow us to distinguish between
information and non-information. Two necessary conditions (NCs; to be satisfied simultaneously) must be met if information
is to exist:
NC1: A code system must exist.
NC2: The chain of symbols must contain semantics.
Sufficient conditions (SCs) for the existence of information are:
SC1: It must be possible to discern the ulterior intention at the semantic, pragmatic and apobetic levels (example: Karl v.
Frisch analysed the dance of foraging bees and, in conformance with our model, ascertained the levels of semantics,
pragmatics and apobetics. In this case, information is unambiguously present).

SC2: A sequence of symbols does not represent information if it is based on randomness. According to G.J. Chaitin, an
American informatics expert, randomness cannot, in principle, be proven; in this case, therefore, communication about the
originating cause is necessary.
The above information theorems not only play a role in technological applications; they also embrace all other areas in
which information occurs (for example, computer technology, linguistics, living organisms).
Information in living organisms
Life confronts us in an exceptional variety of forms; for all its simplicity, even a monocellular organism is more complex and
purposeful in its design than any product of human invention. Although matter and energy are necessary fundamental
properties of life, they do not in themselves imply any basic differentiation between animate and inanimate systems. One of
the prime characteristics of all living organisms, however, is the information they contain for all operational processes
(performance of all life functions, genetic information for reproduction). Braitenberg, a German cyberneticist, has submitted
evidence that information is an intrinsic part of the essential nature of life. The transmission of information plays a
fundamental role in everything that lives. When insects transmit pollen from flower blossoms, (genetic) information is
essentially transmitted; the matter involved in this process is insignificant. Although this in no way provides a complete
description of life as yet, it touches upon an extremely crucial factor.
Without a doubt, the most complex information-processing system in existence is the human body. If we take all human
information processes together, that is, conscious ones (language) and unconscious ones (information-controlled functions
of the organs, the hormone system), this involves the processing of 10²⁴ bits daily. This astronomically high figure is higher
by a factor of 1,000,000 than the total human knowledge of 10¹⁸ bits stored in all the world's libraries.
The concept of information
On the basis of Shannon's information theory, which can now be regarded as being mathematically complete, we have
extended the concept of information as far as the fifth level. The most important empirical principles relating to the concept
of information have been defined in the form of theorems. Here is a brief summary of them:¹
No information can exist without a code.
No code can exist without a free and deliberate convention.
No information can exist without the five hierarchical levels: statistics, syntax, semantics, pragmatics and apobetics.
No information can exist in purely statistical processes.
No information can exist without a transmitter.
No information chain can exist without a mental origin.
No information can exist without an initial mental source; that is, information is, by its nature, a mental and not a material
quantity.
No information can exist without a will.
The marvellous message molecule
by Carl Wieland
When someone sends a message, something rather fascinating and mysterious gets passed along. Let's say Alphonse in
Alsace wants to send the message, 'Ned, the war is over. Al'. He dictates it to a friend; the message has begun as patterns
of air compression (spoken words). His friend puts it down as ink on paper and mails it to another, who puts it in a fax
machine. The machine transfers the message into a coded pattern of electrical impulses, which are sent down a phone line
and received at a remote Indian outpost where it is printed out in letters once again. Here the person who reads the fax
lights a campfire and sends the same message as a pattern of smoke signals. Old Ned in Nevada, miles away, looks up and
gets the exact message that was meant for him. Nothing physical has been transmitted; not a single atom or molecule of
any substance travelled from Alsace to Nevada, yet it is obvious that something travelled all the way.
This elusive something is called information. It is obviously not a material thing, since no matter has been transmitted. Yet it
seems to need matter
on which to 'ride' during its journey. This is true whether the message is in Turkish, Tamil or Tagalog. The matter on which
information travels can change, without the information having to change. Air molecules being compressed in sound waves;
ink and paper; electrons travelling down phone wires; semaphore signals: whatever the medium, all involve material carriers
used to transmit information, but the medium is not the information.
This fascinating thing called information is the key to understanding what makes life different from dead matter. It is the
Achilles' heel of all materialist explanations of life, which say that life is nothing more than matter obeying the laws of
physics and chemistry. Life is more than just physics and chemistry; living things carry vast amounts of information.
Some might argue that a sheet of paper carrying a written message is nothing more than ink and paper obeying the laws of
physics and chemistry. But ink and paper unaided do not write messages; minds do. The alphabetical letters in a Scrabble
kit do not constitute information until someone puts them into a special sequence; a mind is needed to get information. You
can program a machine to arrange Scrabble letters into a message, but a mind had to write the program for the machine.
How is the information for life carried? How is the message which spells out the recipe that makes a frog, rather than a
frangipani tree, sent from one generation to the next? How is it stored? What matter does it 'ride' on? The answer is the
marvellous 'message molecule' called DNA. This molecule
is like a long rope or string of beads, which is tightly coiled up inside the centre of every cell of your body. This is the
molecule that carries the programs of life, the information which is transmitted from each generation to the next.
Some people think that DNA is alive; this is wrong. DNA is a dead molecule. It can't copy itself; you need the machinery of a
living cell to make copies of a DNA molecule. It may seem as if DNA is the information in your body. Not so; the DNA is
simply the carrier of the message, the 'medium' on which the message is written. In the same way, Scrabble letters are not
information until the message is 'imposed' on them from the 'outside'. Think of DNA as a chain of such alphabet letters
linked together, with a large variety of different ways in which this can happen. Unless they are joined in the right sequence,
no usable message will result, even though it is still DNA.
Now to read the message, you need a pre-existing language code or convention, as well as machinery to translate it. All of
that machinery exists in the cell. Like man-made machinery, it does not arise by itself from the properties of the raw
materials. If you just throw the basic raw ingredients for a living cell together without information, nothing will happen.
Machines and programs do not come from the laws of physics and chemistry by themselves. Why? Because they reflect
information, and information has never been observed to come about by unaided, raw matter plus time and chance.
Information is the very opposite of chance: if you want to arrange letters into a sequence to spell a message, a particular
order has to be imposed on the matter.
When living things reproduce, they transmit information from one generation to the next. This information, travelling on the
DNA from mother and from father, is the 'instruction manual' which enables the machinery in a fertilized egg cell to
construct, from raw materials, the new living organism: a fantastic feat. The information is passed on in a new combination,
so that children are not exactly like their parents, although the information itself, which is expressed in the make-up of those
children, was there all along in both parents. That is, the deck was reshuffled, but no new cards were added.
Just how much space does DNA need to store its information? The
technological achievements of humankind in storing information seem sensational. Imagine how much information is stored
on a videotape of a movie, for example; you can hold it all in one hand. Yet compared to this, the feat of information
miniaturization performed by DNA is nothing short of mind-blowing. For a given amount of information, the room needed to
store it on DNA is about a trillionth of that for information on videotape, i.e. it is a million million times more efficient at
storing information.¹
How much information is contained in the DNA code which describes you? Estimates vary widely. Using simple analogies,
based upon the storage space in DNA, they range from 500 large library books of small-type information to more than 100
complete 30-volume encyclopaedia sets. When you think about it, even that is probably not enough to specify the intricate
construction of even the human brain, with its trillions of precise connections. There are probably higher-level information
storage and multiplication systems in the body that we have not even dreamed of yet; there are many more marvellous
mysteries waiting to be discovered about the designer's handiwork.
Not only is the way in which DNA is encoded highly efficient; even more space is saved by the way in which it is tightly
coiled up. According to genetics expert Professor Jérôme Lejeune, all the information required to specify the exact make-up
of every unique human being on Earth
could be stored in a volume of DNA no bigger than a couple of aspirin tablets!² If you took the DNA from one single cell in
your body (an amount of matter so small you would need a microscope to see it) and unravelled it, it would stretch to two
metres!
This becomes truly sensational when you consider that there are 75 to 100 trillion cells in the body. Taking the lower figure,
it means that if we stretched out all of the DNA in one human body³ and joined it end to end, it would stretch to a distance of
150 billion kilometres (around 94 billion miles). How long is this? It would stretch right around the Earth's equator
three-and-a-half million times! It is a thousand times as far as from the Earth to the sun. If you were to shine a torch along
that distance, it would take the light, travelling at 300,000 kilometres (186,000 miles) every second, five-and-a-half days to
get there.
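These figures are straightforward to verify; the sketch below simply reruns the article's own numbers (75 trillion cells, two metres of DNA per cell) in Python:

```python
cells = 75e12                       # lower estimate of cells in a human body
total_km = cells * 2.0 / 1000       # 2 m of DNA per cell, converted to km
print(total_km)                     # 1.5e11 km, i.e. 150 billion km

print(total_km / 40_075)            # ~3.7 million circuits of the equator
print(total_km / 149.6e6)           # ~1,000 times the Earth-sun distance
days = total_km / 300_000 / 86_400  # light travel time at 300,000 km/s
print(days)                         # ~5.8 days
```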
But the really sensational thing is the way in which the information carried on DNA in all living things points directly to
intelligent, supernatural creation, by straightforward, scientific logic, as follows:
Observations
1. The coded information used in the construction of living things is transferred from pre-existing messages (programs),
which are themselves transmitted from pre-existing messages.
2. During this transfer, the fate of the information follows the dictates of message/information theory and common sense.
That is, it either stays the same, or decreases (mutational loss, genetic drift, species extinction) but seldom, probably never,
is it seen to increase in any informationally meaningful sense.
Deduction from observation No. 2
3. Were we to look back in time along the line of any living population, e.g. humans (the information in their genetic
programs), we would see an overall pattern of gradual increase in that information the further back we go.
Axiom
4. No population can be infinitely old, nor contain infinite information. Therefore:
Deduction from points 3 and 4
5. There had to be a point in time in which the first program arose without a pre-existing programi.e. the first of that type
had no parents.
Further observation
6. Information and messages only ever originate in mind or in pre-existing messages. Never, ever are they seen to arise
from spontaneous, unguided natural law and natural processes.
Conclusion
The programs in those first representatives of each type of organism must have originated not in natural law, but in mind.
This is totally consistent with the model, which teaches us that the programs for each of the original 'kind' populations, with
all of their vast variation potential, arose from the mind of a designer at a point in time, by creation. These messages, written
in intricate coded language, could not have written themselves, as far as real, observational science can tell us.
Once the first messages were written, they also contained instructions to make machinery with which to transmit those
messages 'on down the line'. DNA, this marvellous 'message molecule', carries that special, non-material something called
information, down through many generations, from its origin in the mind of a designer.
More or less information? / Has a recent experiment proved creation?
Published: 17 February 2007 (GMT+10)
Photo by José A. Warletta, from www.sxc.hu
One of the most important creationist arguments concerns information. Understanding this issue deflects many
anti-creationist equivocations that call any change 'evolution'. That is, no creationist denies that things change, and even
speciate, but nearly all the cited changes do not involve the increase in information content required for microbes-to-man
evolution; they go in the wrong direction. See one illustration: How information is lost when creatures adapt to their
environment.
This week's feedback comes from Casey P, who picked up from a website a vexatious question about how to define
information. The evolutionist who first posed that question erred by presupposing a simplistic definition, while Andrew
Lamb's reply shows that there are more levels of information needed to understand its role in biology.
The second feedback addresses questions on a recent article about a research scientist whose work supposedly proves
creation. However, Jonathan Sarfati had replied to a similar query in 1999 about the same phenomenon, and it is updated
below.
How do we define information in biology?
Photo by Dacho Dachovic, from www.sxc.hu

I'm curious to know; perhaps you could fill me in on this. Which one has the most information, and what exactly are these
two sequences?

Sequence 1: cag tgt ctt ggg ttc tcg cct gac tac gag acg cgt ttg tct tta cag gtc ctc ggc cag cac ctt aga caa gca ccc ggg acg cac
ctt tca gtg ggc act cat aat ggc gga gta cca agg agg cac ggt cca ttg ttt tcg ggc cgg cat tgc tca tct ctt gag att tcc ata ctt

Sequence 2: tgg agt tct aag aca gta caa ctc tgc gac cgt gct ggg gta gcc act tct ggc cta atc tac gtt aca gaa aat ttg agg ttg cgc
ggt gtc ctc gtt agg cac aca cgg gtg gaa tgg ggg tct ctt acc aaa ggg ctg ccg tat cag gta cga cgt agg tat tgc cgt gat aga ctg

Thanks for your help here. God bless.
Casey P

Dear Mr P

Thank you for your email of 17 January, submitted via our website.

In response to creationist arguments about genetic information, some evolutionists disingenuously object that since there is
no one measure of information content applicable to all situations, therefore genetic information doesn't exist! But even
hardened atheists like the eugenicist Richard Dawkins recognize that DNA contains information. In fact there is a
burgeoning new field of science called bio-informatics, which is all about genetic information.

With respect to the two sequences you presented, one would need to know their functions before it would be possible to
consider making a comparison about which sequence carried more information. If their functions (assuming they were not
just gobbledygook) were dissimilar, then it would be fairly meaningless to attempt a comparison of information content. For
example, if one was a genetic sequence coding for an enzyme, and the other a genetic sequence coding for a structural
protein, then to ask which has the most information would be as meaningless as asking, say, which has more information:
60 grams worth of apple or 60 grams worth of orange.
If the meaning/function is similar, then an information-content comparison may be possible. Consider the following two
sequences:
She has a yellow vehicle.
She has a yellow car.
Both are English sentences. The first is 25 characters long, and the second is 21 characters long. The first sentence has
more characters, but the second sentence has more information, because it is more specific (cars being just one of scores
of different types of vehicle), and specificity is one measure of information content. Specificity only relates to the purpose of
the information, not to the way it is expressed or the size of the message when it is expressed in some particular
way/language.
There are five levels of information content (after Information, Science and Biology by Dr Werner Gitt, information scientist):
statistics (symbols and their frequencies)
syntax (patterns of arrangement of symbols)
semantics (meaning)
pragmatics (function/result/outcome)
apobetics (purpose/plan/design)
Specificity relates to the pragmatics or apobetics level.
Gitt's Theorem 9 states that 'Only that which contains semantics is information.' This is a crucial point. Many evolutionists
err by restricting information measurement to the statistical level, or to Shannon information. So-called Shannon information
is not a measure of information per se, but merely a measure of the minimum number of characters/units needed to
represent a sequence, regardless of whether the sequence is meaningful or not. Gobbledygook can have more Shannon
information than a sentence in English.
So, if the two sequences you presented were composed randomly, then it is highly unlikely that either contains any
information. However, for argument's sake, I will assume that they may be meaningful, and compare them.
The two sequences both contain the same amount of statistical information: 240 characters' worth, when represented in
text.
Both sequences appear the same at the syntactical level, i.e. both consist of 60 spaced triplets composed of the symbols c,
a, t, and g.
At the semantic level, I recognize that these letter triplets are the same as ones used to represent triplets of DNA bases that
code for particular amino acids. Since all 64 possible triplets have a meaning in the DNA code, and since neither sequence
contains any of the three stop codons (taa, tga, tag), it follows that both sequences could be regarded as having the same
amount of information at the semantic level, since, if processed by the appropriate genetic machinery, both sequences could
probably produce a segment of protein 60 amino acids in length.
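The syntactic and semantic observations above are easy to check mechanically. This minimal Python sketch counts the triplets in the first sequence (copied from the letter above) and scans for the three stop codons; the same check applies to the second sequence:

```python
STOP_CODONS = {"taa", "tga", "tag"}

def analyse(sequence):
    """Return the codon count and any stop codons in a spaced-triplet string."""
    codons = sequence.split()
    return len(codons), [c for c in codons if c in STOP_CODONS]

seq1 = ("cag tgt ctt ggg ttc tcg cct gac tac gag acg cgt ttg tct tta "
        "cag gtc ctc ggc cag cac ctt aga caa gca ccc ggg acg cac ctt "
        "tca gtg ggc act cat aat ggc gga gta cca agg agg cac ggt cca "
        "ttg ttt tcg ggc cgg cat tgc tca tct ctt gag att tcc ata ctt")

print(analyse(seq1))  # (60, []): 60 codons, none of them stop codons
```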
However, when it comes to the pragmatics level, as far as I can determine (being unable to locate these sequences in a
gene library such as NCBI's Entrez Nucleotides), both sequences apparently carry the same amount of meaningful
information: zilch.
At the apobetics level, I have no idea what outcomes would result from processing of the two sequences. Conceivably, at
one extreme, they could result in production of an enzyme that kills the cell, or even a toxin that kills the organism to which
the cell belongs. At the other extreme, they could (for all I know) prevent aging, thus extending the lifespan; I have no idea.
Indeed, one of the most intractable problems in molecular biology is computing the final protein configuration from an amino
acid sequence (see a current project).
Note also that each creature has its own unique set of cellular machinery, so the outcomes that result from the reading of
these genetic sequences could be very different depending on which organism's genetic machinery reads them. For
example, the genetic sequence found in the HIV virus is harmless when read by the cellular machinery in apes' cells, but
ultimately lethal when read by human cellular machinery: very different outcomes at the apobetics level from the same
genetic sequence. Also, there are some organisms with slightly different genetic codes, so the same semantic information
would be read differently, resulting in different pragmatic and apobetic information.
The final protein configuration that results from a particular DNA sequence is mainly determined by cellular machines of a
type called chaperonins, which influence protein folding. Without chaperonins, an important protein might mis-fold into a
deadly prion. This is the likely cause of the fatal brain conditions Creutzfeldt-Jakob disease and bovine spongiform
encephalopathy (BSE), aka mad cow disease (Discoveries that undermine the 'one gene one protein' idea).
I hope this helps. We have many articles on our website on the issue of information in living organisms. They can be found
listed under the topic Information Theory in the Frequently Asked Questions index on this website.
Yours sincerely
Andrew Lamb
Information Officer
How is information content measured?
New plant coloursis this new information?
Is antibiotic resistance really due to increase in information?
The Problem of Information for the Theory of Evolution: Has Dawkins really solved it?
Lifes irreducible structurePart 1: autopoiesis
by Alex Williams
The commonly cited case for intelligent design appeals to: (a) the irreducible complexity of (b) some aspects of life. But
complex arguments invite complex refutations (valid or otherwise), and the claim that only some aspects of life are
irreducibly complex implies that others are not, and so the average person remains unconvinced. Here I use another
principle, autopoiesis (self-making), to show that all aspects of life lie beyond the reach of naturalistic explanations.
Autopoiesis provides a compelling case for intelligent design in three stages: (i) autopoiesis is universal in all living things,
which makes it a pre-requisite for life, not an end product of natural selection; (ii) the inversely-causal, information-driven,
structured hierarchy of autopoiesis is not reducible to the laws of physics and chemistry; and (iii) there is an unbridgeable
abyss between the dirty, mass-action chemistry of the natural environment and the perfectly-pure, single-molecule precision
of biochemistry. Naturalistic objections to these propositions are considered in Part II of this article.
Snowflake photos by Kenneth G. Libbrecht.
Figure 1. Reducible structure. Snowflakes (left) occur in hexagonal shapes because water crystallizes into ice in a
hexagonal pattern (right). Snowflake structure can therefore be reduced to (explained in terms of) ice crystal structure.
Crystal formation is spontaneous in a cooling environment. The energetic vapour molecules are locked into solid bonds with
the release of heat to the environment, thus increasing overall entropy in accord with the second law of thermodynamics.
The commonly cited case for intelligent design (ID) goes as follows: some biological systems are so complex that they can
only function when all of their components are present, so that the system could not have evolved from a
simpler assemblage that did not contain the full machinery.¹ This definition is what biochemist Michael Behe called
irreducible complexity in his popular book Darwin's Black Box,² where he pointed to examples such as the blood-clotting
cascade and the proton-driven molecular motor in the bacterial flagellum. However, because Behe appealed to complexity,
many equally complex rebuttals have been put forward,³ and because he claimed that only some of the aspects of life were
irreducibly complex, he thereby implied that the majority of living structure was open to naturalistic explanation. As a result
of these two factors, the concept of intelligent design remains controversial and unproven in popular understanding.
In this article, I shall argue that all aspects of life point to intelligent design, based on what European polymath Professor
Michael Polanyi FRS, in his 1968 article in Science, called 'Life's Irreducible Structure'.⁴ Polanyi argued that living organisms
have a machine-like structure that cannot be explained by (or reduced to) the physics and chemistry of the molecules of
which they consist. This concept is simpler, and broader in its application, than Behe's concept of irreducible complexity,
and it applies to all of life, not just to some of it.
The nature and origin of biological design
Biologists universally admire the wonder of the beautiful designs evident in living organisms, and they often recoil in
revulsion at the horrible designs exhibited by parasites and predators in ensuring the survival of themselves and their
species. But to a Darwinist, these are only apparent designs: the end result of millions of years of tinkering by mutation
and fine tuning by natural selection. They do not point to a cosmic Designer, only to a long and blind process of survival of
the fittest.⁵ For a Darwinist, the same must also apply to the origin of life: it must be an emergent property of matter. An
emergent property of a system is some special arrangement that is not usually observed, but may arise through natural
causes under the right environmental conditions. For example, the vortex of a tornado is an emergent property of
atmospheric movements and temperature gradients. Accordingly, evolutionists seek endlessly for those special
environmental conditions that may have launched the first round of carbon-based macromolecules⁶ on their long journey
towards life. Should they ever find those unique environmental conditions, they would then be able to explain life in terms of
physics and chemistry. That is, life could then be reduced to the known laws of physics, chemistry and environmental
conditions.
However, Polanyi argued that the form and function of the various parts of living organisms cannot be reduced to
(or explained in terms of) the laws of physics and chemistry, and so life exhibits irreducible structure. He did not speculate
on the origin of life, arguing only that scientists should be willing to recognize the impossible when they see it: 'The
recognition of certain basic impossibilities has laid the foundations of some major principles of physics and chemistry;
similarly, recognition of the impossibility of understanding living things in terms of physics and chemistry, far from setting
limits to our understanding of life, will guide it in the right direction.'⁷
Reducible and irreducible structures
To understand Polanyi's concept of irreducible structure, we must first look at reducible structure. The snowflakes in figure 1
illustrate reducible structure.
Meteorologists have recognized about eighty different basic snowflake shapes, and subtle variations on these themes add
to the mix to produce a virtually infinite variety of actual shapes. Yet they all arise from just one kind of molecule: water. How
is this possible?
Figure 2. Irreducible structure. The silver coins (left) have properties of flatness, roundness and impressions on faces and
rims that cannot be explained in terms of the crystalline state of silver (close-packed cubes) or its natural occurrence as
native silver (right).
When water freezes, its crystals take the form of a hexagonal prism. Crystals then grow by joining prism to prism. The
elaborate branching patterns of snowflakes arise from the statistical fact that a molecule of water vapour in the air is most
likely to join up to its nearest surface. Any protruding bump will thus tend to grow more quickly than the surrounding crystal
area because it will be the nearest surface to the most vapour molecules.⁸ There are six bumps (corners) on a hexagonal
prism, so growth will occur most rapidly from these, producing the observed six-armed pattern.
Snowflakes have a reducible structure because you can produce them with a little bit of vapour or with a lot. They can be
large or small. Any one water molecule is as good as any other water molecule in forming them. Nothing goes wrong if you
add or subtract one or more water molecules from them. You can build them up one step at a time, using any and every
available water molecule. The patterns can thus all be explained by (reduced to) the physics and chemistry of water and the
atmospheric conditions.
Figure 3. Common irreducibly structured machine components: lever (A), cogwheel (B) and coiled spring (C). All are made
of metal, but their detailed structure and function cannot be reduced to (explained by) the properties of the metal they are
made of.
To now understand irreducible structure, consider a silver coin. Silver is found naturally in copper, lead, zinc, nickel and gold
ores and, rarely, in an almost pure form called native silver.
Figure 2 shows the back and front of two
vintage silver coins, together with a nugget of
the rare native form of silver. The crystal
structure of solid silver consists of closely
packed cubes. The main body of the native
silver nugget has the familiar lustre of the pure
metal, and it has taken on a shape that
reflects the available space when it was
precipitated from groundwater solution. The
black encrustations are very fine crystals of
silver that continued to grow when the rate of
deposition diminished after the main load of silver had been deposited out of solution.
Unlike the case of the beautifully structured snowflakes, there is no natural process here that could turn the closely packed
cubes of solid silver into round, flat discs with images of men, animals and writing on them. Adding more or less silver
cannot produce the roundness, flatness and image-bearing properties of the coins, and looking for special environmental
conditions would be futile because we recognize that the patterns are man-made. The coin structure is therefore irreducible
to the physics and chemistry of silver, and was clearly imposed upon the silver by some intelligent external agent (in this
case, humans).
Whatever the explanation, however, the irreducibility of the coin structure to the properties of its component silver
constitutes what I shall call a Polanyi impossibility. That is, Polanyi identified this kind of irreducibility as a naturalistic
impossibility, and argued that it should be recognized as such by the scientific community, so I am simply attaching his
name to the principle.
Polanyi pointed to the machine-like structures that exist in living organisms. Figure 3 gives three examples of common machine
components: a lever, a cogwheel and a coiled spring. Just as the structure and function of these common machine
components cannot be explained in terms of the metal they are made of, so the structure and function of the parallel
components in life cannot be reduced to the properties of the carbon, hydrogen, oxygen, nitrogen, phosphorus, sulphur and
trace elements that they are made of. There are endless examples of such irreducible structures in living systems, but they
all work under a unifying principle called autopoiesis.
Autopoiesis defined
Autopoiesis literally means self-making (from the Greek auto for self, and the verb poieō, meaning 'I make' or 'I do') and it
refers to the unique ability of a living organism to continually repair and maintain itself, ultimately to the point of reproducing
itself, using energy and raw materials from its environment. In contrast, an allopoietic system (from the Greek allo for other),
such as a car factory, uses energy and raw materials to produce an organized structure (a car) which is something other
than itself (a factory).⁹
Autopoiesis is a unique and amazing property of life; there is nothing else like it in the known universe. It is made up of a
hierarchy of irreducibly structured levels. These include: (i) components with perfectly pure composition, (ii) components
with highly specific structure, (iii) components that are functionally integrated, (iv) comprehensively regulated
information-driven processes, and (v) inversely-causal meta-informational strategies for individual and species survival
(these terms will be explained shortly). Each level is built upon, but cannot be explained in terms of, the level below it. And
between the base level (perfectly pure composition) and the natural environment, there is an unbridgeable abyss. The
enormously complex details are still beyond our current knowledge and understanding, but I will illustrate the main points
using an analogy with a vacuum cleaner.
A vacuum cleaner analogy
My mother was excited when my father bought our first electric vacuum cleaner in 1953. It consisted of a motor and
housing, exhaust fan, dust bag, and a flexible hose with various end pieces. Our current machine uses a cyclone filter and
follows me around on two wheels rather than on sliders as did my mother's original one. My next version might be the small
robotic machine that runs around the room all by itself until its battery runs out. If I could afford it, perhaps I might buy the
more expensive version that automatically senses battery run-down and returns to its induction housing for battery recharge.
Notice the hierarchy of control systems here. The original machine required an operator and some physical effort to pull the
machine in the required direction. The transition to two wheels allows the machine to trail behind the operator with little
effort, and the cyclone filter eliminates the messy dust bag. The next transition to on-board robotic control requires no effort
at all by the operator, except to initiate the action to begin with and to take the machine back to the power source for
recharge when it has run down. And the next transition to automatic sensing of power run-down and return-to-base control
mechanism requires no effort at all by the operator once the initial program is set up to tell the machine when to do its work.
If we now continue this analogy to reach the living condition of autopoiesis, the next step would be to install an on-board
power generation system that could use various organic, chemical or light sources from the environment as raw material.
Next, install a sensory and information processing system that could determine the state of both the external and internal
environments (the dirtiness of the floor and the condition of the vacuum cleaner) and make decisions about where to expend
effort and how to avoid hazards, but within the operating range of the available resources. Then, finally, the pièce de
résistance: install a meta-information (information about information) facility with the ability to automatically maintain and
repair the life system, including the almost miraculous ability to reproduce itself: autopoiesis.
Notice that each level of structure within the autopoietic hierarchy depends upon the level below it, but it cannot be
explained in terms of that lower level. For example, the transition from out-sourced to on-board power generation depends
upon there being an electric motor to run. An electric vacuum cleaner could sit in the cupboard forever without being able to
rid itself of its dependence upon an outside source of power; that independence must be imposed from the level above, for
it cannot come from the level below. Likewise, autopoiesis is useless if there is no vacuum cleaner to repair, maintain and
reproduce. A vacuum cleaner without autopoietic capability could sit in the cupboard forever without ever attaining to the
autopoietic stage; that capability must be imposed from the level above, as it cannot come from the level below.
The autopoietic hierarchy is therefore structured in such a way that any kind of naturalistic transition from one level to a
higher level would constitute a Polanyi impossibility. That is, the structure at level i is dependent upon the structure at level
i-1 but cannot be explained by the structure at that level. So the structure at level i must have been imposed from level i or
above.
The naturalistic abyss
Most origin-of-life researchers agree (at least in the more revealing parts of their writings)¹⁰ that there is no naturalistic
experimental evidence directly demonstrating a pathway from non-life to life. They continue their research, however,
believing that it is just a matter of time before we discover that pathway. But by using the vacuum cleaner analogy, we can
give a solid demonstration that the problem is a Polanyi impossibility right at the foundation: life is separated from non-life
by an unbridgeable abyss.
Dirty, mass-action environmental chemistry
The simple structure of the early vacuum cleaner is not simple at all. It is made of high-purity materials (aluminium, plastic,
fabric, copper wire, steel plates etc) that are specifically structured for the job in hand and functionally integrated to achieve
the designed task of sucking up dirt from the floor. Surprisingly, the dirt that it sucks up contains largely the same materials
that the vacuum cleaner itself is made of: aluminium, iron and copper in the mineral grains of dirt, fabric fibres in the dust,
and organic compounds in the varied debris of everyday home life. However, it is the difference in form and function of these
otherwise similar materials that distinguishes the vacuum cleaner from the dirt on the floor. In the same way, it is the
amazing form and function of life in a cell that separates it from the non-life in its environment.
Naturalistic chemistry is invariably dirty chemistry, while life uses only perfectly-pure chemistry. I have chosen the words
'dirty chemistry' not in order to denigrate origin-of-life research, but because it is the term used by Nobel Prize winner
Professor Christian de Duve, a leading atheist researcher in this field.¹¹ Raw materials in the environment, such as air, water
and soil, are invariably mixtures of many different chemicals. In dirty chemistry experiments, contaminants are always
present and cause annoying side reactions that spoil the hoped-for outcomes. As a result, researchers often tend to fudge
the outcome by using artificially purified reagents. But even when given pure reagents to start with, naturalistic experiments
typically produce what a recent evolutionist reviewer variously called 'muck', 'goo' and 'gunk'¹², which is actually toxic
sludge. Even our best industrial chemical processes can only produce reagent purities in the order of 99.99%. To produce
100% purity in the laboratory requires very highly specialized equipment that can sort out single molecules from one
another.
Another crucial difference between environmental chemistry and life is that chemical reactions in a test tube follow the Law
of Mass Action.¹³ Large numbers of molecules are involved, and the rate of a reaction, together with its final outcome, can
be predicted by assuming that each molecule behaves independently and each of the reactants has the same probability of
interacting. In contrast, cells metabolize their reactants with single-molecule precision, and they control the rate and
outcome of reactions, using enzymes and nano-scale-structured pathways, so that the result of a biochemical reaction can
be totally different to that predicted by the Law of Mass Action.
The autopoietic hierarchy
Perfectly-pure, single-molecule-specific bio-chemistry
The vacuum cleaner analogy breaks down before we get anywhere near life because the chemical composition of its
components is nowhere near pure enough for life. The materials suitable for use in a vacuum cleaner can tolerate several
percent of impurities and still produce adequate performance, but nothing less than 100% purity will work in the molecular
machinery of the cell.
One of the most famous examples is homochirality. Many carbon-based molecules have a property called chirality: they can
exist in two forms that are mirror images of each other (like our left and right hands), called enantiomers. Living organisms
generally use only one of these enantiomers (e.g. left-handed amino acids and right-handed sugars). In contrast, naturalistic
experiments that produce amino acids and sugars always produce an approximately 50:50 mixture (called a racemic
mixture) of the left- and right-handed forms. The horrors of the thalidomide drug disaster resulted from this problem of
chirality. One enantiomer of the drug had therapeutic benefits for pregnant women, but the other caused shocking fetal
abnormalities.
The property of life that allows it to create such perfectly pure chemical components is its ability to manipulate single
molecules one at a time. The assembly of proteins in ribosomes illustrates this single-molecule precision. The recipe for the
protein structure is coded onto the DNA molecule. This is transcribed onto a messenger-RNA molecule, which then takes it
to a ribosome, where a procession of transfer-RNA molecules each bring a single molecule of the next required amino acid
for the ribosome to add on to the growing chain. The protein is built up one molecule at a time, and so the composition can
be monitored and corrected if even a single error is made.
Specially structured molecules
Life contains such a vast new world of molecular amazement that no one has yet plumbed the depths of it. We cannot hope
to cover even a fraction of its wonders in a short article, so I will choose just one example. Proteins consist of long chains of
amino acids linked together. There are 20 amino acids coded for in DNA, and proteins commonly contain hundreds or even
thousands of amino acids. Cyclin B is an average-sized protein, with 433 amino acids. It belongs to the hedgehog group of signalling pathways which are essential for development in all metazoans. Now there are 20^433 (20 multiplied by itself 433 times) ≈ 10^563 (10 multiplied by itself 563 times) possible proteins that could be made from an arbitrary arrangement of 20 different kinds of amino acids in a chain of 433 units. The human body, the most complex known organism, contains somewhere between 10^5 (= 100,000) and 10^6 (= 1,000,000) different proteins. So the probability (p) that an average-sized biologically useful protein could arise by a chance combination of 20 different amino acids is about p = 10^6/10^563 = 1/10^557. And this assumes that only L-amino acids are being used, i.e. perfect enantiomer purity.14 For comparison, the chance of winning the lottery is about 1/10^6 per trial, and the chance of finding a needle in a haystack is about 1/10^11 per trial. Even the whole universe contains only about 10^80 atoms, so there are not even enough atoms to ensure the chance assembly of even a single average-sized biologically useful molecule. Out of all possible proteins, those we see in life are very highly specialized: they can do things that are naturally not possible. For example, some enzymes can do in one second what natural processes would take a billion years to do.15 They are just like the needle in the haystack: out of all the possible arrangements of iron alloy (steel) particles, only those with a long narrow shape, pointed at one end and with an eye-loop at the other end, will function as a needle. This structure does not arise from the properties of steel, but is imposed from outside.
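As a quick check of this arithmetic (my own sketch, not part of the original article), the exponents can be reproduced with logarithms, since 20^433 is far too large to write out:

    import math

    sites = 433                               # amino acids in cyclin B
    log10_sequences = sites * math.log10(20)  # log10 of 20^433
    print(f"20^433 is about 10^{log10_sequences:.0f}")   # -> 10^563

    useful = 1e6                              # upper bound on distinct human proteins
    log10_p = math.log10(useful) - log10_sequences       # p = 10^6 / 10^563
    print(f"p is about 10^{log10_p:.0f}")                # -> 10^-557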
Water, water, everywhere
There is an amazing paradox at the heart of biology. Water is essential to life,16 but also toxic: it splits up polymers by a process called hydrolysis, and that is why we use it to wash with. Hydrolysis is a constant hazard to origin-of-life experiments, but it is never a problem in cells, even though cells are mostly water (typically 60-90%). In fact, special enzymes called hydrolases are required in order to get hydrolysis to occur at all in a cell.17 Why the difference? Water in a test tube is free and active, but water in cells is highly structured, via a process called hydrogen bonding, and this water structure is comprehensively integrated with both the structure and function of all the cell's macromolecules:

'The hydrogen-bonding properties of water are crucial to [its] versatility, as they allow water to execute an intricate three-dimensional ballet, exchanging partners while retaining complex order and enduring effects. Water can generate small active clusters and macroscopic assemblies, which can both transmit and receive information on different scales.'18
Water should actually be first on the list of molecules that need to be specially configured for life to function. Both the vast
variety of specially structured macromolecules and their complementary hydrogen-bonded water structures are required at
the same time. No origin-of-life experiment has ever addressed this problem.
Functionally integrated molecular machines
Figure 4. ATP synthase, a proton-powered molecular motor. Protons (+) from inside the cell (below) move through the stator mechanism embedded in the cell membrane and turn the rotor (top part), which adds inorganic phosphate (iP) to ADP to convert it to the high-energy state ATP.

It is not enough to have specifically structured, ultra-pure molecules; they must also be integrated together into useful machinery. A can of stewed fruit is full of chemically pure and biologically useful molecules, but it will never produce a living organism19 because the molecules have been disorganized in the cooking process. Cells contain an enormous array of useful molecular machinery. The average machine in a yeast cell contains 5 component proteins,20 and the most complex, the spliceosome, which orchestrates the reading of separated sections of genes, consists of about 300 proteins and several nucleic acids.21 One of the more spectacular machines is the tiny proton-powered motor that produces the universal energy molecule ATP (adenosine tri-phosphate), illustrated in Figure 4. When the motor spins one way, it takes energy from digested food and converts it into the high-energy ATP, and when the motor spins the other way, it breaks down the ATP in such a way that its energy is available for use by other metabolic processes.22
Comprehensively regulated, information-driven metabolic functions
It is still not enough to have spectacular molecular machinery: the various machines must be linked up into metabolic
pathways and cycles that work towards an overall purpose. What purpose? This question is potentially far deeper than
science can take us, but science certainly can ascertain that the immediate practical purpose of the amazing array of life
structures is the survival of the individual and perpetuation of its species. 23 Although we are still unravelling the way cells
work, a good idea of the multiplicity of metabolic pathways and cycles can be found in the BioCyc collection. The majority of
organisms so far examined, from microbes to humans, have between 1,000 and 10,000 different metabolic
pathways.24 Nothing ever happens on its own in a cell: something else always causes it, links with it, or benefits or is affected by it. And all of these links are multi-step processes. All of these links are also choreographed by information, a phenomenon that never occurs in the natural environment. At the bottom of the information hierarchy is the storage molecule, DNA. The double-helix of DNA is 'just right' for genetic information storage, and this 'just right' structure is beautifully matched by the elegance and efficiency of the code in which the cell's information is written there.25 But it is not enough even to have an elegant 'just right' information storage system; it must also contain information. And not just biologically relevant information, but brilliantly inventive strategies and tactics to guide living things through the extraordinary challenges they face in their seemingly miraculous achievements of metabolism and reproduction. Yet even ingenious strategies and tactics are not enough. Choreography requires an intricate and harmonious regulation of every aspect of life to make sure that the right things happen at the right time, and in the right sequence; otherwise chaos and death soon follow.

Recent discoveries show that biochemical molecules are constantly moving, and many of their amazing achievements are the result of choreographing all this constant and complex movement to accomplish things that static molecules could never achieve. Yet there is no spacious dance floor on which to choreograph the intense and lightning-fast (up to a million events per second for a single reaction26) activity of metabolism. A cell is more like a crowded dressing room than a dance floor, and in a show with a cast of millions!
Inversely causal meta-information
The Law of Cause and Effect is one of the most fundamental in all of science. Every scientific experiment is based upon the
assumption that the end result of the experiment will be caused by something that happens during the experiment. If the
experimenter is clever enough, then he/she might be able to identify that cause and describe how it produced that particular
result or effect. Causality always happens in a very specific order: the cause always comes before the effect.27 That is, event A must always precede event B if A is to be considered as a possible cause of B. If we happened to observe that A occurred after B, then this would rule out A as a possible cause of B. In living systems, however, we see the universal occurrence of inverse causality. That is, an event A is the cause of event B, but A exists or occurs after B. It is easier to
understand the biological situation if we refer to examples from human affairs. In economics, for example, it occurs when
behaviour now, such as an investment decision, is influenced by some future event, such as an anticipated profit or loss. In
psychology, a condition that exists now, such as anxiety or paranoia, may be caused by some anticipated future event, such as harm to one's person. In the field of occupational health and safety, workplace and environmental hazards can exert direct toxic effects upon workers (normal causality), but the anticipation or fear of potential future harm can also have an independently toxic effect (inverse causality). Darwinian philosopher of science Michael Ruse recently noted that inverse causality is a universal feature of life,28 and his example was that stegosaur plates begin forming in the embryo but only have a function in the adult, supposedly for temperature control. However, most biologists avoid admitting such things because it suggests that life might have purpose (a future goal), and this is strictly forbidden to materialists.

The most important example of inverse causality in living organisms is, of course, autopoiesis. We still do not fully understand it, but we do understand the most important aspects. Fundamentally, it is meta-information: information about information. It is the information that you need to have in order to keep the information you want to have, to stay alive, and to ensure the survival of your descendants and the perpetuation of your species.

This last statement is the crux of this whole paper, so to illustrate its validity let's go back to the vacuum cleaner analogy. Let's imagine that one lineage of vacuum cleaners managed to reach the robotic, energy-independent stage but lacked autopoiesis, while a second made it all the way to autopoiesis. What is the difference between these vacuum cleaners? Both will function very well for a time. But as the
Second Law of Thermodynamics begins to take its toll, components will begin to wear out, vibrations will loosen
connections, dust will gather and short circuit the electronics, blockages in the suction passage will reduce cleaning
efficiency, wheel axles will go rusty and make movement difficult, and so on. The former will eventually die and leave no
descendants. The latter will repair itself, keep its components running smoothly and reproduce itself to ensure the
perpetuation of its species. But what happens if the environment changes and endangers the often-delicate metabolic cycles that real organisms depend upon? Differential reproduction is the solution. Evolutionists from Darwin to Dawkins have taken this amazing ability for granted, but it cannot be overlooked. There are elaborate systems in place (for example, the diploid to haploid transition in meiosis, the often extraordinary embellishments and rituals of sexual encounters, and the huge number of permutations and combinations provided for in recombination mechanisms) to provide offspring with variations from their parents that might prove of survival value. To complement these potentially dangerous deviations from the tried-and-true, there are also firm conservation measures in place to protect the essential processes of life (e.g. the ability to read the DNA code and to translate it into metabolic action). None of this should ever be taken for granted.

In summary, autopoiesis is the information, and associated abilities, that you need to have (repair, maintenance and differential reproduction) in order to keep the information that you want to have (e.g. vacuum cleaner functionality) alive and in good condition, to ensure both your survival and that of your descendants. In a parallel way, my humanity is what I personally value, so my autopoietic capability is the repair, maintenance and differential reproductive capacity that I have to maintain my humanity and to share it with my descendants. The egg and sperm that produced me knew nothing of this, but the information was encoded there and only reached fruition six decades later as I sit here writing this: the inverse causality of autopoiesis.
Summary
There are three lines of reasoning pointing to the conclusion that autopoiesis provides a compelling case for the intelligent
design of life.
If life began in some stepwise manner from a non-autopoietic beginning, then autopoiesis will be the end product of some
long and blind process of accidents and natural selection. Such a result would mean that autopoiesis is not essential to life,
so some organisms should exist that never attained it, and some organisms should have lost it by natural selection because
they do not need it. However, autopoiesis is universal in all forms of life, so it must be essential. The argument from the
Second Law of Thermodynamics as applied to the vacuum cleaner analogy also points to the same conclusion. Both
arguments demonstrate that autopoiesis is required at the beginning for life to even exist and perpetuate itself, and could not
have turned up at the end of some long naturalistic process. This conclusion is consistent with the experimental finding that
origin-of-life projects which begin without autopoiesis as a pre-requisite have proved universally futile in achieving even the
first step towards life.
Each level of the autopoietic hierarchy is dependent upon the one below it, but is causally separated from it by a Polanyi
impossibility. Autopoiesis therefore cannot be reduced to any sequence of naturalistic causes.
There is an unbridgeable abyss below the autopoietic hierarchy, between the dirty, mass-action chemistry of the natural
environment and the perfect purity, the single-molecule precision, the structural specificity, and the inversely causal
integration, regulation, repair, maintenance and differential reproduction of life.
DNA INFORMATION
Information Theorypart 1: overview of key ideas
by Royal Truman
The origin of information in nature cannot be explained if matter and energy are all there is. But the many, and often
contradictory, meanings of information confound a clear analysis of why this is so. In this, the first of a four-part series, the
key views about information theory by leading thinkers in the creation/evolution controversy are presented. In part 2,
attention is drawn to various difficulties in the existing paradigms in use. Part 3 introduces the notion of replacing Information
by Coded Information System (CIS) to resolve many difficulties. Part 4 completes the theoretical backbone of CIS theory,
showing how various conceptual frameworks can be integrated into this comprehensive model. The intention is to focus the
discussion in the future on whether CISs can arise naturalistically.
Intelligent beings design tools to help solve problems. These
tools can be physical or intellectual, and can be used and
reused to solve classes of problems.1 But creating separate
tools for each kind of problem is usually inefficient. In nature
many problems involving growth, reproduction and adjustment to changes are solved with information-based
tools. These share the remarkable property that an almost
endless range of intentions can be communicated via coded
messages using the same sending, transmission and
receiving equipment.

All known life depends on information. But what is information, and can it arise naturally? Naturalists deny the existence of anything beyond matter, energy, laws of nature and chance. But then where do will, choice, and information come from? In the creation science and intelligent design literature we find inconsistent or imprecise understandings about what is meant by information. We sometimes read that nature cannot create information, but in other places that some, though not enough, information could be produced to explain the large amount observed today in nature.

Suppose a species of bacteria can produce five similar variants of a protein which don't work very well for some function, and another otherwise identical species produces only a single, highly tuned version. Which has more information? Consider a species of birds with white and grey members. A catastrophe occurs and the few survivors only produce white offspring from now on. Has information increased or decreased? What about enzymes? Do they possess more information when able to act on several different substrates or when specific to only one?

The influence of Shannon's Theory of Information

Most of the experts debating the origin of information rely on the mathematical model of communication developed by the late Claude Shannon, with its quantitative merits.2-4 Shannon's fame began with publication of his master's thesis, which was called 'possibly the most important, and also the most famous, master's thesis of the century.'5

Messages are strings of symbols, like '10011101', 'ACCTGGTCAA', and 'go away'.
All messages are composed of symbols taken from a coding alphabet. The English alphabet uses 26 symbols, the DNA
code, four, and binary codes use two symbols.

In Shannon's model, one bit of information communicates a decision between two equiprobable choices, and in general n bits between 2^n equiprobable choices. Each symbol in an alphabet of s alternatives can provide log2(s) bits of information. Entropy, H, plays an important role in Shannon's work. The entropy of a Source can be calculated by observing the frequency with which each symbol i is generated in messages:

H = -Σi pi log2(pi)   (1)

where pi is the probability of each symbol i appearing in a message and the logarithm is to base 2. For example, if both symbols of an alphabet [0,1] are equiprobable, then eqn. (1) leads to H = -(0.5 log2(0.5) + 0.5 log2(0.5)) = 1 bit per symbol.
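Eqn. (1) is easy to evaluate numerically. The following lines of Python are my own illustration, not part of the original article (the skewed frequencies in the last example are hypothetical):

    import math

    def entropy(probs):
        # Eqn. (1): H = -sum of p_i * log2(p_i) over all symbols of the alphabet.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))                # equiprobable binary alphabet -> 1.0 bit
    print(entropy([0.25, 0.25, 0.25, 0.25]))  # equiprobable 4-symbol alphabet -> 2.0 bits
    print(entropy([0.9, 0.05, 0.04, 0.01]))   # skewed 4-symbol alphabet -> ~0.61 bits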
Maximum entropy results when the symbols are equiprobable, whereas zero entropy indicates that the same message is
always produced. Maximum entropy indicates that we have no way of guessing which sequence of symbols will be
produced. In English, letter frequencies differ, so entropy is not maximum. Even without understanding English, one can
know that many messages will not be produced, such as sentences over a hundred letters long using only the letters z and
q.6 Equations for other kinds of entropy, each with special applications, exist in Shannon's theory: joint entropy, conditional entropy (equivocation), and mutual information.

Shannon devotes much attention to calculating the Channel Capacity. This is the rate at which the initial message can be transmitted error-free in the presence of disturbing noise, and requires knowledge of the probability that each symbol i sent will arrive correctly or be corrupted into another symbol, j. These error-correction measures require special codes to be devised, with additional data accompanying the original message.

There are many applications of Shannon's theories, especially in data storage and transmission.7 A more compact code could exist whenever the entropy of the messages is not maximum, and the theoretical limit to data compression obtained by recoding can be calculated.8 Specifically, if messages based on some alphabet are to be stored or transmitted and the frequency of each symbol is known, then the upper compression limit for a new code can be known.9

Hubert Yockey is a pioneer in applying Shannon's theory to biology.10-14 His work and the mathematical calculations have been discussed in this journal.15
Once it was realized that the genetic code uses four nucleobases, abbreviated A, C, G, and T, in combinations of three to
code for amino acids, the relevance of Information Theory became quickly apparent. Yockey used the mathematical formalism of Shannon's work to evaluate the information of cytochrome c proteins,15 selected due to the large number of sequence examples available. Many proteins are several times larger, or show far less tolerance to variability, as is the case of another example Yockey discusses:

'The pea histone H3 and the chicken histone H3 differ at only three sites, showing almost no change in evolution since the common ancestor. Therefore histones have 122 invariant sites … the information content of an invariant site is 4.139 bits, so the information content of the histones is approximately 4.139 × 122, or 505 bits required just for the invariant sites to determine the histone molecule.'16

Yockey seems to believe the information was front-loaded onto DNA about four billion years ago in some primitive organism. He does not elaborate on this viewpoint; it is deduced primarily from his comments that Shannon's Channel Capacity Theorem ensures transmission of the original message correctly. It is unfortunate that the mysterious allusions17 to the Channel Capacity Theorem were not explained. In one part he wrote:

'But once life has appeared, Shannon's Channel Capacity Theorem (Section 5.3) assures us that genetic messages will not fade away and can indeed survive for 3.85 billion years without assistance from an Intelligent Designer.'18
This is nonsense. The Channel Capacity Theorem only claims that it is theoretically possible to devise a code with enough redundancy and error-correction to transmit a message error-free. Increased redundancies (themselves subject to corruption) are needed as the demand for accuracy increases, and perfect accuracy is achieved only at the limit of infinitely low effective transmission of the intended error-free message. Whether this is even conceivable using mechanical or biological components is not addressed by the Channel Capacity Theorem. But the key point is that Yockey claims the theorem assures that the evolutionary message will not fade away. He confuses a mathematical 'in principle' notion with an implemented fact. He fails to show what the necessary error-correcting coding measures would be and that they have been actually implemented.
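The redundancy/accuracy trade-off is easy to demonstrate. The following sketch is my own illustration, not Yockey's or Shannon's (the 10% error rate is an arbitrary choice); it sends a message through a noisy binary channel using a simple repetition code with majority-vote decoding:

    import random

    def transmit(bits, p_err):
        # Noisy binary channel: each bit is flipped with probability p_err.
        return [b ^ (random.random() < p_err) for b in bits]

    def encode(bits, n):
        return [b for b in bits for _ in range(n)]     # repeat each bit n times

    def decode(bits, n):
        # Majority vote over each block of n repeated bits.
        return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

    random.seed(1)
    message = [random.randint(0, 1) for _ in range(10_000)]
    for n in (1, 3, 9):
        received = decode(transmit(encode(message, n), 0.1), n)
        errors = sum(a != b for a, b in zip(message, received))
        print(f"repeat x{n}: {errors} residual errors in {len(message)} bits")

More repetition leaves fewer residual errors (roughly a thousand, then a few hundred, then a handful here), but for this code perfect accuracy is approached only as the effective transmission rate 1/n goes to zero, and the encoding and decoding machinery must already exist, which is exactly the point made above.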
In his latest edition, Yockey (or possibly an editor) was very hostile to the notion of an intelligent designer. Tragically, his comments on topics like Behe's irreducible complexity suggest he does not understand what the term means. As one example, we read:

'mRNA acts like the reading head on a Turing machine that moves along the DNA sequence to read off the genetic message to the proteasome. The fact the sequence has been read shows that it is not irreducibly complex nor random. By the same token, Behe's mouse trap is not irreducibly complex or random.'19

The word information is used in many ways, which complicates the discussion as to its origin. Yockey's work is often difficult to follow.
Calculations which are easily understood and can be performed effortlessly with a spreadsheet or computer program are needlessly complicated by deriving poorly explained alternative formulations.20 Very problematic in his work is the difficulty in understanding his multiple uses of the word information. For example, the entropy of iso-1-cytochrome c sequences is called 'information content'.21 Then presumably the greater the randomness of these sequences, the higher the entropy and therefore the higher the information content, right? That makes no sense, and is the wrong conclusion. But why, since higher entropy of the Source (DNA) according to Shannon's theory always indicates more information? I believe this is the source of much confusion in the creationist and Intelligent Design literature, which criticizes Shannon's approach as supposedly implying that greater randomness always means more information.

Kirk Durston, a member of the Intelligent Design
community, improves considerably on Yockey's pioneering efforts. He correctly identifies the difference in entropy of all messages generated by a Source, H0, and the entropy of those messages which provide a particular function, Hf, as the measure of interest. He calls this difference, H0 - Hf, functional information.22 This difference in entropies is actually used by all those applying Shannon's work to biological sequences, whether evolutionists or not, although this fact is not immediately apparent when reading their papers. Entropies are defined by eqn. (1), but Yockey's approach has a conceptual flaw (and implied assumption) which destroys his justification for using Shannon's Information Theory with protein sequence analysis.23 Truman already pointed out that Yockey's quantitative results are obtained, within little more than a rounding-off error, with the same data using much simpler standard probability calculations.24

The sum of the entropy contributions at each position of a protein leads to Hf. To calculate these site entropies, Durston aligned all known primary sequences of a protein using the ClustalX program, and determined the proportion of each amino acid at each site in the dataset, using eqn. (1). Large datasets were collected for 35 protein families and the bits of functional information, or Fits, were calculated. Twelve examples were found having over 500 Fits, i.e. a proportion of 2^-500 ≈ 3 × 10^-151 among random sequences. The highest value reported was for protein Flu PB2, with 2416 Fits.

Durston's calculations have one minor and one major weakness. To calculate H0, he assumed amino acids are equiprobable, which is not true. This effect is not very significant, but indeed H0 is a little less random than he assumed. The other assumption is that of mutational context independence: that all mutations which are tolerated individually are also acceptable concurrently. This is not the case, as Durston knows, and the result is that the amount of entropy in Hf is much lower than he calculated.15,25,26 The conclusion is that the protein families actually contain far more Fits of functional information, and represent a much lower subset among random sequences. This effect is counteracted somewhat by the fact that not all organisms which ever lived are represented in the dataset.
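The site-by-site calculation behind H0 - Hf can be written in a few lines. This sketch is my own illustration with a made-up toy alignment; Durston's actual datasets contained thousands of aligned family members:

    import math

    # Toy alignment of functional sequences (rows = sequences, columns = sites).
    alignment = ["ACDA", "ACDG", "ACEA", "ACDA", "ACEG"]
    AMINO_ACIDS = 20   # null model: all 20 residues equiprobable at every site

    def site_entropy(column):
        # Eqn. (1) applied to one alignment column.
        n = len(column)
        return -sum((column.count(aa) / n) * math.log2(column.count(aa) / n)
                    for aa in set(column))

    h0 = len(alignment[0]) * math.log2(AMINO_ACIDS)    # entropy of random sequences
    hf = sum(site_entropy([seq[i] for seq in alignment])
             for i in range(len(alignment[0])))        # entropy of the functional set
    print(f"H0 = {h0:.2f} bits, Hf = {hf:.2f} bits, Fits = {h0 - hf:.2f}")

On this null model an invariant site contributes log2(20) ≈ 4.32 bits; Yockey's slightly lower figure of 4.139 bits per invariant site apparently reflects a non-uniform background distribution of amino acids.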
Bio-physicist Lee Spetner, Ph.D. from MIT, is a leading information theoretician who wrote the book Not by Chance.27 He is a very lucid participant in Internet debates on evolution and information theory, and is adamant that evolutionary processes quantitatively won't increase information. In his book, he wrote:

'I don't say it's impossible for a mutation to add a little information. It's just highly improbable on theoretical grounds. But in all the reading I've done in the life-sciences literature, I've never found a mutation that added information. The NDT says not only that such mutations must occur, they must also be probable enough for a long sequence of them to lead to macroevolution.'28

Within Shannon's framework, it is correct that a random mutation could increase information content. However, one must not automatically conflate more information content with 'good' or 'useful'.29,30

Although Spetner says information could in principle be created or increased, Dr Werner Gitt, retired Director and Professor at the German Federal Institute of Physics and Technology, denies this:

'Theorem 23: There is no known natural law through which matter can give rise to information, neither is any physical process or material phenomenon known that can do this.'31

In his latest book, Gitt refines and explains his conclusions from a lifetime of research on information and its inseparable reliance on an intelligent source.32 There are various manifestations of information: for example, the spider's web; the diffraction pattern of butterfly wings; the development of embryos; and an organ-playing robot.33 He introduces the term Universal Information34 to minimize confusion with other usages of the word information:

'Universal Information (UI) is a symbolically encoded, abstractly represented message conveying the expected action(s) and the intended purpose(s). In this context, message is meant to include instructions for carrying out a specific task or eliciting a specific response [emphasis added].'35

Information must be encoded on a series of symbols which satisfy three Necessary Conditions (NC). These are conclusions, based on observation:
NC1: A set of abstract symbols is required.
NC2: The sequence of abstract symbols must be irregular.
NC3: The symbols must be presented in a recognizable form, such as rows, columns, circles, spirals and so on.
Gitt also concludes that UI is embedded in a five-level hierarchy with each level building upon the lower one:
statistics (signal, number of symbols)
cosyntics (set of symbols, grammar)
semantics (meaning)
pragmatics (action)
apobetics (purpose, result).
Gitt believes information is guided by immutable Scientific Laws of Information (SLIs).36,37 Unless shown to be wrong, they deny a naturalistic origin for information. They are:38
SLI-1: Information is a non-material entity.
SLI-2: A material entity cannot create a non-material entity.
SLI-3: UI cannot be created by purely random processes.
SLI-4: UI can only be created by an intelligent sender.
SLI-4a: A code system requires an intelligent sender.
SLI-4b: No new UI without an intelligent sender.
SLI-4c: All senders that create UI have a non-material component.
SLI-4d: Every UI transmission chain can be traced back to an original intelligent sender.
SLI-4e: Allocating meanings to, and determining meanings from, sequences of symbols are intellectual processes.
SLI-5: The pragmatic attribute of UI requires a machine.
SLI-5a: UI and creative power are required for the design and construction of all machines.

SLI-5b: A functioning machine means that UI is affecting the material domain.


SLI-5c: Machines operate exclusively within the physical chemical laws of matter.
SLI-5d: Machines cause matter to function in specific ways.
SLI-6: Existing UI is never increased over time by purely physical, chemical processes.
These laws are inconsistent with the assumption stated by Nobel Prize winner and origin-of-life specialist Manfred Eigen: 'The logic of life has its origin in physics and chemistry.'39 The issue of information, the basis of genetics and morphology, has simply been ignored. On the other hand, Norbert Wiener, a leading pioneer in information theory, understood clearly that, 'Information is information, neither matter nor energy. Any materialism that disregards this will not live to see another day.'40 It is apparent that Gitt views Shannon's model as inadequate to handle most aspects of information, and that he means something entirely different by the word information.

Arch-atheist Richard Dawkins reveals a Shannon orientation to what information means when he wrote, 'Information, in the technical sense, is surprise value, measured as the inverse of expected probability.'41 He adds, 'It is a theory which has long held a fascination for me, and I have used it in several of my research papers over the years.' And more specifically:

'The technical definition of information was introduced by the American engineer Claude Shannon in 1948. An employee of the Bell Telephone Company, Shannon was concerned to measure information as an economic commodity.'42

'DNA carries information in a very computer-like way, and we can measure the genome's capacity in bits too, if we wish. DNA doesn't use a binary code, but a quaternary one. Whereas the unit of information in the computer is a 1 or a 0, the unit in DNA can be T, A, C or G. If I tell you that a particular location in a DNA sequence is a T, how much information is conveyed from me to you? Begin by measuring the prior uncertainty. How many possibilities are open before the message T arrives? Four. How many possibilities remain after it has arrived? One. So you might think the information transferred is four bits, but actually it is two.'40

In articles and discussions among non-specialists, questions are raised such as 'Where does the information come from to create wings?' There is an intuition among most of us that adding biological novelty requires information, and more features implies more information. I suspect this is what lies behind claims that evolutionary processes cannot create information, meaning complex new biological features. Even Dawkins subscribes to this intuitive notion of information:

'Imagine writing a book describing the lobster. Now write another book describing the millipede down to the same level of detail. Divide the word-count in one book by the word-count in the other, and you have an approximate estimate of the relative information content of lobster and millipede.'40

Stephen C. Meyer, director of the Discovery Institute's Center for Science and Culture and active member of the
Intelligent Design movement, relies on Shannon's theory for his critiques of naturalism.43,44 He recognizes that some sequences of characters serve a deliberate and useful purpose. Meyer says the messages with this property exhibit 'specified complexity', or 'specified information'.45 Shannon's Theory of Communication itself has no need to address the question of usefulness, value, or meaning of transmitted messages; in fact, Shannon later avoided the word information. His concern was how to transmit messages error-free. But Meyer points out that molecular biologists beginning with Francis Crick have equated biological information not only with improbability (or complexity), but also with specificity, where 'specificity' or 'specified' has meant 'necessary to function'.46 I believe Meyer's definition of information corresponds to Durston's Functional Information.
Figure 1. Shannon's schematic diagram of a general communication system.2
William Dembski, another prominent figure in the Intelligent Design movement, is a major leader in the analysis of the properties and calculations of information, and will be referred to in the next parts of this series. He has not reported any analysis of his own on protein or gene sequences, but also accepts that H0 - Hf is the relevant measure from Shannon's work to quantify information. In part 2 of this series I'll show that many things are implied in Shannon's theory that indicate an underlying active intelligence.
Thomas Schneider is a Research Biologist at the National Institutes of Health. His Ph.D. thesis in 1984 was on applying Shannon's Information Theory to DNA and RNA binding sites; he has continued this work ever since and published extensively.17,47
Figure 2. The transmission of the genetic message from the DNA tape to the protein tape, according to Yockey.17
Senders and receivers in information theories
There is common agreement that a sender initiates transmission of a coded message which is received and decoded by a receiver. Figure 1 shows how Shannon depicted this, and figure 2 shows Yockey's version.2,17

Figure 3. A comprehensive diagram of the five levels of Universal Information, according to Gitt.32

A fundamental difference in Gitt's model is the statement that all levels of information, including the apobetics (intended purpose), are present in the Sender (figure 3). All other models treat the Sender as merely whatever releases the coded message to a receiver. In Shannon's case, the Sender is the mindless equipment which initiates transmission to a channel. For Yockey the Sender is DNA, although he considers the ultimate origin of the DNA sequences an open question. Gitt distinguishes between the original and the intermediate Sender.48

Humans intuitively develop coded information systems

Humans interact with coded messages with such phenomenal skill that most don't even notice what is going on. We converse verbally with ease. Engineers effortlessly devise various designs: sometimes many copies of machines are built and equipped with message-based processing resources (operating systems, drivers, microchips, etc.). Alternatively, the hardware alone could be distributed and all the processing power provided centrally (such as the dumb terminals used before personal computers). To illustrate, intellectual tools such as reading, grammar, and language can be taught to many students in advance. Later it is only necessary to distribute text to the multiple human processors. The strategy of distributing autonomous processing copies is common in nature. Seeds and bacterial colonies already contain preloaded messages, ribosomes already possess engineered processing parts, and so on.
Conclusion
The word information is used in many ways, which complicates the discussion as to its origin. The analysis shows two families of approaches. One is derived from Shannon's work and the other is Gitt's. To a large extent the former addresses the 'how' question: how to measure and quantify information. The latter deals more with the 'why' issue: why is the information there, and what is it good for? The algorithmic definition of information, developed by Solomonoff and Kolmogorov, with contributions from Chaitin, is currently rarely used in the debate about origins or in general discussions about information. For this reason it was not discussed in this part of the series.
Information Theorypart 2: weaknesses in current conceptual frameworks
by Royal Truman
The origin of information is a problem for the theory of evolution. But the wide, and often inconsistent, use of the word information leads to incompatible statements among Intelligent Design and creation science advocates, and this hinders fruitful discussion. Most information theoreticians base their work on Shannon's Information Theory. One conclusion is that the larger genomes of higher organisms require more information, which raises the question of whether this could arise naturalistically. Lee Spetner claims no examples of information-increasing mutations are known, whereas most ID advocates only claim that not enough bits of information could have arisen during evolutionary timescales. It has also been proposed that nature reflects the intention of the designer, and that therefore all lifeforms might have the same information content. Gitt claims information can't be measured. The underlying concepts of these theoreticians were discussed in part 1 of this series. In part 3 a solution will be offered for the difficulties documented here.
Origin-of-life researcher Dr Küppers defines life as matter plus information.1 Having a clear and common understanding of what we mean by information is necessary for a fruitful discussion about its origin. But in part 1 of this series I pointed out that various researchers of evolutionary and creationist persuasion give the word very different definitions.2 Creation magazine often draws attention to the need for information-adding mutations if evolutionary theory is true, for example:

'Slow and gradual evolutionary modification of these crucial organs of movement would require many information-adding mutations to occur in just the right places at just the right times.'3

What does information mean? Williams introduced many useful thoughts in this journal in a three-part series on biological information.4-6 Consistent with the usage of information above, he points out that 'Creationists commonly challenge evolutionists to explain how vast amounts of new information could be produced that would be required to turn a microbe into a microbiologist.'7 On the same page he adds, 'But the extra wings arose from three mutations that switched off existing developmental processes. No new information was added. Nor was any new capability/functionality achieved.' I understand and agree with the intuition behind this usage of the word information. Nevertheless, even literature sold by Creation Ministries International, such as MIT Ph.D. Lee Spetner's classic Not by Chance!,8 does not use information in the same sense. Spetner is an expert on Shannon's Theory of Communication (information) and is one of the most lucid writers on its application.

Sometimes creationists (e.g. Gitt) state that information cannot, in principle, arise naturally, whereas others (e.g. Stephen Meyer, Lee Spetner) say that not enough could arise for macro-evolutionary purposes.2 The view that not enough time was available to add the necessary information found in genomes (based on one definition of information) becomes clouded when Williams argues that the Darwinian arguments are without force, since it is clear that organisms are designed to vary.9,10 Behind this reasoning lies a different usage of information.
Williams even implies that information cannot be quantified at all:
'… a new, useful enzyme will not contain more information than the original system because the intention remains the same: to produce enzymes with variable amino acid sequences that may help in adapting to new food sources when there is stress due to an energy deficit.'9

I believe that approach should be reconsidered, especially if intention is defined in such generic, broad terms. Suppose the intention is to help one's daughter get better grades at school. The above suggestion seemingly assigns the same amount of information whether a two-minute verbal explanation is offered, or years of private tutoring over many topics. I believe most of Williams' intuitions are right, but hope the model given in the third part of this series will bring the pieces together in a more unified manner. Williams suggests that other codes are present in the cell environment in addition to the one used by DNA. He once made the significant statement: 'We could, in theory, quantify this information using an algorithmic approach, but for practical purposes it is enough to note that it is enormous and non-coded.'11 I agree that information can also be non-coded, but it is not apparent how an algorithmic measure of information
could be used, a topic to which Bartlett has devoted effort.12 The precise definition of information has dramatic consequences on the conclusions reached. Gitt believes information cannot be quantified. Others believe it can, and in exact detail. Weber, Claude Shannon's thesis supervisor, had this to say:

'It seems very reasonable to want to say that three relays could handle three times as much information as one. And this indeed is the way it works out if one uses the logarithmic definition of information.'13

When asked by creationists if he knew of any biological process that could increase the information content of a genome, Dawkins could not answer the question.6,14 He subscribes to Shannon's definition of information and understands the issue at stake, writing later:

'Therefore the creationist challenge with which we began is tantamount to the standard challenge to explain how biological complexity can evolve from simpler antecedents.'15

Several years ago Answers in Genesis sponsored a workshop on the topic of information. Werner Gitt proposed we try to find a single formulation everyone could work with. This challenge remains remarkably difficult, because people routinely use the word in different manners. In 2009, Gitt offered the following definition for information in this journal,16 which on the advice of Bob Crompton he now calls Universal Information (UI):

'Information is always present when all the following five hierarchical levels are observed in a system: statistics, syntax, semantics, pragmatics and apobetics.'

Let us call this Definition 1. Gitt also states that he now
uses UI and information interchangeably.17 I have collaborated with Werner Gitt during the last 25 years or so on various topics, and the comments which follow are not to be construed as criticism against him or his work.18 At times it seems there is a discrepancy between what he means and how it is expressed on paper.19 Considerable refinement has occurred in his thinking, and I hope to contribute by a critical but constructive attempt at further improvement.

The variety of usages of the word information continues to trap us. When Gitt wrote:

'Theorem 3. Information comprises the nonmaterial foundation for all technological systems and for all works of art'20

and

'Remark R2: Information is the non-material basis for all technological systems,'21

he appears to have switched to another (valid but different) usage of the word information. For example, it is not apparent why valuable technologies like the first axe, shovel, or saw depended on the coded-messages (statistics, syntax) portion of his definition of information, a definition which seems to require all five hierarchical levels to be present. As another example of inconsistent, or at least questionable, usage of the word, we read that 'the information in living things resides on the DNA molecule.'22 The parts of the definition of information which satisfy apobetics (purpose, result) do not reside on DNA. External factors enhance and interplay with what is encrypted and indirectly implied on DNA, but apobetics is not physically present there. To illustrate, neuron connections are made and rearranged as part of dynamic learning, interacting with external cues and input, but the effects are neither present nor implied on DNA.

Another important claim needs to be
evaluated carefully. Gitt often states the premise that

'The storage and transmission of information requires a material medium.'23

It is true that non-material messages can be coded and impregnated on material media. But information can be relayed over various communication channels. Must all of them be material based? If so, then all, or virtually all, the information-processing components in intelligent minds could only be material. Let us see why. Suppose one wishes to translate 'ridiculous' into German. The intention to translate, and the precise semantic concept itself, are surely encoded and stored somewhere. This intention must be transmitted elsewhere to other reasoning facilities, where a search strategy will also be worked out. All of this occurs before the search request is transmitted into the physical brain, but information is already being stored and transmitted in vast amounts. Furthermore, is the mind/brain interface, part of the transmission path, 100% material?23 We begin to see that Gitt's statement seems to imply that wilful decision-making and the guidance of decisions must be material phenomena. Now, as soon as the German word 'lächerlich' is extracted from the biological memory bank,24 it must be transferred from the brain's apparatus into the wilful reasoning equipment and compared to the information which prompted the search. A huge amount of mental processing (i.e. data storage and transmission) will now occur: are the English and German words semantically synonymous for some purpose, or should more words be searched for? 'Irrwitzig' could be a new candidate, but which translation is better? What are all the associations linked to both German words? Should more alternatives be looked up in a dictionary? Finally, decisions will be performed as to what to do with the preferred translation (stored as the top choice and mentally transmitted to processing components where the intended outcome will be planned).25 More to the point, must angels, God, and the soul rely on a material medium to store and transmit information?

This objection is serious, because of the frequent statements that all forms of technology and art are illustrations of information. An artist can wordlessly decide to create an abstract painting. Where are the statistics, syntax, and semantics portions of the definition of UI? If in the mentally coded messages (which we read above must be material), then either UI is material based or all aspects of created art and tool-making (technology) need not be UI. In part 3 of this series I'll offer a simple solution to these issues.

Gitt offers a new definition for UI in his 2011 book Without Excuse:

'Universal Information (UI) is a symbolically encoded, abstractly represented message conveying the expected action(s) and the intended purpose(s). In this context, message is meant to include instructions for carrying out a specific task or eliciting a specific response [emphasis added].'26

Let us call this Definition 2. This resembles one definition of information in Webster's
Dictionary: 'The attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects.'

I don't believe Definition 2 is adequate yet. Only verbal communication seems to be addressed. It implies that the symbolically encoded message itself must convey the expected actions and intended purposes, but in part 3 I'll show that this need not be, and is probably never completely, true. Sometimes the coded instructions themselves do convey portions of the expected actions and purpose. This is observed when the message communicates how machines are to be produced which are able to process remaining portions of the message (like DNA encoding the sequence data for the RNA and proteins needed to produce the decoding ribosome machinery). I would agree that the messages often contribute to, but do not necessarily themselves specify, the purpose. Communicating all the necessary details would be impractical. Consider the virus as an example. The expected action(s) and the intended purpose(s) are not communicated by the content of its genome, nor are the instructions to decode the implied protein (the necessary ribosomes are provided from elsewhere). Some viruses do provide instructions to permit insertion into the host genome and other intermediary outcomes which can contribute to, but not completely specify, the final intended purposes.

Another difficulty with Definition 2 is that it does not distinguish between push and pull forms of coded interactions. The coded message 'What is the density of benzene?' could be sent to a database. This message, a pull against an existing data source, does not convey the expected action(s) or the intended purpose(s).

Of the researchers discussed in part 1 of this series, Gitt's model offers
the broadest framework for a theory of information for the purposes of analyzing the origin of life. He has refined his thoughts continually over the years, but I fear the value will soon plateau without the change of direction we'll see in part 3. One reason is that it won't permit quantitative conclusions. If an evolutionist is convinced that all life on Earth derived from a single ancestor, then ultimately all the countless examples of DNA-based life are only the results of one single original event. Therefore, Gitt's elevation of his theorems to laws will seem weak compared to the powerful empirical and mathematically testable laws of physics, for which so many independent examples can be found and validated quantitatively.27 I'm sure Gitt's Scientific Laws of Information (SLI) will never be disproven, because my analysis (introduced in parts 3 and 4 of this series) of what would be required to create code-based systems makes their existence without intelligent guidance absurdly improbable. Others may find my reasoning in part 3 more persuasive than calling observed code-based principles laws, since they seem to be based on such limited datasets.

The contributions of other information theoreticians are quantifiable. Although limited to the lower portions of Gitt's five hierarchies, I find much merit in them, and their ideas can be included as part of a general-purpose theoretic framework (see part 3). Gitt wrote:

'To date, evolutionary theoreticians have only been able to offer computer simulations that depend upon principles of design and the operation of pre-determined information. These simulations do not correspond to reality because the theoreticians smuggle their own information into the simulations.'

It is not clear, based on his own definition, what was meant by pre-determined information. I will show in part 3 that the path towards pragmatics and apobetics can be aided with resources which do not rely on the lower levels (statistics, syntax, and semantics). The notion of information being smuggled into a simulation is widely discussed in the literature, and very competently by Dembski and Marks,28 who show how the contribution by intelligent intervention can be quantified. Absurdly, Thomas Schneider claims his simulation begins with zero information29 and that

'The ev model quantitatively addresses the question of how life gains information, a valid issue recently raised by creationists (R. Truman, www.trueorigin.org/dawkinfo.htm) but only qualitatively addressed by biologists.'30,31

Schneider's simulations only work because they were designed to do so, and are intelligently guided.32 This has been quantitatively addressed by William Dembski.32 Furthermore, the framework in part 3 will show that Gitt's higher levels can also be quantified.
Gitt's four most important Scientific Laws of Information, published in this journal,17,18 are:
SLI-1: A material entity cannot generate a non-material entity.
SLI-2: Universal information is a non-material fundamental entity.
SLI-3: Universal information cannot be created by statistical processes.
SLI-4: Universal information can only be produced by an intelligent sender.
Can we be satisfied that these are robustly formulated according to Definitions 1 and 2, above? For SLI-1 the question of
complete conversion of matter into energy should be addressed.
What about SLI-2 through SLI-4? I see no chance they would be falsified if we were to replace 'Universal information' by 'coded messages', which is integrated into UI. With a slight change in focus, introduced in part 3, I believe a stronger case
can be made.
SLI-2 to SLI-4 using Definition 1
For SLI-2 it is unclear what 'entity' means, since the definition says, 'Information is always present when …' and the grammar does not permit the thoughts to be linked. Since apobetics is not provided by the entity making use of DNA, this definition still needs work. Nevertheless, the definition includes the thought 'in a system' and this is a major move in the right direction (see part 3).

SLI-3 surely can't be falsified, since the definition requires the presence of apobetics, which seems incompatible with statistical processes. There seems to be a tautology here, since statistical processes describe outcomes with unknown precise causes, whereas apobetics is a deliberate intention.

SLI-4 makes a lot of sense, but only if one understands UI to refer to a multi-part system and not an undefined entity.
SLI-2 to SLI-4 using Definition 2
For SLI-2 it is unclear what 'entity' means; presumably the message. But it is questionable that the message must be responsible for conveying the expected action(s) and the intended purpose(s). Decision-making capabilities could exist a priori on the part of the receiver, who pulls a coded message from a sender, and then performs the appropriate actions and purposes. The actions and purposes need not be conveyed by the message. Cause and effect here can be reversed.
Example 1. The receiver wishes to know what time it is. A coded message is sent back. The receiver alone decides what to
do with the content of the message.
Example 2. A rich man compares prices of various cars, airplanes, and motorboats. The coded information sent back (prices) does not convey the expected action(s) nor the intended purpose(s). The man provides the additional input, not the message!
SLI-3 and SLI-4 make sense.
Value of Shannon's work is underrated
Much criticism is voiced in the creation science literature about Shannon's definition of information, which he preferred to call communication, dealing as it does with only the statistical characteristics of the message symbols. Given Shannon's goal of determining the maximum throughput possible over communication channels under various scenarios, it is true that the meaning and intention of the messages play no role in his work. My concern is that I suspect his critics may have overlooked some deeper implications which Shannon himself did not draw attention to. There are good reasons why all the researchers mentioned8 in part 1 use expressions like "according to information theory" or "the information content is …" when discussing their analysis of biological sequences like proteins. Implicit in these researchers' comments are notions of goals, purpose, and intent. These are notions associated with information in generic, layman's terms. "Information theory" inevitably refers to Shannon's work, even though the claims made about his work cannot be found directly in his own pioneering publications.
Are there reasons why the goal-directing effects of coded messages, like mRNA, remind us of Shannon's information theory? Is the wish to attain useful goals intentionally implied in Shannon's work? The answer is yes. Here are some examples:
Corruption of the intended message. The series of symbols (messages) transmitted can be corrupted en route. Shannon devotes considerable effort to analysing the effects of noise and how much of the original, intended message can be retrieved. But why should inanimate nature care what symbols were transmitted? Implicit is that there is a reason for transmitting specific messages.
Optimal use of a communication channel. If there are patterns in the strings of symbols to be communicated, then better codes can often be devised. Suppose an alphabet consists of four symbols, used to communicate sequences of DNA nucleotides (abbreviated A, C, G, or T) or perhaps to identify specific quadrants at some location. Statistical analysis can reveal the probabilities, p, of the symbols which need to be transmitted, e.g. A (p = 0.9), C (p = 0.05), G (p = 0.04), and T (p = 0.01). We decide to devise a binary code. We could assign a two-bit codeword (00, 01, 10, 11) to each symbol, so that on average a message requires two bits per symbol. However, more compact codes could be devised for this example. Let us invent one and assign the shorter codewords to the symbols which need to be transmitted more often: A = 0; C = 10; G = 111; T = 110. A message is easily decoded without needing any spacers. For example, 0010011000010 can only represent AACATAAAC. On average, messages using this coding convention will have a length of 1 × 0.9 + 2 × 0.05 + 3 × 0.04 + 3 × 0.01 = 1.15 bits/symbol, a considerable improvement. Implicit in this analysis is that it is desirable for some purpose to be able to transmit useful content and to minimize waste of the available bandwidth. It is also implied that an intelligent engineer will be able to implement the new code, an assumption which makes no sense in inanimate nature.33
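To make the compression arithmetic concrete, here is a minimal Python sketch (a hypothetical illustration using the probabilities and codewords of the example above); it encodes and decodes with the variable-length code and confirms the 1.15 bits/symbol average:

# Variable-length (prefix) code from the example above; assignments per the text.
code = {'A': '0', 'C': '10', 'G': '111', 'T': '110'}
probs = {'A': 0.90, 'C': 0.05, 'G': 0.04, 'T': 0.01}

def encode(message):
    # Concatenate codewords; no spacers are needed (prefix property).
    return ''.join(code[s] for s in message)

def decode(bits):
    # Read left to right, emitting a symbol as soon as a codeword matches.
    inverse = {v: k for k, v in code.items()}
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ''
    return ''.join(out)

assert encode('AACATAAAC') == '0010011000010'
assert decode('0010011000010') == 'AACATAAAC'

# Average codeword length: 1(0.9) + 2(0.05) + 3(0.04) + 3(0.01) = 1.15 bits/symbol
print(sum(probs[s] * len(code[s]) for s in code))  # 1.15

Because no codeword is the prefix of another, the decoder never needs spacers, just as the example claims.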
Calculation of joint entropy and conditional entropy. Various technological applications exist for mathematical relationships such as joint and conditional entropies. Calculating these requires knowing about the messages sent and those received. Nature has no way or reason to do this. By performing these calculations one senses that intelligent beings are analyzing something, and for a purpose.
Warren Weaver, Shannon's mentor and co-author of the book edition published in 1949, discerned that meaning and intentionality are implied in their work. In the portion he wrote we read, "But with any reasonably broad definition of conduct, it is clear that communication either affects conduct or is without any discernible and probable effect at all."34
And Gitt's work is foreshadowed in insights like:
"Relative to the broad subject of communication, there seem to be problems at three levels. Thus it seems reasonable to ask, serially:
LEVEL A. How accurately can the symbols of communication be transmitted? (The technical problem.)
LEVEL B. How precisely do the transmitted symbols convey the desired meaning? (The semantic problem.)
LEVEL C. How effectively does the received meaning affect conduct in the desired way? (The effectiveness problem.)"35
Concern about Shannon's initiative
Two reasons are often mentioned for claiming information theory has no relevance to common notions of information:
More entropy supposedly indicates more information. But how can this be, since a crystal with high regularity surely contains much order and little information? And the chaos-increasing effects of a hurricane surely destroy organization and information.
Longer messages imply more information. Really? Does the message "Today is Monday" provide less information than "Today is Monday and not Tuesday"? Or less than "Tdayy/$ *!aau!##$ is Modddndday"?
These two objections, commonly encountered, reflect a weak understanding of the topic and prevent much of the available value from being extracted.
For purposes of creation-vs-evolution discussions, a good suggestion is to profit from the mathematics Shannon drew attention to, but to avoid referring to "information theory" entirely. Shannon himself only used the phrase "theory of communication" later in his life. For most purposes we are interested in probability issues: how likely are naturalist scenarios, based on specific mechanisms?
Generally, we can limit ourselves to three simple equations, which are not unique contributions from Shannon.
The definition of entropy was already developed for the field of statistical thermodynamics:

H = −Σ p(i) log2 p(i)   (1)

where the sum runs over the symbols of the alphabet and p(i) is the probability of symbol i. H refers here to the entropy per symbol, such as the entropy of each of the four nucleotides on DNA.
The Shannon–McMillan–Breiman theorem is useful to calculate the number of high-probability sequences of length N symbols, having an average entropy H per symbol:

number of high-probability sequences ≈ 2^(NH)   (2)
An example is shown in table 1. When the distribution of all possible symbols, s, at each site is close to fully random, the number of messages calculated by 2^(NH) and by s^N are reasonably similar. The symbols could be amino acids, nucleotides, or codons. Eqn (2) is important for low-entropy sets; see table 1.

[Table 1. Example of eqn (2) to calculate the number of high-probability sequences based on the entropy, H. The first column shows the probability of one amino acid being found at a site, and in the second column we assume the remaining 19 amino acids are distributed equally. A protein with N = 200 amino acids (AA) is assumed here.]

The difference in entropy at each site along two sequences is of paramount interest:

H0 − Hf   (3)

where H0 is the entropy at the Source and Hf at the Destination. To analyze proteins, these two entropies refer to amino acid frequencies, calculated at each aligned site. The sequences used to calculate Hf perform the same biological function. Following a suggestion by Kirk Durston, let us call H0 − Hf the Functional Information36 at a site. The sum over all sites is the Functional Information of the sequence.
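As a numerical illustration of eqns (1) and (2), in the spirit of table 1 (the probability values below are illustrative assumptions, not the table's own entries), consider this short Python sketch:

import math

def entropy(probs):
    # Eqn (1): Shannon entropy per site, in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

N = 200  # protein length in amino acids, as assumed for table 1
for p_major in (0.05, 0.5, 0.9):
    # One amino acid found with probability p_major; the other 19 share the rest equally.
    probs = [p_major] + [(1 - p_major) / 19] * 19
    H = entropy(probs)
    # Eqn (2): about 2^(NH) high-probability sequences vs the full space s^N = 20^200.
    print(f"p = {p_major}: H = {H:.2f} bits/site, "
          f"2^(NH) ~ 10^{N * H * math.log10(2):.0f}, 20^N ~ 10^{N * math.log10(20):.0f}")

With p = 0.05 every amino acid is equiprobable and 2^(NH) equals 20^N; as one residue dominates a site, the set of high-probability sequences collapses to a tiny fraction of the total space, which is the point of eqn (2) for low-entropy sets.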
What does eqn (3) tell us? If the entropy of a Source is unchanged, the lower the entropy which is observed at a receiver, the higher the Functional Information involved (figure 1).

Figure 1. (Entropy of the Source) − (Entropy of the Receiver) defines Functional Information (FI) for a specific purpose. In A and B, HSource is the same, so FI for case A is greater than for B. Dark circles represent different messages, or strings of symbols.

On the other hand, if the entropy of a receiver is unchanged, the higher the entropy which is observed at a source, the higher the Functional Information involved (figure 2).
The ideas expressed in these three equations can be applied in various ways. Suppose the location at which arrows land on a target is to be communicated via coded messages (figure 3). A very general-purpose design would permit all locations in three dimensions over a great distance to be specified with great precision, applicable to target practice with guns, bows, or slingshots. The entropy of the Source would now be very great.

Another design would limit what could be communicated to small square areas on a specific target, with one outcome indicating the target was missed entirely. The demands on this Source would be much smaller, its entropy more limited, and the messages correspondingly simpler. A variant design would treat each circle on the target as functionally equivalent, restricting the range of potential outcomes which need to be communicated by the Source even more.

To prepare our thinking for biological applications, suppose the Source can communicate locations anywhere within 100 m to high precision, and that we know very little about the target. We wish to know how much skill is implied to attain a bullseye. Anywhere within this narrow range is considered equivalent. We are informed of every outcome and whether a bullseye occurred. We can use eqn (1) to calculate H0 for all locations communicated and the entropy of the bullseye, Hf. Eqn (3) is the measure of interest, and eqn (2) can be used to determine the proportion of desired-to-non-desired outcomes.
Figure 2. (Entropy of the Source) − (Entropy of the Receiver) defines Functional Information (FI) for a specific purpose. In A and B, HReceiver is the same, so FI for case A is greater than for B. Dark circles represent different messages, or strings of symbols.
Of much interest for creation research is the proportion of the bullseye region represented by functional proteins. This is calculated as follows. A series of sequences for the same kind of protein from different organisms are aligned, and the probability of finding each of the 20 possible amino acids, p(i), is calculated at each site. The entropy at each site is then calculated using eqn (1), the value of which is Hf in eqn (3). The average entropy of all amino acids being coded by DNA for all proteins is the H0 in eqn (3).
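A minimal Python sketch of this procedure (the five aligned sequences are invented for illustration, and H0 is approximated here by log2(20) ≈ 4.32 bits, close to the article's definition of the average entropy over all proteins):

import math

def site_entropy(column):
    # Eqn (1) applied to one aligned site: entropy of observed amino acid frequencies.
    n = len(column)
    counts = {aa: column.count(aa) for aa in set(column)}
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

alignment = ["MKVLA", "MKVLG", "MRVLA", "MKVIA", "MKVLA"]  # toy functional sequences
H0 = math.log2(20)  # approximate Source entropy: 20 equiprobable amino acids

FI = sum(H0 - site_entropy(site) for site in zip(*alignment))  # eqn (3), summed over sites
print(f"Functional Information: {FI:.2f} bits")

Highly conserved sites contribute nearly the full H0 per site; variable sites contribute less, so the sum measures how constrained the functional sequences are.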
To these three equations let us add three suggestions:
Always be clear whether entropy refers to the collection of messages being generated by the Source; the entropy of the messages received at the Destination; or the entropy of the objects which result from processing the messages received.
Take intentionality into account when interpreting entropies.
Work with bits no matter what code is used. A bit of data can communicate a choice between two possibilities; two bits, a choice from among four alternatives; and n bits, a choice from among 2^n possibilities. If messages are two bits long and each symbol (0 or 1) is equiprobable, it is impossible to reliably specify one out of eight possible outcomes.
The symbols used by a code are part of its alphabet. The content of messages based on non-binary codes can also be expressed in bits, and the messages could be transformed into a binary code. For example, DNA uses four symbols (A, C, G, T), so each symbol can specify up to 2 bits per position. Therefore, a message like ACCT represents 2 + 2 + 2 + 2 = 8 bits, so 2^8 = 256 different messages of length four could be created from the alphabet (A, C, G, T). This can be confirmed by noting that 4^4 = 256 different alternatives are possible.
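The bookkeeping can be checked in a few lines of Python (the binary assignment shown is one arbitrary choice, as the text notes):

import math

message = "ACCT"
bits_per_symbol = math.log2(4)             # DNA alphabet {A,C,G,T}: 2 bits per position
total_bits = len(message) * bits_per_symbol
print(total_bits)                          # 8.0
print(2 ** total_bits, 4 ** len(message))  # 256.0 256 -- the two counts agree

to_binary = {'A': '00', 'C': '01', 'G': '10', 'T': '11'}  # arbitrary convention
print(''.join(to_binary[s] for s in message))  # 00010111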
Figure 3. Coded messages are to communicate the location at which arrows land on a target. Various designs are possible, depending on the intended use. Precise locations could be communicated, or only the relevant circle, or only the location within the bullseye circle. If the outcomes are far from random, effective and highly compressed codes can be devised to shorten the average length of messages sent with no loss of precision.

We are now armed to clarify some confusion and to perform useful calculations. The analysis is offered as the on-line Appendix 1.37 Appendix 2 (also on-line)37 discusses whether mutations plus natural selection could increase information, using a Shannon-based definition of information.
Conclusion
People frequently discuss an immaterial entity called information. "Information theory" usually refers to Shannon's work. The many alternative meanings of the word lead to ambiguity and detract from the issue of its origin. What could be meant when one claims that many copies of the same information do not increase its quantity? It cannot refer to Shannon's theory. Information in this case could mean things like the explanatory details or know-how to perform a task, usable by an intelligent being. Shannon's model, however, claims that two channels transmitting the same messages convey twice as much information as only one would.

What about a question like, "Where does the information come from in a cell or to run an automated process?" Here information could mean the coded instructions or know-how which guide equipment and lead to useful results.

The discussion in parts 1 and 2 of this series is not meant to favour nor criticize how others have chosen to interpret the word information. Many valuable insights can be gleaned from this literature. For purposes of gaining a broader view of all the components involved in directing processes to a desired outcome, I felt the need to move in another direction, which will be explained in parts 3 and 4.

The evolutionary community is uncomfortable with the topic of information, but the issue is easier to ignore when there is disagreement on very basic issues, such as whether it can be quantified and whether higher life-forms contain more information or not. Covering so many notions with the same word is problematic, and in part 3 a solution will be proposed.
Information Theory – part 3: introduction to Coded Information Systems
by Royal Truman
The literature about information is confusing because so many properties are described for supposedly a singular entity. The discussion can be more fruitful once we realize we are studying systems with many components, one of which is a coded message. We introduce the notion of a Coded Information System (CIS) and can now pose an unambiguous question, "Where do CISs come from?", which should be more precise than the vague alternative, "Where does information come from?" We can develop a model which is quantifiable by focusing on the effects a CIS has on organizing matter through a sequential set of refining steps.
In part 1 of this series1 I demonstrated that there are many usages of the word information, with many specialists working on different notions. Dretske points out that "It is much easier to talk about information than it is to say what it is you are talking about. … It has come to be an all-purpose word, one with the suggestive power to fulfil a variety of descriptive tasks."2 In part 2 of this series3 I drew attention to issues in various information-theoretic models which seem problematic. There seems to be a common intuition that information leads to a desired outcome. But is information only vaguely (if at all) involved in attaining the intended goal (as implied by Shannon's theory4), or fully, as Gitt maintains?5

Coded messages play a prominent role in Gitt's framework,6,7 and are clearly indispensable for the first three levels of his model (statistics, cosyntics, and semantics), but it is not apparent how symbolic messages appear directly in the last two levels (pragmatics and apobetics). And what exactly is a coded message? The gun fired to start a race consists of only one symbol. Statistics and cosyntics are missing, but meaning (semantics) is present. Was a message sent? Is this information?

Schneider claims8 to show with a computer program that information can arise for free, autonomously, but Dembski argues decisively9 that the necessary resources were intelligently embedded into the program in different ways, and shows that information is provided whenever a suitable search algorithm is selected from among other possible ones.10–15 Can these ideas be reconciled to permit a coherent discussion?

Surely information is more than mere cause-and-effect mechanics. Sometimes information is claimed to cause something via mechanical means. For example, the direction and force generated by a billiard cue have been said to provide the information to guide the ball. But all natural causes lead to some effect! So when is information involved?

Now, machines are also used by living beings to achieve a goal. Some, like computers, work with coded messages. What about a watermill which grinds grain into meal? The water provides the energy needed for the machine to work. One could adjust the amount of force delivered upon the rotating wheel by changing the amount of water provided, and the drop height. But there is no coded message in this kind of machine.

Although the disagreements about how to define information are rampant, virtually no one would argue the subject matter is vacuous, a meaningless debate of empty words. Pioneering thinker Norbert Wiener stated correctly that "Information is information, neither matter nor energy", but this left unanswered what it is. And even experts vacillate between different meanings of the word, so readers might not know exactly what is implied in each case. The confusion arises from a multitude of (sometimes only weakly related) ideas applied to a single word, information.

To illustrate, Gitt assigns both statistics and apobetics to information in living systems, but how and where? On DNA? Statistics can indeed be discerned from gene sequences, but surely not the intended purpose (apobetics). The goal does not reside on DNA, neither fully nor implied. As we'll see later, DNA is only one of multiple contributing factors to produce an intended outcome.

As a second example, one of Gitt's Universal Laws of Information is: "SLI-4c. Every information transmission chain can be traced back to an intelligent sender."16 In the same paper he also writes, "Remark R3: The storage and transmission of information requires a material medium." It seems that "transmission chain" must be referring to coded messages, a stream of symbols on a physical medium. However, why must the other mandatory elements of his information or Universal Information (semantics, pragmatics, and apobetics) reside on, and be transmitted by, a material medium? Must an intelligent mind and all its parts be 100% material? I am not claiming there is a contradiction in what Gitt writes. In fact, I edited and endorsed his last book.5 Careful consideration of his work reveals that information is somehow distributed in separate, organized ensembles of matter, energy, and mind (e.g. the statistics vs the pragmatics portion) with different properties and functions. This makes an answer to "What is information?" almost impossible. And it leads to a struggle to find words with compound meanings to convey the multiplicity of functions assigned to information. His second law states, "SLI-2: Universal information is a non-material fundamental entity." "Entity", in this statement, merely replaces "Universal Information", and one does not know what it might mean. It reflects the search for a missing, suitable explanatory construct and therefore provides no additional insight beyond the phrase "Universal information is non-material". But I believe the simple proposal introduced below will retain almost all his views in a coherent manner.

When someone asks, "Where does the information come from which causes a fertilized egg to become an adult?", it seems that a series of linked, guided processes is implied. Processes is plural, whereas a singular word, information, does not capture this intuition very well.

What needs to be explained?
Analysing the world around us, we note a family of phenomena which are not explained by deterministic law or random
behaviour. Examples include:
Birds migrate to specific locations during certain time periods.
Thousands of proteins are formed each minute in a cell and their concentrations and locations are carefully regulated.
A few bacteria can reproduce into a large colony, metabolizing nutrients to survive.
A foetus develops into an adult.
Caterpillars metamorphose into butterflies.
Assembly lines produce hundreds of cars each day.
A few years after lava devastates a landscape, a new ecology develops.
Text on a computer screen can be transferred to a printed sheet of paper.
Deaf people communicate with a sign language.
Satellites are sent to a planet and back.
Figure 1. Complex equipment sends symbols to a Receiver able to receive and process the coded message. The shapes between Message Sender and Message Receiver represent symbols of a coded message.

The above outcomes occur repeatedly, and what we observe does not follow naturalistic (mechanical) principles. Some observations become readily apparent.
Observation 1. A series of linked processes are involved.
Observation 2. Members of these processes sequentially refine and contribute towards a goal.
Observation 3. A coded message is used somewhere along the chain of processes. Complex equipment generates a series
of symbols, usually embedded on a physical medium, 17 which another piece of complex equipment receives, resulting in a
measurable change in behaviour of a system attached to the Message Receiver (figure 1).
Observation 4. All these kinds of systems are associated with living organisms.
Making a fresh start
To uniquely specify our area of interest, we exclude all systems and machines which do not use a coded message somewhere in the process. We are left with phenomena which have something to do with information, and we wonder where such systems come from. But asking, "Where does information come from?" is too vague for our scientific enterprise.

Clearly we are observing systems, with many independent, but linked, components. We need a definition for these message-based systems and then we need to consider how they could arise. Based strictly on observation, we make the following definition:

A Coded Information System (CIS) consists of linked tools or machines which refine outcomes to attain a specific goal. A coded message plays a prominent role between at least two members of this linked series.

CIS theory recognizes Gitt's five sequential processes: statistics, cosyntics, semantics, pragmatics, and apobetics.5
Messages vs sensors in CIS theory
Coded messages are formed by ordered codewords,18 which themselves consist of symbols from a coding alphabet. Messages must conform to the grammatical rules devised for that coding system.

Cues or sensors are often found in a CIS but should not be considered coded messages. For example, a sensor could be composed of two metal parts, the volumes of which respond differently to temperature. When the temperature increases, selective expansion of one of the metals causes the construct to bend, bringing the tip of the sensor into contact with a critical element to trigger an action (such as by permitting a current to flow).

Taste and smell receptors are specific to particular chemical structures, and are also sensors. If interaction at a detector is a simple physical effect, and a signal is transmitted without an alphabet of symbols which are independent of the carrier, then we have a sensor and not a coded message.19 However, sensors are often valuable components of a CIS, and signals received by sensors could be converted into coded messages, as will be shown later.

As one example, barn owls use two methods to localize sounds: the time differential between the arrival of a sound at each ear (the interaural time) and the variance in the sound's intensity as it arrives at each ear.20 Are these cues, interacting directly with the external physical factors, coded messages? Not at the point of external contact, which is based on strict physical relationships, with no alphabet nor grammar.

As another example, a photoreceptor on a retina absorbs a photon, causing 11-cis-retinal to isomerize to 11-trans-retinal, which is followed by a signal cascade. This initially strictly physical behaviour is characteristic of sensors. The location at which the photon lands on the retina determines in most cases where the signal will be transferred to in the primary visual cortex of the occipital lobe.21 The cue is transmitted over a neural pathway, and eventually coded messages are involved to communicate with the occipital lobe. Why do we make this claim?

There are approximately 260 million photoreceptors on the human retina. The initial signal gets transmitted a short distance, but these signals are subsequently distributed among only 2 million ganglion cells. "This compression of information suggests that higher-level visual centres should be efficient processors to recover the details of the visual world."22 The signals originating from the retina are processed by specialized neurons which perform distributed processing to determine object attributes such as colour, location, and movement. Low-level algorithms are available, able to identify edges and corners.23,24 Somehow these parts need to be combined into a coherent whole, taking context into account. The underlying language is not yet known, but rules are beginning to be identified, such as the use of AND operators.25

Coded messages could also precede and activate a specific sensor. And sometimes activation of a sensor can be supplemented with other contextual inputs which are subsequently coded into a message. To illustrate, a biochemical can dock onto a receptor (a sensor!) on a cell's outer membrane, leading to a complex cascade of internal processes, culminating in regulation of several genes. The resulting process is part of a cellular language, the details of which are not fully elucidated.26 In this case, the signal from a sensor contributes input to a coded message.

The following example shows how sensors could be integrated into a CIS. Suppose four departments {A,B,C,D} at a university participate in races which occur hourly. During even-numbered hours men race; on odd-numbered hours the women do.

A scoreboard is divided into eight portions, representing the four departments and the gender. Each time a sensor on one of the eight squares is activated, the value displayed increases by one. This could be implemented in a mechanical, strictly cause-effect manner. The sensors are identical and so are the cues received. So far there is no alphabet of symbols or syntax. Therefore, a coded message was not received at the scoreboard, although something useful did result. Nevertheless, we'll show that a coded message could precede or follow the work of the sensor-based equipment.
Figure 2. Coded messages can precede or follow the use of sensors. The winners from four departments {A,B,C,D} could be communicated by an initial message, e.g. B C C B D A. The Decoder determines from the time (even or odd hours) whether each symbol received represents a man's or woman's race. Both facts permit activating one of eight boxes on a scoreboard. The eight sensors can transmit a signal elsewhere, and at the end of the transmission a new codeword unique to each sensor is generated. The new code, e.g. 0111011001, can then be transmitted or stored.

Let us assume the winning department for each hour is communicated by a judge, using a single symbol from the quaternary alphabet {A,B,C,D} which is transmitted towards the scoreboard (figure 2).27 The Decoder is also endowed with an internal clock, thereby permitting the winner's gender to be identified. Now four departments × two genders, or eight outcomes, can be communicated to one of the eight portions of the scoreboard,28 although each symbol alone can only provide two bits of data. The winner can be communicated by transmitting an electric signal through the relevant wire on to the correct one out of eight sensors on the scoreboard (figure 2).29

Suppose the winning department and gender are to be communicated to another location afterward. The back end of each of the eight boxes in the scoreboard first transmits a signal (not a message) along a cable. A coded message, unique to each original sensor, is then produced by encoders (the circles preceding the triangle in figure 2) using a new binary code, with codewords such as (0010), (1100), or (0111), unique to each wire. The new coded message identifies the same facts as the original quaternary one {A,B,C,D}, supplemented by the winner's gender, and this message can now be transmitted far away or stored somewhere for future retrieval.

Note how the specific assignment of A, B, C, or D to either of two out of eight boxes was arbitrary, and so was the assignment of specific binary codewords to each sensor. The codes are independent of the physical infrastructure, as must always be true of informative codes.

I believe this simple example illustrates a general principle in cellular systems. Methylation at specific locations on DNA, or phosphorylation of portions of proteins, are simple signals which get supplemented with other details and converted into coded messages.
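A small Python sketch of the Decoder just described (the parity rule and race sequence follow the example above; the function and variable names are my own):

# One quaternary symbol (2 bits) plus the Decoder's internal clock together
# select one of 4 x 2 = 8 scoreboard boxes (3 bits of outcome).
def decode_winner(symbol, hour):
    gender = 'men' if hour % 2 == 0 else 'women'  # even-numbered hours: men race
    return (symbol, gender)

scoreboard = {(dept, g): 0 for dept in 'ABCD' for g in ('men', 'women')}

# Hourly winners B C C B D A, as in figure 2, starting at an even hour.
for hour, symbol in enumerate('BCCBDA'):
    scoreboard[decode_winner(symbol, hour)] += 1

print(scoreboard[('B', 'men')], scoreboard[('C', 'women')])  # 1 1

The clock is a refining component outside the message itself, which is why eight outcomes can be distinguished although each transmitted symbol carries only two bits.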
Senders and receivers in CIS theory
The CIS model focuses on empirical measurements. A series of refining processes are observed, at least one of which results from receiving coded messages. Observations 1–3 above are illustrated in figure 3. Notice that after processing the message, additional refinements can occur, represented by the ever-smaller contours in figure 3.

Figure 3. Coded Information Systems sequentially refine behaviour through a series of processes. At least one process is guided by coded instructions. Each goal-directing refinement step could be influenced through coded messages, sensors, physical hardware, or pre-existing resources such as data or logic-processing algorithms.
The emphasis of the CIS approach is on observing the modified range of behaviour of the target system, unlike Shannon's theory, which analyzes the statistical features of messages. The effects caused by other sequential refinement components, which can precede or follow receipt of the message, are also evaluated based on resulting consequences. This will be elaborated on in part 4 of this series.

Apropos quantifying information, Shannon's model is unsuitable to evaluate prescriptive instructions. Suppose a robot is to extract trees from a forest. An algorithmic message is sent, indicating step by step how to find the largest tree within 50 metres of the Receiver. Statistical analysis of the series of 0s and 1s transmitted would be of little value, but the approach of CIS is to measure the resulting outcome empirically. It is the contribution to producing the correct outcome, when compared to the (theoretical) reference state, that defines improvement, measured in bits.

The intention thus far is to introduce a more nuanced manner to discuss and measure information. The range of behaviour, weighted by observed probability, is compared for the initial and a refined state, for each contour in figure 3. There can be many ways these improvements can be engineered, using software and hardware. Intelligent intervention, what some call "smuggling information" into a system, can now easily be taken into account. For example, any artificial guidance to select a genetic or other algorithm to attain a specific outcome is an input which improves over the preceding, unguided state.

The precise, regulated designs used in biology and technology can be understood and quantified with this simple CIS approach. The details themselves, like gene expression or metabolic regulation,30 are often exquisitely sophisticated, but are in a sense only details which can be understood by drilling down from the high-level concepts of the CIS model. We will defer a description of the many designs found in nature, the purpose of which is to ensure the right outcomes in a CIS.31

CISs are created to organize matter and energy in a precise manner at the correct time and location, a very dynamical challenge which requires sophisticated components. These integrated systems can typically be reused many times. The variety of unsuitable parts, which includes incorrect coded messages, greatly outweighs the functionally acceptable ones.

The motivation behind this analysis is to force researchers to consider everything involved to permit a message-processing system, such as cells, to work. One of Truman's harshest critiques32 of the Avida setup and claims is that virtually everything necessary for the simulation to work, such as physical replication of the electronic organisms, the energy source, physical transfer of data to the appropriate logic-processing locations, and so on, were machines already made available. They made decisive contributions to ensure the desired outcomes. In nature all these components are coded for on DNA, and therefore subject to the ravages of random mutations. In Avida, mutations cannot destroy nor disrupt most of the fundamental system components. Virtually everything relevant to information was overlooked in the discussions. Forcing the participants to discuss the complete CIS should have prevented such foolishness.

One final notion in the CIS model is to distinguish between two kinds of receivers: mechanical receivers, which respond deterministically to the message's instructions; and autonomously intelligent receivers, who first evaluate and decide how to respond. Between these extremes lie a range of intermediate possibilities, including artificial intelligence programs designed to incorporate various forms of reasoning, and systems able to query for additional relevant details from environmental sources.

Part 4 will introduce the fundamental theorems associated with the CIS model, and show that this framework incorporates the insights from Shannon's theory, Gitt's model, Dembski's contributions, and other schemes. But consistent use of the CIS notions does lead to some conclusions different from those proposed by other frameworks.
Conclusion
The literature attempting to describe information is very broad. It is generally accepted to be non-material, and many
attributes are assigned to it. But it seems that people are generally referring to a system which contains physical
components, and not to a single entity. Analyzing components of a coded information system, such as coded messages,
signals, and physical hardware separately, solves several conceptual difficulties. And as will be further elaborated on in part
4, the effects produced by a CIS as a whole offer a means to quantify what is accomplished by portions, or the complete
CIS.
Information Theory – part 4: fundamental theorems of Coded Information Systems Theory
by Royal Truman
In parts 1 and 2 of this series the work of various information theoreticians was outlined, and reasons were identified for needing to ask the same questions in a different manner. In part 3 we saw that information often refers to many valid ideas, but that the statements reflect that we are not thinking of a single entity, but of a system of discrete parts which produce an intended outcome by using different kinds of resources. We introduced in part 3 the model for a new approach, i.e. that we are dealing with Coded Information Systems (CIS). Here in part 4 the fundamental theorems for CIS Theory are presented, and we show that novel conclusions are reached.
In Part 3 of this series1 we emphasized that the word information, although
singular, often refers to separate entities. This led to the notion that we are
often describing a system, parts of which involve coded messages.

Definition. A Coded Information System (CIS) consists of linked tools and machines designed to refine outcomes to attain a
specific goal. A coded message plays a prominent role between at least two members of this linked series.
Theorems to explain CISs
A series of theorems are presented next, to clarify what a Coded Information System (CIS) is. These are based on
observation and analysis of all coded information systems known to us.
Theorem 1. A CIS is used to organize matter and energy to satisfy an intended goal. All components of the system which
guide towards the final outcome, including timing and location, are part of a CIS.
The resulting organization of portions of the material world reflects intended goals. All the components involved in a series of refinements to attain the final state are part of the CIS, and their effect must be quantitatively measurable, at least in principle.2
Theorem 2. A CIS can be used to achieve a mental goal.
In the absence of wilful input, the organization of matter can be explained by deterministic laws of nature and statistical principles of randomness. Mental processes, however, are not controlled deterministically or by randomness, and include: making choices; seeking to understand; and developing a strategy.

Suppose you are learning German, and are reflecting on what Unsinn might mean. The intention to translate is surely not deterministic, nor explained by randomness. Perhaps the intention is stored temporarily (physically?) in the brain, with which an immaterial you interacts almost instantly. You know what you wish to do and can easily communicate this to others. This intention is converted somehow into a physical search through the data stored in your neurons. This requires very special mental equipment, since a multitude of kinds of searches are possible: for a discrete telephone number; for how a face looks; for a melody. The list is nearly endless. In this case we're searching for a concept which we believe reflects the meaning of Unsinn.

The concept we seek to translate must be encoded in some manner, and the searches directed efficiently. Suitable data must somehow be extracted from the neurons, requiring further mental machinery, and the results must be encoded and transferred somewhere for the mind to evaluate. Another tool then compares a candidate English word, the associations of which get compared with those of Unsinn. It is absurd to argue that neurotransmitter concentrations or electrical signals are being compared across billions of neurons. The logical processing must involve some kind of compression and high-performance language. Eventually the mind decides whether a potential translation of Unsinn, like nonsense, is correct or not.

All the resources involved in mental processes like these are part of a CIS, and some are not physical. Various resources narrow the range of possibilities, including when the translation is to occur and where.
Theorem 3. Coded messages do not arise from the properties of the physical carrier medium.
An implication which results is that the symbols used by the coding alphabet can appear in any order and combination, whether the resulting messages serve a purpose or not. Ideally the carrier must not place any constraints on the potential messages which could be created.3

By our definition, a CIS must use a coded message at some point. Otherwise we'll treat the phenomenon as a tool. Some coded messages provide step-by-step instructions on how to accomplish something. Examples include computer programs and algorithms. These messages must be supplemented with hardware able to carry out the instructions.

Another class of coded messages only specifies outcomes or choices, without any instructions on how to attain them. A communication convention must be established a priori. The message 01101 might mean "bring me menu number thirty-one", or "pitch a curve ball". Combinations of these two extremes are possible, such as when a computer program invokes a subroutine (or method or function) using parameter values.

Coded messages permit intended outcomes to be communicated between flexible tools and machines which have been designed to solve a class of problems. This is an efficient manner to use resources to solve problems. The alternative would be to build assembly lines of machines to solve each individual problem and then to communicate which ensemble is required for each problem. Instead, to illustrate, billions of dollars of complex logistics components of an overnight delivery service can be put to use flexibly by only associating a coded delivery address with the object to be transported.

The use of coded messages characterizes living organisms: to control their development and response to novel situations, and to interact with each other. Humans devise coding conventions with so little effort that few realize what an extraordinary feature this is. Being so fundamental to a wide class of life-related observations, the presence of a coded message is a requirement for a system to be considered a CIS.
Theorem 4. The coded message does not provide the energy which causes the intended changes.

The outcomes produced by messages must not be caused only by the carrier medium, to distinguish them from mere mechanical effects. The symbols in a coding alphabet could indeed require different amounts of energy to be generated or processed. For example, in alphabet {0, 1} the 0 could be communicated by lifting one arm, and the 1 by lifting both arms. But the energy used to produce the symbols must not lead to the resulting changes upon processing the message (e.g. by providing different levels of force, or resulting momentum in a specific direction, caused directly by the symbol). Theorem 4 draws attention to the need for independent components to be engineered for a CIS to work. Energy in the right form, time, and place must work with the intent expressed by the message.
Theorem 5. Outcomes improved beyond what coded messages alone convey imply additional refining components are involved. The additional contributions can be expressed quantitatively.

Figure 1. A jet interceptor is instructed to fly off in one of four possible directions. This provides log2(4) = 2 bits of information.

An example in part 3 of this series1 revealed this principle. The coded message communicated only four possible choices (two bits of information), but an internal clock revealed in addition whether a race had been carried out during odd or even hours, thereby indicating the gender of the winner. Therefore, the correct one out of eight choices was able to be determined with the help of the clock.

Example 1. Assume the jets on an aircraft carrier are only told whether to fly off in one of four quadrants (figure 1). Suppose careful observation shows that the pilots begin search manoeuvres only once beyond a certain distance from the ship (therefore, the central square in figure 1 was excluded). Although log2(4) = 2 bits of information can only communicate the correct quadrant, we observe that the target is usually identified, although located within a small portion of a quadrant. How is this possible? Clearly additional refining components were available. Repeated observation would allow the scientist to identify at least three sequentially refining components (without knowing anything a priori about the details of the coded message): a) a coded message directs into one of four directions; b) searching begins some distance from the carrier (there is prior knowledge that an alarm will occur when the enemy is still far away); c) there are specific kinds of targets to search for, plus logic and special equipment to perform the searches.

Example 2. Protein-coding portions of DNA are the messages which communicate the order in which each of twenty possible amino acids is linked. But notice that almost always only l-form amino acids appear in the proteins. The choice of isomer was not communicated by the mRNA messages; rather, optically pure amino acids were independently manufactured as feedstock. We also observe that undesired chemical reactions which amino acids normally undergo are prevented when forming the proteins. For example, the side-chains of amino acids don't react together, nor do short five- to seven-membered rings form.4 This is true because outcome-guiding equipment was deliberately included in the design.
Figure 2. Trade-offs between message complexity and engineering design. A) The message communicates only the final destination. Resources receiving the message must interpret it and have the means to act upon the communicated intention. B) The message communicates step-wise what is to be done. U = Up; L = Left one unit. Now the messages are more complex but the equipment can be simplified.

Other illustrations (examples A1–A6) are offered online. Figure 2 is explained in example A4 online.
Theorem 6. Receipt of a message often communicates more than just the coded content.

The bits of information provided in a message give an incomplete picture from the point of view of resulting outcomes. Although choices between alternatives can be communicated, the changes which result occur in narrow time and location ranges. The equipment receiving the message could be used more than once, and the correct Receiver could be targeted, at the correct time and location.

Example 3. A zip code can communicate where to deliver a package, but when it is generated, and on which object, make a difference! The intended goal must be taken into account.
Theorem 7. Refinement components are integrated into a sequence to produce the intended outcome.
Refinement components, designed to refine towards a goal, include:
received coded messages
engineered constraints or guidance
external cues
preloaded algorithms or reasoning resources.
The state of affairs achieved from one component of the CIS becomes the starting point for additional improvement by other
components.
Theorem 8. The quantitative contribution of Refining Components can be calculated by comparing ranges of behaviour
before and after the goal-directing activity.
A fundamental notion in CIS Theory is to identify the contribution provided by each discrete component in the processing chain, by identifying the range of behaviour before the refinement and afterward. The theory applies to any kind of behaviour in time and space. And the improvement can be due to receipt of a coded message and/or to other factors.

The range of possible outcomes will be represented by a discrete number n, the entropy H, or a probability distribution function. We will use some mathematical ideas developed by Shannon, as discussed in part 3,1 to define the Refinement Improvement5 as Hbefore − Hafter, which is measured in bits. This permits the improvement by each member of the chain, expressed in bits, to be additive.
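A minimal Python sketch of this additivity (the outcome counts are invented for illustration; each refining component narrows a uniformly distributed set of possible outcomes):

import math

def refinement_improvement(n_before, n_after):
    # H_before - H_after in bits, for uniform outcome sets of the given sizes.
    return math.log2(n_before) - math.log2(n_after)

stages = [64, 16, 4, 1]  # hypothetical chain: three components narrow 64 outcomes to 1
steps = [refinement_improvement(a, b) for a, b in zip(stages, stages[1:])]
print(steps)                          # [2.0, 2.0, 2.0]
print(sum(steps))                     # 6.0
print(refinement_improvement(64, 1))  # 6.0 -- per-component bits sum to the total

Because entropies are logarithms of the outcome counts, the bits contributed by each component add up to the improvement of the whole chain.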
Theorem 9. Quantifying the contribution from a received message may require analysis of a single final state or of intermediate ones along the way. It is necessary to evaluate what the intention is.

Figure 3. Convoluted messages could reveal incompetence or deliberate intention. Upon processing the message, the vehicle here ends at the same place as shown in figure 2, although the message and trajectory are now more complex. If the intention was only to deliver to a specific location, then the message "select one out of 64 possibilities", or 6 bits of information, would suffice. But if the trajectory had a deliberate purpose, the outcome would have to be compared to the relevant reference alternatives, taking each subgoal into account.

Example 4. In the CIS methodology we focus on behaviour which results from refining factors, and not the statistical details of coded messages (which is Shannon's methodology). Unlike Example A4 online, suppose the intention of a message was to provide an itinerary (figure 3). Comparing one final destination with all possible outcomes (1/64) would be wrong if the intention was a milk run,6 like a parcel service delivering packages, or a path to traverse a minefield, or to avoid incoming missiles. Then the result of each successful decision would need to be compared to the relevant reference state and not the one-time final destination.

If the intention was a one-time delivery of a package to a final destination, then the space of random possibilities around the starting point would be the message-less state and would define the possible outcomes. Alternatively, if the intention was to avoid incoming missiles again and again, the random behaviour around each decision point would define the reference state. Note that in these kinds of analysis another resource is at play. In the random reference state, the vehicle has no reason to move at all. Independent of the message's content, its receipt communicates that something is to be done.7

Often a message communicates more than necessary. This could be due to incompetence; or to refine or to correct instructions already sent. It is also possible that other resources (logic, stored data) available to the Receiver indicate that messages, or parts of them, are to be discarded. For example, the message to print a page with various colours could be corrected by software which is aware that a colour cartridge is missing and only black-and-white output can be generated.
Theorem 10. Part of a CIS may permit behaviour to occur which otherwise wouldn't be observed. To quantify the improvement provided by one of the CIS resources, a realistic hypothetical reference system behaviour needs to be defined.

Example 5. DNA encodes the order in which amino acids link to form proteins. For this to occur, the carboxyl group at the end of one amino acid must react with the amino group at the other end of another amino acid, to form peptide bonds. But amino acids in free nature or a laboratory undergo a variety of other chemical reactions. As an example, amino and carboxyl groups that are present on side-chains can also react. In addition, the carboxyl and amino ends of a growing polypeptide will react in an intra-molecular fashion, creating cyclic rings; and other reactions also occur. In cells, clever design prevents the wrong reactions from occurring. Computer simulations could be built to estimate the proportion of protein-like chains which amino acids would form compared to all possible reactions, based on d and l racemic mixtures.8

Example 6. In water, peptides hydrolyze instead of forming long chains. For even a very short protein with 100 peptide bonds (101 amino acids), the equilibrium concentration would be about 3 × 10^-216.9 So how can proteins form at all in cells? It is because water is excluded from the interior of the ribosomes, and energy is provided by ATP to drive the polymerization reaction forward.

These examples show it is impractical (and unnecessary) to always perform empirical studies on how nature would react in the absence of a CIS. But a reasonable estimate is still useful, and the probabilities can easily be converted into bits of information.10 It is usually sufficient to determine when a probability is so minuscule that nobody will ever see the event unless something, like intelligence, provides a new pathway for it to occur.
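For instance, converting Example 6's equilibrium concentration into bits (a short sketch; -log2 p expresses how improbable the unaided event is):

import math

p = 3e-216                    # equilibrium concentration from Example 6
print(f"{-math.log2(p):.0f} bits")  # about 716 bits to single out this outcome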
Theorem 11. Individual Refinement Components can contain multiple improvement steps.
We saw in Examples 5 and 6 that specialized machinery like ribosomes can make multiple contributions. One can combine
contributions to simplify the analysis if one wishes.
Theorem 12. Accumulated goal refinements, defined by CIS in bits, reveal far more about what is accomplished than
Shannon Information Theory implies.
Molecular machines are built to solve thousands of kinds of problems, such as catalyzing metabolic reactions, transporting biochemicals, and replicating chromosomes. The quantitative CIS theory seeks to explain how so much more is accomplished than is implied by the statistical studies of coded messages, such as of gene sequences.
Example 7. In the brain there are special kinds of cells called neurons, organized into specialized signal-processing subsystems.11 These come in many sizes and shapes with very different designs and functions. There are about 10^11 neurons in the human brain and about 10^14 synapses,12 which must be placed at the right locations, and interconnected correctly. As an example, the Purkinje cells of the cerebellar cortex have about 200,000 synaptic contacts each.13 The first question is, where do the instructions come from to physically build such complex brains? And second, where does the input come from which permits brains to make thousands of multimedia decisions each second?

These requirements cannot be explained by the bits of Shannon information implied on the chromosomes of the fertilized egg. The coded information is embedded in a context which provides additional refinements, and the neurons are refined by their ability to learn.
Theorem 13. There can be trade-offs in how a CIS can be designed. The contributions towards the goal, expressed in bits,
can be distributed between the message and the hardware equipment.
Examples A4 and A5 online illustrate this principle.
Example 8. Suppose 20 copies of five books are to be printed out. One solution would be for the message to transmit the relevant text each time for every book, which flexible printing equipment must then process. Another solution would be to build five machines, each of which mechanically prints out a single book. Now one only needs to send a signal to the appropriate machine, 20 times, communicating to start printing. The final outcome is the same, but the effort, expressed in bits according to the resulting outcome, is distributed over different refining components.
Example 9. Printers can often handle paper of different standard formats. The content to be printed, plus instructions on how to manipulate all the physical parts to position the paper and ink correctly, could be part of one huge coded message. A better design would be to engineer the printer to always position the paper for each standard size in the same manner, so that the message only needs to communicate the content and the paper size.

Example 10. Many business presentations benefit from the use of colour. The background colour and display settings for PowerPoint presentations are communicated along with the content to be presented. This is a better design than sending content to a large number of differently designed printers, each filled with paper prepared with a specific kind of coloured background.
Theorem 14. The hardware components found in an integrated CIS do not arise from the properties of the physical carrier
medium.
For example, many kinds of media can be used to store the same computer data. These materials could be made into memory sticks, DVDs, hard disks, archival systems, etc. The origin of these engineered parts, as also of biological parts, is not a simple extrapolation of atomic properties. They have to be wilfully organized.
Theorem 15. The contribution towards a goal provided by a particular refinement component cannot be more than the
improvement observed, expressed in bits.
This is related to Theorem 5.
This simply means that guiding towards a goal cannot come for free. If there are eight equally likely outcomes, communicating the correct one each time cannot be done with fewer than three bits of coded information.14 These must come from somewhere. This theorem is intuitively obvious but woefully neglected in the evolutionary literature. Bartlett15 recognized correctly that rapid change can, and does, occur in nature, if the guiding inputs have already been made available and only need to be activated. The notion of preloading information to ensure future outcomes is also common among Intelligent Design thinkers.
Scientists realize intuitively that purposeful behaviour implies that guidance is coming from somewhere. The fact that the
same kinds of proteins always ended up in the same place in cells led researchers to look for special signals guiding this
process. And the rapid response of whole populations in short time periods to environmental changes led to the search for,
and discovery of, epigenetics.16 What is overlooked is the fundamental insight that planning and ensuring desired outcomes
are characteristics of intelligent agency, and that the methods used to store intent are not found anywhere in inanimate
nature.
Theorem 16. There is no direct relationship between goal refinement in bits and importance of the outcome.
Thumbs up or down decided the life or death of a Roman gladiator: one mere bit of information, two possible outcomes, but with a dramatic impact!
Theorem 17. Bits in CIS theory are not a direct indicator of difficulty in achieving the goal.
It is true that there is an inverse relationship between the number of bits in an outcome and the likelihood the effect could arise by chance. This is especially clear in CIS theory, where outcomes are compared to what would happen by natural processes. But one must recall (Theorem 13) that there are trade-offs between what the message and the hardware could provide. A simple message to an aircraft carrier flotilla to turn left or turn right represents only one bit of information, because the rest of the necessary details are handled by other parts of the CIS. These one-bit coded messages have a huge lever effect. If the designs of two CISs have identical final outcomes, then a comparable number of bits should be calculated. But focusing on the number of bits provided by intermediate CIS services can be misleading.
Theorem 18. Wilful, intelligent decision-making occurs during the processing of a CIS; or decision-making has been preloaded for it to occur autonomously.
In some CISs, parts can respond mechanically, whereas in other CIS designs, intelligent decision-making is involved. In the mechanical version, intelligence is used to ensure intended outcomes. Complex algorithms can be devised to free active intelligence from having to be present during future execution of a CIS. Examples are techniques used in artificial intelligence. In addition, sensors and queries to the environment can be automated to ensure reliability of the automated portions of a CIS.
Theorem 19. The performance of a CIS will not improve over time in the absence of intelligently provided guidance.
Refinement in the outcome, or adjustment to new circumstances, requires preloaded facilities in some part of the CIS. One
must not overlook, however, that improvement is possible through an algorithmic, iterative process of selection. This occurs
in B-cell maturation,17 and there are many examples in numerical analysis, like the Runge-Kutta methods.18 Natural selection
could conceivably be an example, if a small number of organisms, like bacteria, were initially created with the intent of
diversifying and specializing. But for this strategy to work, outcomes must be fed back into the causal instructions, and an
effective method to move in a promising new direction must already be built in.
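The role of preloaded guidance in an iterative, selective process can be illustrated with a toy Python sketch (entirely our illustration; the target string and the scoring rule are hypothetical). The loop "improves" a bit string only because the scoring criterion, the preloaded goal, was supplied in advance:

import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical preloaded goal

def score(candidate):
    # Preloaded guidance: without this built-in criterion the loop
    # below would have no direction in which to 'improve'.
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

state = [random.randint(0, 1) for _ in TARGET]
for _ in range(100):
    trial = state[:]
    trial[random.randrange(len(trial))] ^= 1  # random variation
    if score(trial) >= score(state):          # selection against the stored goal
        state = trial

print(score(state), "of", len(TARGET), "positions match the preloaded goal")

Remove the stored TARGET and the selection loop wanders aimlessly; the refinement is real, but the guidance was paid for up front.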
Theorem 20. Natural processes are not capable of creating a CIS. Only sentient, intelligent beings able to identify desired
goals can create a CIS.
The justification for this is two-fold. First, we notice how easily intelligent beings like humans design a CIS, whereas nothing
resembling a CIS occurs in the abiotic universe. Second, by examining in depth how outcomes are guided, we notice that
the resulting bits of improvement are huge. These are calculated by comparing to a reference state which lacks the CIS,
which ultimately means comparison to random processes, or to those guided by natural law.
Every bit represents a factor of two change in probability, where two scenarios are being compared: that the initial state
migrated into the new one via the natural processes already operating vs via deliberate intervention.
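As a worked example of this factor-of-two rule, the following short Python sketch (ours; the probabilities are hypothetical) converts the ratio of the two scenarios' probabilities into bits:

import math

def bits_of_improvement(p_natural, p_with_cis):
    # Each factor-of-two increase in probability corresponds to one bit.
    return math.log2(p_with_cis / p_natural)

# Hypothetical outcome that unguided processes would reach with
# probability 1e-30, but which the CIS reliably ensures (p = 1):
print(bits_of_improvement(1e-30, 1.0))  # about 99.7 bits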
Results from replacing information by CIS
Given the many meanings of 'information', asking where it comes from is too vague and ignores the full picture the Coded
Information System approach offers. The issues already introduced in the literature19,20,1 about information are all subsets of
a CIS. For instance, one can always ask what the source of a coding convention is, and coded messages are part of a CIS.
Or one can ask what guided a particular message to the specific Receiver. CIS theory goes beyond what Shannon looked into: for example,
the decision of when to send a message is also unique to the CIS approach.
The underlying notions presented in this paper lead to different answers than those generally offered about information.
Gitt and others say that multiple copies of an identical message do not provide more information. Once how to accomplish
something has been communicated, extra copies are not considered to offer anything additional. This has been a criticism of
Shannon's approach, where if two communication channels transmit the same message, twice as many bits of information
are claimed. However, the effect of a CIS is to reorganize matter and energy for some purpose. Therefore, if a coded
message is used repeatedly in a CIS at different times and locations, then more matter and energy have been organized.
The effect, in bits, is greater from the universe's point of view. This means that if there are identical copies of a bacterium, the
effects of each of these CISs would be additive. Is this not reasonable? We prefer this view to asking where a gene comes
from and then reporting the bits from only one copy. Furthermore, reuse of a CIS (including after genetic reproduction) requires
the existence of other complex components, which automatically get neglected if only one copy and one event are reported.
Crediting the total effect of multiple copies and reuse gives natural credit to the additional components which make this
possible.
Cellular machines can process similar metabolites using the exact same genes. But in one microenvironment a
nutrient might be present and not in the other. Therefore, the measurable effect of two separate but identical CISs at different
times and places can vary!
The earth contains about 6 × 10^27 g of matter,21 and 12 g of the isotope carbon-12 contains 6 × 10^23 atoms. The number of
entities on Earth is very large, whether we mean atoms or molecules: on the order of roughly 10^50.22 Potentially any entity
on Earth could be associated with any other: as part of a chemical reaction; as part of a new object; or to modify the
properties of other entities. The pairwise distance relationships alone number about (10^50)^2 = 10^100, and merely moving
an entity for a second changes a vast number of them, and sometimes multiple other properties besides the spatial
relationships. The organization of all objects on Earth related to living organisms is a vast undertaking, which places great
demands on the organizing effects of the available CISs. Organizing nature on Earth, with its complex ecosystems, means
rearranging all this matter and energy in the face of the unimaginably large number of possible distributions. The CIS model
credits the contribution to this effort to the multiple copies and reuses of the message-containing information systems.
Gitt considers his theorems laws of nature. For example, Scientific Law of Information (SLI) 3C states, 'It is impossible to
generate UI without an intelligent sender.'23 The justification seems to be that the claimed SLIs should be considered laws
until disproved. This seems like a weak argument, since there are many statements which reflect all known experience so
far and are difficult to disprove. All facts to date support a claim such as, 'It is impossible to build a manned station in another
solar system', but is this a law of nature? We certainly agree that UI (Universal Information) cannot arise by natural
processes. But by UI we mean the whole package, which is a CIS. Based on known science, we are persuaded that bringing
together all the components needed by a CIS, at the right time and location, including a coding convention, is never going to
happen naturally. But we believe the CIS justification is sounder, since quantitative and measurable criteria underlie this belief.
A new view of nature
CISs can be embedded hierarchically. A low-level CIS could synthesize an amino acid, which is embedded in a higher CIS
to produce proteins. The system analysis would now include all factors involved in reproducing DNA;24 decoding DNA;25
regulating the location,26 timing27 and number28 of enzymes (mostly proteins); and the formation of the tertiary29 and
quaternary30 protein structures, including bonding to other biochemicals. An example of a higher-level CIS, with embedded
subsystems, would be a multi-cellular organism, including all the processes needed to develop into the final, mature state. An
example of a still higher-order CIS, with a hierarchy of embedded sub-CISs, would be an ecological system, consisting of a
variety of interacting species.
In Part 3 we provided a figure to help visualize how a series of embedded, refining contributors narrow the range of
behaviour, using a combination of a) coded messages; b) signals; c) preloaded logic processing and knowledge; and d)
engineered components. This is, of course, merely conceptual, and leaves out the exact details used. These four generic
classes of refining contributions can be re-invoked to understand the deeper levels of refinement, level by level.
This analysis offers a new way of looking at the world we live in. Vast quantities of matter and energy have been organized
within hierarchies of dynamic CISs, leading to a cascade of intermediate goals. And our world itself is embedded in higher
CISs as part of ultimate goals.
Conclusion
The CIS model considers the quantitative contribution of all goal-refining components linked by the system. Instead of
asking where information comes from in nature, we propose to ask where Coded Information Systems come from, which
ensures a more complete coverage of all the issues which need to be addressed.
The twenty theorems are based on observation and serve to clarify the key ideas of CIS theory.
Additional examples of CIS are discussed in the on-line appendix to illustrate these principles.31

Genetic code optimisation: Part 1


by Royal Truman and Peter Borger
The genetic code as we find it in nature (the canonical code) has been shown to be highly optimal according to various
criteria. It is commonly believed the genetic code was optimised during the course of an evolutionary process (for various
purposes). We evaluate this claim and find it wanting. We identify difficulties related to the three families of explanations
found in the literature as to how the current 64 → 21 convention may have arisen through natural processes.
The order of amino acids in proteins is determined by information coded on genes. There are over 1.51 × 10^84 possible1
genetic codes based on mapping 64 codons to 20 amino acids and a stop signal2 (i.e. 64 → 21). The origin of code-based
genetics is for evolutionists an utter mystery,3 since this requires a large number of irreducibly complex machines:
ribosomes, RNA and DNA polymerases, aminoacyl tRNA synthetases (aaRS), release factors, etc. These machines consist
for the most part of proteins, which poses a paradox: dozens of unrelated proteins are needed (plus several special RNA
polymers) to process the encoded information. Without them the genetic code won't work, but generating such proteins
requires that the code already be functional.
This is one of many examples of chicken-and-egg dilemmas faced by materialists. Another is the need for a reliable source
of ATP for amino acids to polymerise to proteins: without the necessary proteins and genes already in place, such ATP
molecules won't be produced. In addition, any genetic replicator needs a reliable feedstock of nucleotides and amino acids,
but several of the metabolic processes used by cells are interlinked. For example, until various amino acid biosynthetic
networks are functional, the nucleotides can't be metabolised. These are some of the reasons we believe natural processes
did not produce the genetic code step-wise. We hope to present a detailed analysis of the minimal components needed for a
genetic code to work in a future paper, but this is not the topic we wish to address here.
The literature is full of papers which claim the universal code4 has evolved over time and is in some sense now far better
than earlier, perhaps even near optimal. We cannot address all the models and claims here, but we hope to present a few
thoughts which will show that these claims are flights of fantasy. No real workable mechanism has yet been offered1,3 as to
how a simpler genetic system could have increased dramatically in complexity and in robustness towards mutations. If a
primitive replicator had gotten started, contra all chemical logic, would it be possible according to various evolutionary
scenarios to refine the system to generate the 64 codon → 20 amino acid + stop signal convention used by the standard
genetic code?
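The figure of 1.51 × 10^84 can be reproduced by counting the surjective mappings of 64 codons onto 21 meanings (20 amino acids plus the stop signal), i.e. assignments in which every meaning receives at least one codon. Here is a short Python sketch of that inclusion-exclusion count (our reconstruction of the standard combinatorial calculation; published counts may impose slightly different constraints):

from math import comb

def surjections(n, k):
    # Number of ways to map n distinct codons onto k meanings such
    # that every meaning receives at least one codon, computed by
    # inclusion-exclusion over the meanings that are missed.
    return sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1))

total = surjections(64, 21)  # 64 codons -> 20 amino acids + stop
print(f"{total:.3e}")        # about 1.51e+84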
Origin of any genetic code
Before an evolutionary process could optimise a code, a replicating lifeform must first exist with some kind of information-processing
capabilities. Trevors and Abel published one of the most honest and illuminating papers3 on the issues which
confront a naturalistic explanation for the origin of life. In particular, the origin of an information storing and processing
system, able to guide the synthesis of proteins, is recognized as incomprehensible. In their own words, 'Thus far, no paper
has provided a plausible mechanism for natural-process algorithm-writing.'5 Abel is well known for his attempts to find a
natural origin for the genetic code and a naturalistic explanation of the origin of life. He and The Origin-of-Life Foundation, Inc.
have a standing offer of $1 million to anyone providing a plausible natural solution.6 In stark contrast to the straightforward
honesty of this paper3 are a large number of origin-of-life papers which appeal to no recognizable chemistry and offer
no conceptually feasible path as to how to go from their vague notions to extant genetic systems.
There are three basic approaches7 used by materialists to explain the 64 → 21 mapping of the genetic code: (I) chemical/stereochemical
theories, (II) coevolution of biosynthetically related amino acid pathways, and (III) evolution and optimisation by natural selection to
prevent errors. There is a logic to the order in which we present these three approaches. (I) is closest to the question of a
natural origin for a biological replicator. (II) already requires a large number of complex and integrated biochemical networks
to be in place; attempts to explain the 64 → 21 code mapping at this level would clearly mean ignoring the question as to
where all these molecular machines and genes came from. (III) Evolutionary hypotheses to explain the 64 → 21 mapping at
this level would require assuming all 20 amino acids are already present in a genetic code and that most genes already
code for highly optimised proteins.
(I) Chemical/stereochemical theories
All the suggestions in this area assume some kind of simple starting system being guided by natural chemical processes.
These primitive systems then accumulated vast amounts of complexity and sophistication.
Attempts have been made to find direct chemical interactions between portions of RNA and amino acids.8 These are
supposed to have led to the genetic code. Amino acids might bind preferentially to their cognate codons,9 anticodons,10
reversed codons,11 codon-anticodon double helices12 or other chemical structures. After admitting that there is little evidence
for selective binding of amino acids to isolated codons or anticodons, Alberti13 proposed that chains of mRNA would interact
with special tRNA chains, and short peptides would attach specifically to these tRNAs. Being now brought close together,
the short peptides would polymerise to form proteins. A number of cofactors would stabilize the tRNA-mRNA interactions,
eventually becoming ribosomes. Another set of cofactors would decrease the number of amino acids needed to provide a
specific interaction with the various tRNAs, which today is done by the aaRSs.
Objections. None of the reports in this area reveal any kind of consistent association between codons and the amino acid
expected based on the genetic code.14 The wide variety of chemical systems intelligently conceived in the various scenarios
cannot be justified under free-nature conditions, and excessive freedom exists in the interpretation of such models, undermining
the significance of any particular one.7 Therefore, it is often alleged15 that the original chemical interactions can no longer be
identified through the present coding assignments of the genetic code, but that such putative interactions may have gotten
the process started.16

Figure 1. How the genetic code works. Three specific nucleotides (the anticodon) on a tRNA interact with their cognate
codon on mRNA, thereby adding the correct amino acid to the growing protein chain. The sequence of nucleotides in each
codon on the mRNA determines which tRNA will attach, and this communicates the order of amino acids which are to
constitute a protein. Each tRNA is charged by an aminoacyl tRNA synthetase, using ATP (not shown).
Amino acids created under abiotic conditions are assumed to have been introduced first in a primitive code.17 But, since all
but glycine come in D and L mirror-image forms,18 such a source of amino acids would lead to chaos. In addition, the three
chiral carbon atoms in the ribose of RNA would produce even more stereoisomers in free nature. Furthermore, claiming17,19
that the amino acids found in the Miller experiment would have been the first to be used by a genetic code makes a dope20
out of the reader who accepts this, since geologists today believe the gases used in such experiments have no relevance to
a putative early atmosphere.18,21,22 Subsequent experiments with more reasonable gas mixtures generated very little organic
material and virtually no amino acids at all.18,23,24
At this time, the order in which amino acids are to polymerise is not communicated by the genetic code through direct amino
acid interactions with DNA or RNA polymers. Transfer RNA is used to map codons to their specific amino acids. Three
specific nucleotides (the anticodon) are part of the tRNA molecule, and these interact transiently with their cognate codons
on mRNA. In figure 1 we show how specific codon-anticodon interactions determine which amino acid is coded for by an
mRNA nucleotide triplet. The codon-anticodon interactions must be weak enough to permit separation once no longer
needed, but with sufficient specificity to prevent incorrect binding. But in the absence of additional machinery such as
ribosomes to help hold everything in place, the interactions between codons and the adaptor's anticodon would be too weak
to be of any value. At a distant and physicochemically unrelated portion of the tRNA adaptor a specific amino acid must
therefore be attached (with the consumption of a high-energy ATP molecule) (figure 1).
How is nature supposed to have gone from an initial system, involving a chemical or physical interaction of amino
acid i (AAi, where i represents version 1, 2, 3, ...) with RNA tri-nucleotide i (codoni), to the current scheme based on
adaptor i (adapi)? Two things must now occur simultaneously (see figure 1). One part of a given adaptor i, adapi, must
replace the original AAi/codoni interaction, and to a second part of adapi the same AAi must now be attached (figure 2).
These cannot occur sequentially, as both kinds of bonds must occur simultaneously if the primitive code based on direct
interaction is to be retained. Since the spatial relationship with other amino acids is now very different, any putative chemical
reactions with other amino acids can no longer occur. This means all the amino acid-to-template interactions must be
replaced simultaneously! One cannot have a mixed strategy, since then only part of the putative original polypeptide could
form.

Figure 2. Evolving from direct amino acid-template interaction to an adaptor molecule. Amino acids are claimed37 to have
originally interacted physically with specific triplet nucleotide sequences, forming the ancient basis of the genetic code.
Subsequent insertion of an adaptor molecule, such as tRNA, requires anchoring one end of the adaptor at the original
location of amino acid interaction, and that amino acid must now be covalently bonded at another portion of the adaptor.
Note that no specific kind of interaction (such as formation of an ester bond between template and amino acid) needs to be
claimed, as long as there is a strong preference for interaction between a specific amino acid and some unique nucleotide
sequence.
Figure 3. An adaptor molecule must satisfy various geometric constraints. Amino acids must be attached to adaptor
molecules (e.g. tRNAi) in a manner which permits peptide bonds to form. As shown here, the amino acids can't react
together, since the two amino acids are too far apart. In the absence of a complex molecular machine, such as a ribosome,
it is inconceivable that single adaptor molecules alone could force the reacting amino acids into a suitable geometry to form
the correct kind of chemical bond.
Figure 4. Adaptor molecules must fold reliably to bring the reacting amino acids and cognate codons into the correct
geometry. (A) is the approximate shape of folded tRNA molecules. Base-pairs at strategic locations hold the various arms
together, permitting recognition by the aaRS machinery and reliable anticodon interaction with the cognate mRNA codons.
(B) and (C) represent hypothetical RNA strands which do not fold consistently into reliable structures, or fold into shapes not
suitable for an adaptor.
If the ancestral replicator functioned reliably without an adaptor, the new system using many specialized adaptor molecules
must be at least as effective immediately, otherwise the former would out-populate the new evolutionary attempt.

This means that attachment of AAi to adapi must be highly reliable, as is the case with modern aminoacyl tRNA synthetases.
Among other implications, this requires a reliable source of the different adaptors i = 1, 2, 3, ... (adapi) during the lifetime of
this organism and during the subsequent generations. Specifically, all these adaptor sequences must immediately be
metabolized consistently and in large amounts for the new coding scheme to function.
The adaptor molecules must satisfy several structural requirements. The location where amino acid i is attached to its
cognate tRNAi must be at an acceptable distance and geometry to facilitate formation of the peptide bond (figure 3). Each
kind of adaptor molecule must fold reliably into a consistent three-dimensional structure which is able to bring the reacting
amino acids and cognate codons into the correct geometry with respect to each other (figure 4). In tRNAs this is
accomplished by strategically located base pairing and RNA strands of just the right length.
Even if two sets of tRNA-amino acid complexes were to be bonded simultaneously somewhere along the template, these
won't form a peptide bond in the absence of the carefully crafted translation machinery. Unless carefully engineered, the
adaptors would tangle together with themselves and with the template triplet nucleotides (figure 5). Even if these theoretical
adaptors could hold two amino acids close enough to react, the endothermic peptide-forming reaction isn't going to occur
spontaneously. Formation of a peptide bond in living organisms is driven by high-energy ester bonds between amino acids
and tRNAs, with the help of aminoacyl tRNA synthetases. Theoretical adaptors which merely hold the reactants physically
close together are not sufficient. Should a peptide bond actually form on rare occasions, the resulting molecule would
probably remain covalently bonded to one of the adaptors afterwards (figure 6). One of the design requirements of
ribosomes is to move the mRNA along in a ratchet-like manner, detaching the tRNA whose amino acid has already been
used. For this purpose energy is provided by GTP, and a complex scheme is used to remove the final polypeptide from the
mRNA. This requirement has also been overlooked in the conceptual model presented.

Figure 5. Unless carefully engineered,


evolving adaptors would tangle together
and with the template triplet nucleotides.
RNA, DNA or other sugar template is not
explicitly assumed to permit other
theoretical chemical proposals. HOOC-XiNH2 represent amino acids, where i = 1 toFigure 6. Without a ribosome, dipeptides will rarely form. If a dipeptide should
20, and Xi = CHRi (Ri are the side chains). form, if would remain covalently bonded to one of the adaptors. RNA, DNA or
other sugar template is not explicitly assumed, to permit other theoretical
chemical proposals. HOOC-Xi-NH2 represent amino acids, where i = 1 to 20, and
Xi = CHRi (Ri are the side chains).
If, in spite of the above observations, polypeptides were to start forming, intramolecular reactions, in which the carboxyl end
portion of one amino acid bonds to the amino group of the other amino acid in a growing chain, would dominate (figure 7).
This is simply because the two ends are close to each other and would probably react together before other amino acids
show up to extend the chain length. The ribosome machinery is designed to prevent this from occurring.
Figure 7. Unless deliberately constrained, amino acids undergo intramolecular reactions. The carboxyl end portion of a
growing peptide will almost always react with the amino group at the other end to form an intramolecular amide. n
represents two or more amino acids. HOOC-Xi-NH2 represent amino acids, where i = 1 to 20, and Xi = CHRi (Ri are the side
chains).
Furthermore, peptide bonds involving the side chains of amino acids can also form, leading to complex and biologically
worthless mixtures. For example, amino groups (-NHR) are present on the side chains of the amino acids tryptophan, lysine,
histidine, arginine, asparagine and glutamine, and can react with the carboxylic acid (-COOH) groups of other amino acids.
This is especially true if hot conditions are assumed25 to permit peptide bonds to form. Conversely, some side chains also
have carboxylic acids (aspartate and glutamate), which can form amides with any amino group. The highly complex portions
of the ribosome machinery were designed to prevent such undesirable side reactions from occurring, by holding the
functional groups precisely in place to guide the peptide reactions, and by isolating the functional groups which are not
supposed to react together. This very problem is a real issue with automated peptide synthesis chemistries used today,
requiring complex side-chain blocking strategies in order to allow the correct peptide extension reactions.
Alberti, mentioned above,13 introduced a different scenario: the adaptor is part of the genetic apparatus from very early on.
Basically, one must assume that mRNAs, ribosomes, amino acids and tRNAs all came together long ago with a minimum of
complexity. Then evolution performed a series of unspecified steps approaching the miraculous, resulting in the genetic
code. The initial system somehow added a multitude of molecular tools and was relentlessly fine-tuned. Any other
evolutionary model based on similar premises would closely resemble what he proposes in many details, and the necessary
subsequent stages must occur if these assumptions are used. Therefore, it is worthwhile to devote some thought as to
whether the various processes could reasonably occur naturally. Our comments necessarily apply to other possible variants
of the basic thesis.
Figure 8. Co-evolution of tRNA, mRNA and polypeptides is assumed to have led to the genetic code. Different peptides are
assumed to be able to interact uniquely with a sequence-specific tRNA, which itself base-pairs at a specific portion of an
mRNA. The helical symbols are alpha-helix polypeptides. (A) Sequence-specific interactions between ancestral tRNAs and
portions of peptides are assumed to have formed, and between these tRNAs and longer regions of mRNA. (B) Different
sequence-specific tRNAs are assumed to attach to portions of mRNA, thereby bringing their attached amino acids close
together. (C) A trans-esterification reaction between tRNA-bound peptides is assumed to have occurred in the ancestral
genetic code. (D) Release of tRNA which no longer has an amino acid attached is shown, permitting further polymerization.
(From Alberti13.)
The basic notion is shown in figure 8. In practice we will show that virtually none of the necessary claims in such scenarios
would work. From start to end, chemical and physical realities are abused.
Nature does not produce stereochemically pure polypeptide and polyribonucleotide chains. Therefore, there is no way to
initiate a minimally functional proto-code. First, there is the problem of the source of optically pure26 starting materials.
Second, in an aqueous solution, a maximum of 8 to 10 RNA-mers can polymerise,27 and polypeptide chains would be even
shorter, even after optimizing for temperature, pressure, pH and concentration of amino acid, plus addition of CuCl2 and
rapidly trapping the polypeptide in a cooling chamber.28,29 The reactants would be extremely dilute, since the thermodynamic
direction would be to hydrolyse back to the starting materials.
Alternative, non-aqueous environments, such as the side of a dry volcano, would be chemically unpromising. If optically
pure nucleotides and amino acids were present under dry, hot reaction conditions, then larger molecules would form. But
the result would be gunk or tar, since a complex mixture of three-dimensional non-peptide bonds would form.30
The great majority of random chains of amino acids, even if optically pure, do not conveniently form complex secondary
structures such as helices, as assumed (figure 8). 13 It is certainly true that alpha-helices of specific extant proteins do
interact at precise portions of DNA; but this is neither coincidence nor a universal feature, and is caused by a precisely
tailored set of spatial and electrostatic relationships, designed to serve a regulatory function.
A large collection of mRNAs and tRNAs is needed at the same time and place. And these must provide or transmit the
information to specify protein sequences! Sections of mRNAs must have exact sequences, and the complementary tRNAs
to base-pair with them must already be available. Not only must the sequences be correct, their order with respect to each
other must also be correct. And there must be a large number of such mRNAs, since many different proteins are needed.
With a palette of only four nucleotides (nt), even a minuscule chain of 300 nucleotides offers 4^300, or about 4 × 10^180,
alternatives (ignoring all the structural isomers which could also form), the vast majority of which would be worthless. What
natural process, then, could have organized or programmed the mRNAs, and created the necessary tRNAs?
This is a fatal flaw in such models. The proportion of random polypeptides based on the 20 amino acids which are able to
fold reliably, to offer the chance of producing a useful protein, is minuscule,31,32 on the order of one out of 10^50. To provide
the necessary information to generate one of the useful variants, something must organize the order of the bases (A, G, C
and U) in the mRNAs. But nothing is available in nature which organizes the nucleotides into informationally meaningful sequences.
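The numbers quoted in the last two paragraphs are easy to verify with exact integer arithmetic; a few lines of Python (our illustration):

# Sequence space for a 300-nt chain, four bases per position:
n_sequences = 4 ** 300
print(f"{n_sequences:.1e}")  # about 4.1e+180, i.e. ~4 x 10^180

# If only ~1 in 10^50 random polypeptides folds reliably, even a
# hypothetical random library of a billion chains is expected to
# contain essentially none:
print(1e9 * 1e-50)  # 1e-41 expected foldable sequences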
All the various peptides which need to be condensed together must be present. Where did these come from? Alberti writes,
'Relatively short peptides (down at least to 17mers) recognize short specific sequences of double-stranded RNA or DNA.'33
The environment of the double-strand chain offers far more useful physicochemical patterns to recognize than the
single-strand tRNA in the model, and even then, this would represent about one correct sequence out of 10^22 (= 20^17). Where
did these peptides come from, and how was generation of the vast majority which are not desired avoided? Note that the
necessary peptides would be of different lengths, depending on what needs to be recognized on a specific tRNA.

Figure 9. Peptides will not always associate at the same location of the same tRNA. Many kinds of interaction between
tRNA and peptides can occur. For example, ester formation using a free OH group in ribose could occur at many alternative
positions. A, B and C illustrate three examples.
Whether through ester bonds, weak hydrogen bonds or other interactions, without specific base-pairing as mediated by
nucleotide polymers, all the countless varieties of polypeptides would not associate consistently at the same location on a
tRNA-like molecule. For example, any free hydroxyl group of ribose is free to react with the carboxyl group of the peptide,
forming an ester. All kinds of van der Waals or hydrogen-bond interactions could also occur (figure 9). Therefore, the
location of the peptide will not be reliably determined by any particular codon of the mRNA template.
The mRNA-tRNA interaction alone is not reliable: it would require a considerable number of suitably located base-pairings
between these strands over long regions, especially in the absence of any repair machinery, which is absurd. There will
often be internal single-strand loops (figure 10) on the tRNA and mRNA. This will prevent a single codon on the mRNA from
specifying uniquely and reliably the location of a putative polypeptide attached to the tRNA.
Figure 10. Imperfect base-pairing between the primitive mRNA-tRNA strands would lead to variable placements of the
amino acid associated with the ancestral tRNA. Different ancestral tRNA and mRNA strands could base-pair by chance at
various locations. Nature would not accidentally provide regions of both molecules which just happen to base-pair perfectly
at the right locations, and simultaneously provide a region on the tRNA at which the right polypeptide would preferentially
interact. Imperfect base-pairing and coincidences would lead to internal loops on tRNA and on mRNA. Any amino acid or
polypeptide attached to the tRNA will then show up at different positions along the templating mRNA. Even if polypeptide
chains would form, their sequences would be random, since nothing resembling a code would exist.
It is important to understand what the author is calling tRNA (see figure 8A).13 Key to his reasoning is that 'Sequence-specific
interactions between polypeptides and polynucleotides would result in the accumulation of specific
polypeptide-polyribonucleotide pairs.'25 'Proximity between a peptide and an RNA molecule is likely to favour the formation
of ester bonds between them.'25 The author assumes this results in the ancestral tRNA. Each such tRNA consists of a
specific polypeptide sequence (and not a single amino acid), which is chemically bonded to a unique single-strand RNA
(figure 8B). Multiple tRNAs must then strongly base-pair to a 'matrix mRNA'25 and be held 'rigidly'25 at specific locations on
the template mRNA. But a new peptide bond can only form between adjacent tRNAs if these are able to come into contact.
This implies they must be attached at the ends of tRNAs, as shown in the original literature drawings,13 and that the tRNAs
must be located close together on the mRNA. Otherwise the ester bond (between the peptide and RNA to form tRNA)25
would be buried and inaccessible to the amino group of the second peptide it is to bond with. In figure 11 the carboxyl group
of tRNA1 is shown inaccessible to the amino group of tRNA2.
To produce the tRNAs, the author assumes that portions of alpha-helices, each consisting of a different series of amino
acids to provide specificity, would ensure the unique interactions.25 However, random peptides can fold in an almost infinite
number of ways and will not form alpha-helices only at specific locations (especially if racemic mixtures of amino acids are
used). We must assume that polypeptide chains formed under natural conditions would almost always be amorphous
polymers.
Perhaps there is an alternative to having to place proto-tRNAs very close together along the matrix mRNA. Suppose the
locations of the tRNA:mRNA base-pairings were more flexible, permitting the tRNAs eventually to come close enough to
react. This would happen when portions of tRNA and mRNA cannot base-pair, forming small bulges; or if a tRNA were to
dissociate from the mRNA and find itself in the vicinity of another tRNA it could react with. In other words, where the reacting
tRNAs are actually located with respect to the template mRNA would vary. However, this would then destroy the notion of
the ancient mRNA strand being a true coding template. It would not specify protein sequences, nor permit the elimination of
tRNA-mRNA base-pair interactions (with the help of undefined cofactors to hold tRNA and mRNA together) converging to
the single codon used in the genetic code.
In this grand mixture of tRNAs and mRNAs, what is to prevent their cross base-pairing? This would permit all the wrong
kinds of peptides to be brought together where they could also polymerise. As peptide chains lengthen, they will start to fold
into three-dimensional structures which would surround the esterized point of attachment with the tRNA. For steric reasons,
this would prevent other tRNAs from attaching in the area on the same mRNA, and prevent the functional groups which are
to react from approaching each other. Such a system has no means of self-replicating. Furthermore, postulating multiple
covalent ester bonds implies some kind of hot, dry environment, which is inconsistent with the favoured evolutionary
environments presented as candidates for where life would have arisen.
Figure 11. Ester bonds between peptides and templating mRNAs would be buried in polypeptide chains, preventing further
polymerization. In ribosomes the protein chains being formed are held in place such that the reactive carboxyl (-COOH) and
amine (-NH2) groups can easily react together, no matter how large the growing protein becomes. This fact is overlooked in
the simplistic model being discussed. As the protein grows, the ester bond would become ever more protected by a mass of
amorphous polypeptide. After a short polypeptide has formed, further polymerization would be prevented, since the carboxyl
and amine functional groups won't come into contact.
Our greatest objection: nothing which needs to be explained has been seriously addressed. Precisely what are these
cofactors which are supposed to permit evolution to real ribosomes and aaRSs? These machines (ribosomes, aaRSs, etc.)
require dozens of precisely crafted proteins, and it would take multiple miracles to generate precise molecular tools to
systematically replace the base-pairings used to link the tRNA and mRNA strands,13 leaving only the codon-anticodon
interactions. This is how modern ribosomes supposedly eventually arose. Note that in the earlier evolutionary stages a huge
number of unique base-pairings were postulated, which permitted unambiguous association of each ancestral tRNA with a
precise portion of an mRNA. In the model, these base-pairings are systematically eliminated, but the specificity (i.e. which
tRNA attaches to which portion of an mRNA) must not be lost. Concurrently, other undefined evolving cofactors are
responsible for eventually linking a single amino acid to the correct tRNA, as modern aaRSs do. Is this feasible?
According to the model,13 initially a multitude of different polypeptides (with 17 or more residues)13 each bonded to a specific
RNA, leading to an ancient tRNA. (By tRNA the author actually means an ancient charged tRNA which carries a polypeptide
and not a single amino acid.) Twenty amino acids at seventeen positions leads to 20^17 = 1.3 × 10^22 possible tRNAs, plus
many others having longer or shorter attached polypeptides. The carboxyl and amino ends of these large polypeptides then
bond to form the primitive proteins (figure 11). The author does not explain how a tiny fraction of the more than 10^22
alternatives was selected, nor does he consider whether the minuscule subset used would suffice to provide the minimal
biological needs based on such crude proteins.
In the modern code, every residue of each protein is coded for, which permits any sequence of residues to be produced.
The proposed ancient code, however, would only be able to code for individual large, discrete amino acid blocks.
Alberti believes that shorter and shorter polypeptide chains would eventually be needed to identify the correct RNA they
must bond to. This process must culminate in true aaRSs, which charge a single amino acid to a specific RNA strand (i.e.
real tRNAs). (Recall that initially longer polypeptides, which form alpha-helices, would be required to permit specific
identification of the RNA they are to form an ester bond with.) The author has provided no details which justify the claim that
unguided nature could produce this effect with cofactors or any other natural method.
But yet another fundamental point has been overlooked. It is assumed that originally discrete blocks of polypeptide bonded
together, providing the necessary proteins. Amino acids are now being eliminated, leading to shorter blocks. As the
polypeptides attached to the RNA strands shorten, different sequences would bond to the same RNA strand as before,
producing an evolving code in which each tRNA would be charged with different polypeptides. It is not obvious why
modification of an individual block by eliminating amino acids would still lead to acceptable primitive proteins. And evolving
all the blocks would lead to utter chaos: the exact same mRNA would now produce vastly different protein versions. As
cofactors are introduced between proto-tRNAs and mRNAs, and between peptides and tRNAs, the spatial relationships
permitting the earlier bonding of peptides together will be destroyed.
Instead of continuing with these kinds of vague chemical hypotheses, it seems more sensible for evolutionists to avail
themselves of any chemical materials they wish (knowing full well they were of biological origin) and to show in a laboratory
something specific and workable. If intelligently organizing all the components in any manner desired (besides simply
reproducing an existing genetic system) can't be made to work, then under natural conditions, with > 99.999%
contamination, UV light and almost infinite dilution, a code-based replicator is simply not going to arise.
(II) Coevolution of biosynthetically related amino acid pathways
In this view, the present code reflects a historical development. New, similar amino acids would evolve over time from
existing synthesis pathways and be assigned to similar codons. Several researchers claim34 that biosynthetically related
amino acids often have codons which differ by only a single nucleotide. It is also claimed35 that the class II synthetases are
more ancient than class I, and so the ten amino acids served by class II would have arisen earlier in the development of the
genetic code.
Objections. We cannot provide a thorough analysis of this hypothesis here. The argument is weakened considerably,
however, by the fact that many amino acids are interconvertible. Even randomly generated codes show similar associations
between amino acids which are biosynthetically related,34 and it is not at all clear which amino acids are to be considered
biosynthetically related.36 Nature would have had to experiment with many possible codes and create many new
biochemical networks to provide new amino acids to test. This would require novel genes. Nature cannot look ahead and
sacrifice for the future, so none of the multitude of intermediate exploratory steps can pass through deleterious stages. This
poses impossible challenges to what chance plus natural selection could accomplish. We discussed the notion of testing
different codes elsewhere.1
If only a subset of amino acids were used in an earlier life form, the necessary evidence should be available. The highly
conserved proteins, presumed to be of very ancient origin, should demonstrate a strong usage of the originally restricted
amino acid set. This expectation is especially true if the extant sequences demonstrate little variability at the same residue
positions. Furthermore, the first biosynthetic pathway could only have been built with proteins based on the amino acids
available at that time. The residue compositions of members from both ancient and more modern pathways could be
compared to see if a bias exists. Is it unreasonable to demand this kind of supporting evidence? Suppose someone reported
that the proteins of the class II synthetase machinery relied on only the amino acids produced thereby. Every evolutionist
alive would use this as final and conclusive proof for the theory. Then why should one be reluctant to make such a
prediction? Without having looked at the data yet, we predict this will not be the case.
(III) Evolution and optimisation to prevent errors
Some have proposed37-39 that genetic codes evolved either to minimize errors during translation of mRNA into protein, or to
minimize the severity of the outcome40,41 which results. A similar proposal40,42 is that the effects of amino acid substitution
through mutations are to be minimized, by decreasing both the chances of such substitutions occurring and the severity of
the outcome should they occur. It would be desirable if random mutations would merely introduce residues with similar
physicochemical properties.43,44
Amino acids can be characterized by at least 134 different physicochemical properties,45 raising the question as to which
property or cluster of properties is most important. For example, measures of amino acid volume seem less important than
polarity criteria.46 In addition, C→G mutations tend to be more frequent than A→U mutations,47 which an optimised genetic
coding convention would need to take into account. Transition mutations48 tend to occur more frequently than transversion
mutations.48 During translation (and DNA replication), transitional errors are the most likely, since mistaking a purine for the
other purine, or a pyrimidine for the other pyrimidine, is more likely for stereochemical reasons. Therefore, the best genetic
codes would provide redundancy such that the most likely translation errors or mutations would very often result in the same
amino acid.
Freeland and Hurst49 took this into account when using a computer to compare a million randomly generated codes, each
having the same pattern of codon assignments to different amino acids as the standard code. Using a measure of
hydrophobicity as the only key attribute to be protected by a coding convention (and taking nucleotide mutational bias into
account), they found only one code out of a million which, by the hydrophobicity criterion alone, would be better. We are
convinced that taking more factors to be optimised into account would reveal this proportion to be much smaller. (A
simplified sketch of this kind of comparison follows below.)
Hydrophobicity reflects the tendency of amino acids to avoid contact with water and to be present in the buried inner core of
folded proteins. Unfortunately, no best measure of hydrophobicity for amino acids has been agreed upon, and at least 43
different laboratory test methods have been suggested.50 The different criteria often lead to very different rankings of amino
acid hydrophobicity.50 Others have thought that mutability played an important role: robustness was important for the
conservation of some proteins, but mutability was required to permit evolution also.51 Still others have focused on the overall
effects of mutations on protein surface interactions with solvent,52 which lead to protein secondary features such as alpha
helices and beta sheets.53
Having the option of using different codons to code for the same amino acid can be advantageous. For example, if a low
concentration of a protein is desired, synonymous codons can be used which lead to slower translation,54,55 taking
advantage of the fact that the corresponding charged tRNAs are often present in very different proportions. If a specific
tRNA is present in only a low concentration, the target codon must wait much longer to be translated than if the tRNA is
highly available. Sharp et al. reported56 that highly expressed genes indeed preferentially use those codons which lead to
faster translation; this is realized by maintaining different concentrations of the corresponding charged tRNAs. Translation of
an mRNA can also be deliberately slowed down, if a rare codon being translated by a ribosome needs to wait until the
appropriate charged tRNA stumbles into that location, for example to give time for a portion already translated to initiate
folding.57
It is not obvious which property or properties of amino acids should be conserved in the presence of mutations. One
suggestion by Freeland and colleagues16 is to use point accepted mutation (PAM) 74-100 matrix data. Comparing aligned
versions of genes which have been mutating (in organisms presumably sharing a common ancestor) for about 100 million
years would presumably reveal which amino acid positions are more variable or, on the other hand, more intolerant to
substitution. The authors then examined whether the assignment of synonymous codons protected against such changes,
and concluded58 that 'the universal genetic code achieves between 96% and 100% optimisation relative to the best possible
code configuration'.
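The flavour of this kind of computer comparison can be conveyed with a small Python sketch (our simplification, not Freeland and Hurst's actual procedure: we use the Kyte-Doolittle hydropathy scale, uniform single-nucleotide errors, no mutational bias, and only the eight 'family box' codon blocks rather than the full code). It shuffles the amino acid assignments among the synonymous blocks and asks how often a random assignment outperforms the canonical one:

import random
from statistics import mean

# Kyte-Doolittle hydropathy values for the eight amino acids used.
HYDRO = {'L': 3.8, 'V': 4.2, 'S': -0.8, 'P': -1.6,
         'T': -0.7, 'A': 1.8, 'R': -4.5, 'G': -0.4}

# Eight real 'family boxes' of the standard code: the third base is
# irrelevant, so each two-base prefix defines four synonymous codons.
BLOCKS = {'CU': 'L', 'GU': 'V', 'UC': 'S', 'CC': 'P',
          'AC': 'T', 'GC': 'A', 'CG': 'R', 'GG': 'G'}

def build(assignment):
    return {p + b: assignment[p] for p in BLOCKS for b in 'UCAG'}

def cost(code):
    # Mean squared hydropathy change over all single-nucleotide
    # substitutions that land on another codon in the set.
    diffs = []
    for codon, aa in code.items():
        for pos in range(3):
            for b in 'UCAG':
                mutant = codon[:pos] + b + codon[pos + 1:]
                if mutant != codon and mutant in code:
                    diffs.append((HYDRO[aa] - HYDRO[code[mutant]]) ** 2)
    return mean(diffs)

random.seed(0)
canonical = cost(build(BLOCKS))
aas = list(BLOCKS.values())
better = 0
TRIALS = 10_000
for _ in range(TRIALS):
    random.shuffle(aas)  # reassign amino acids among the blocks
    if cost(build(dict(zip(BLOCKS, aas)))) < canonical:
        better += 1
print(f"canonical cost = {canonical:.2f}; "
      f"{better} of {TRIALS} shuffles score better")

A full analysis along Freeland and Hurst's lines would use measured polar-requirement values, weight transitions over transversions, and score all 64 codons.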
Mechanisms for codon swapping. There are various scenarios1 as to how codons could begin to code for a different
amino acid. According to the Osawa-Jukes model,59 mutations cause some codons to disappear from the genome, and the
relevant tRNA genes, being superfluous, disappear. At this point these genomes would not have all 64 of the possible
codons present in protein-coding regions. This process is thought to be caused by a mutational bias leading to higher A-T or
G-C genome content. When this mutational bias later reverses, the missing codons would begin to show up somewhere in
the genome. These could no longer be translated, since the corresponding tRNA is lacking. But duplication of a tRNA gene
followed by mutations at the anticodon position might permit recognition of the new codon on the mRNAs, which would now
translate to a different amino acid.
The Schultz-Yarus model60 is similar, but permits the codon to remain partially present in the genome. Mutations on a
duplicated tRNA produce a different anticodon, or a new amino acid charging specificity, and thereby ambiguous translation
of a codon (i.e. the same codon could be identified by different tRNAs). Natural selection would then optimise a particular
combination. Incidentally, in some Candida species CUG will encode either serine or leucine,60 depending on the
circumstances.
Objections. We have discussed various difficulties with the notion of trial-and-error attempts to find better coding
conventions elsewhere.1 There are over 1.5 × 10^84 codes which could map 64 codons to 20 amino acids plus at least one
stop signal.1 This is a huge search space, and most of the alternatives would have to be rejected. But when would nature
know that a better or worse coding convention is being explored? Several stages are needed.
Many genes would have to be functionally close to optimal, so that natural selection could identify when random mutations
produce inferior versions. This means that an unfathomably large number of mutational trials would be needed to produce
many optimal genes. Interference from a mutating genetic code would hinder natural selection's efforts. One or more codons
would have to be recoded and the effects throughout the whole genome ascertained. During this process many codons
would be ambiguous, such that a myriad of protein variants would be generated by almost all genes, in the same individual.
Natural selection would be faced with a continuously changing evaluation as to whether the evolving codon would be
advantageous.
One evolving coding convention needs to be completed before another one can be initiated. For example, if, during the
interval when 70% of the time a codon leads to amino acid 'a' and 30% of the time to 'b', additional codons were also to
become ambiguous, cellular chaos would result. Besides, we see nowhere in nature examples of a multitude of ambiguous
codons present simultaneously in an organism.
Generating a new code demands removing the means of producing the original coding option. Depending on the
mechanism of code evolution, this could mean removing duplicate tRNA or aaRS variants throughout the whole population.
This is going to be near impossible, since the selective advantage would be minimal, and at best it would consume a huge
amount of a key evolutionary resource: time.
Nature can't know in advance which coding convention would eventually be an improvement. An initial 0.1% ambiguity in a
single codon, which may be limited to a single gene (such as the case of specific chemical modifications of mRNA), is
hardly going to be recognized by natural selection. Note that this 0.1% alternative amino acid would be distributed randomly
across all copies of this codon on a gene, and the resulting proteins would be present in multiple copies. The alternative
residue would be present in only a small, random minority of these proteins (a short calculation below illustrates this).
Once a new code has been fixed, this limits the direction future evolutionary attempts can take. There is no mechanism in
place to allow a return to a previous code once it was abandoned, other than to re-evolve back to that system. Given the
large number of unrelated factors which determine prokaryote survival, from the external environment to the quality of the
genetic system, natural selection would not be provided with any consistent guidance. The rules would change constantly.
And a multitude of criteria need to be taken into account simultaneously in deciding what to do with each codon. Codons are
used by several codes not related to specifying amino acids,61 and the relative importance of the trade-offs will change
constantly.
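The point about the invisibility of a 0.1% ambiguity can be quantified with a few lines of Python (ours; the copy numbers are hypothetical). If a gene contains k copies of the ambiguous codon, each misread independently with probability p, the fraction of protein molecules carrying at least one alternative residue is:

# Fraction of protein copies with at least one alternative residue,
# given k ambiguous-codon copies each misread with probability p.
p = 0.001  # 0.1% ambiguity per codon reading

for k in (1, 5, 20):  # hypothetical copy numbers of the codon in a gene
    affected = 1 - (1 - p) ** k
    print(f"k = {k:2d}: {affected:.3%} of protein molecules affected")

Even with twenty copies of the codon, only about 2% of the protein molecules differ, and they do so at random positions in a random minority, far below anything natural selection could plausibly track.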
Discussion
We believe the genetic apparatus was designed, and agree there must be a logical reason for the codon → amino acid
mapping chosen. We suspect that protection against the effects of mutations is indeed one of the factors which went into the
choice made. This would require foreknowledge of all the kinds of genes needed by all organisms, and a weighting of the
damage each kind of amino acid substitution could cause. Optimal design may also require variants of the code to be used
for some of the intended organisms. But we wish to emphasize that the code which determines amino acid order in proteins
is not the whole story. Many other codes61-63 are superimposed on the same genes and noncoding regions, and these must
also be taken into account in the design of the code. Various nucleotide patterns are used for DNA regulatory and structural
purposes. DNA must provide information for many other processes besides specifying protein sequences. These
requirements affect which code would be universally optimal.
Interestingly, a design theoretician may well make a similar suggestion to that of Freeland and colleagues16 mentioned
above, but based on other reasoning. To a first approximation, the optimal designs of the same protein in different
organisms would be similar. For various reasons, occasionally substituting an amino acid would be better. For example, in
hot environments the proteins may have to fold more tightly, whereas this design could prevent enzymatic activity under
cooler conditions, by embedding a reactive site too deeply in a rigid hydrophobic core. In general, optimal protein variants
must often use residues with similar properties, such as hydrophobicity or size, at a given position. The genes would then be
similar not due to common descent but due to design requirements. Mutations would subsequently generate
less-than-optimal variants which would still be good enough.
An intelligently planned genetic code would have taken this into account. Therefore, to a first approximation, comparing
aligned genes and determining substitutability patterns would indeed provide useful information as to amino acid
requirements and the use of alternatives. If enough taxa living in many environments are used as a dataset, we should be
able to obtain a good idea as to the amount of variability homologous proteins would display. Of course noise, in the form of
random mutations, will also be present. Knowledge of the other superimposed codes, not responsible for coding for protein
sequences, would permit even better quantification of how optimal the standard code really is. The various alternative codes
must satisfy many design requirements, and the optimal one will do best for all the demands placed on it.
There is, however, one key difference in the reasoning. We propose that the designer knew what the ideal protein
sequences should be, and therefore which needed protection from mutations, and all the other roles nucleotide sequences
need to play. The evolutionist here has a problem. Fine-tuning hundreds or thousands of genes concurrently via natural
selection to produce a near-optimal ensemble is absurd. During the time when the regulation of biochemical networks and
enzymes was being optimised, the rules, in the form of the code, would also be changing. Yet a Last Universal Common
Ancestor (LUCA) supposedly already had thousands of genes64 and the full set of tRNA synthetases and tRNAs7 about
2.5 billion years ago.65 Actually, other lines of reasoning66 have led to the belief that the genetic code is almost as old as our
planet. In other words, it had virtually no time to evolve, and yet it is near optimal in the face of over 1.5 × 10^84 alternative
64 → 21 coding conventions.
We see evidence everywhere of cellular machinery designed to identify, ameliorate and correct errors. In sexually
reproducing organisms we observe that genes are present in duplicate, which mitigates the effects of many deleterious
mutations and thereby helps organisms retain morphologic function. Many evolutionists now propose that nature has
attempted to conserve complex functionality from degradation. All this implies that a highly optimal state has been achieved
which nature is trying to retain. More consistent with evolutionary thought would be proposals which encourage evolvability
or adaptation. Evolution from simple to specified complexity is not achieved by hindering change.
Summary
A key element in evolutionary theory is that life has gone from simple to complex. But requiring the minimal components of a
genetic code to be simultaneously in place without intelligent guidance is indistinguishable from demanding a miracle.
Searches for simpler or less optimal primitive genetic codes were not motivated by any empirical evidence. Once the
possibility of Divine activity has been excluded as the causal factor, an almost unquestioning willingness to accept absurd
notions is created among many scientists. After all, it must have happened!
We conclude that no one has proposed a workable naturalistic model that shows how a genetic code could evolve from a
simpler into a more complex version.
Evidence for the design of life: part 1 – Genetic redundancy


by Peer Terborg
Knockout strategies have demonstrated that the function of many genes cannot be studied by disrupting them in model
organisms because the inactivation of these genes does not lead to a phenotypic effect. For living systems, this peculiar
phenomenon of genetic redundancy seems to be the rule rather than the exception. Genetic redundancy is now defined
as the situation in which the disruption of a gene is selectively neutral. Biology shows us that 1) two or more genes in an
organism can often substitute for each other, 2) some genes are just there in a silent state. Inactivation of such redundant
genes does not jeopardize the individual's reproductive success and has no effect on survival of the species. Genetic redundancy is the big surprise of modern biology. Because there is no association between redundant genes and genetic duplications, and because redundant genes do not mutate faster than essential genes, redundancy brings down more than one pillar of contemporary evolutionary thinking.

Figure 1. To create a mouse knockout for a particular gene, a selectable marker is integrated into the gene of interest in an embryonic stem cell. The marker disrupts (knocks out) the gene of interest. The manipulated embryonic stem cell is then injected into a mouse oocyte and transplanted back into the uterus of a pseudo-pregnant mouse. Offspring carrying the interrupted gene can be sorted out by screening for the presence of the selection marker. It is now fairly easy to obtain animals in which both copies are interrupted through selective breeding. Mendel's law of independent segregation assures that crossbreeding littermates will produce individuals that lack both genes.

The discovery of the primary rules governing biology in the second half of the 20th century paved the way for a more fundamental understanding of the complexity of life. One of the spin-offs of this knowledge has been the development of sophisticated techniques to elucidate the function of proteins. When molecular biologists want to know the function of a particular human protein they genetically modify a laboratory mouse so that it lacks the corresponding gene (for the laboratory procedure see figure 1). Mice that have both alleles of a gene interrupted cannot produce the corresponding protein; they are called knockouts. Theoretically, the phenotype of a mouse lacking specific genetic information could provide essential information about the function of the gene. Over the years, thousands of knockouts have been generated. The knockout strategy has helped elucidate the functions of hundreds of genes and has contributed immensely to our biological
knowledge. However, there has been one unexpected surprise: the no-phenotype knockout. This is unexpected because, according to the Darwinian paradigm, all genes should have a selectable advantage. Hence, knockouts should have measurable, detectable phenotypes. The no-phenotype knockouts demonstrate that genes can be disrupted without, or with only minor, detectable effects on the phenotype. Many genes seem to have no measurable function! This is known as genetic redundancy and it is one of the big surprises of modern biology.
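The breeding step in figure 1 follows directly from Mendelian segregation: crossing two littermates that each carry one interrupted allele (+/-) yields, on average, one homozygous knockout (-/-) in four offspring. A minimal sketch (illustrative allele labels, not a laboratory protocol):

import itertools, random

def offspring(parent1, parent2):
    """Each parent passes one randomly chosen allele to the offspring."""
    return (random.choice(parent1), random.choice(parent2))

het = ("+", "-")  # heterozygote: one intact, one interrupted allele

# Exact Punnett-square expectation: 1/4 of offspring are -/- knockouts
punnett = list(itertools.product(het, het))
print(punnett.count(("-", "-")) / len(punnett))  # 0.25

# Simulated litters give the same fraction, within sampling noise
pups = [offspring(het, het) for _ in range(10_000)]
print(sum(p == ("-", "-") for p in pups) / len(pups))  # ~0.25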
Molecular switches
One of the most intriguing examples of genetic redundancy is found in the SRC gene family. This family comprises a group
of eight genes that code for eight distinct proteins, all with a function that is technically known as tyrosine kinase activity. SRC proteins attach phosphate groups to other proteins that contain the amino acid tyrosine in a specific amino acid context. The result of this attachment is that the target protein becomes activated; it is switched on, and can hence pass down information in a signalling cascade. Four closely related members of the family are named SRC, YES, FYN and FGR, and the other related members are known as BLK, HCK, LCK and LYN. Both subgroups are so-called non-receptor tyrosine kinases, and transmit signals from the exterior of the cell to the nucleus, the operation centre where the information present in the genes is transcribed into messenger RNA. The proteins of the SRC gene family operate as molecular switches that regulate growth and
messenger RNA. The proteins of the SRC gene family operate as molecular switches that regulate growth and
differentiation of cells. When a cell is triggered to proliferate, tyrosine kinase proteins are transiently switched on, and then
immediately switched off. The SRC gene family is among the most notorious genes known to man, since its members can cause cancer as a consequence of single point mutations. A point mutation is a change in a DNA sequence that alters only one single nucleotide (a DNA letter) of the entire gene. When the point mutation is not on a silent position, it will cause the organism's protein-making machines to incorporate a wrong amino acid. The consequence of the point mutation is that the organism
now produces a protein that cannot be switched off. Mutated SRC genes are of particular danger because they will
permanently activate signalling cascades that induce cell proliferation: the signal that tells cells to divide is permanently
switched on. The result is uncontrolled proliferation of cellscancer. The growth-promoting point mutations cannot be
overcome by allelic compensation, because a normal protein cannot help to switch off the mutated protein. Despite the SRC protein being expressed in many tissues and cell types, mice in which the SRC gene has been knocked out are still viable. The only obvious characteristic of the knockout is the absence of two front teeth, due to osteopetrosis. In contrast, essentially no point mutations are allowed in the SRC protein without severe phenotypic consequences. Amino-acid-changing point mutations in most, presumably all, of the SRC genes can lead to uncontrolled cellular replication.1 Knockout mouse models have been generated to reveal the functions of all the members of the SRC gene family. Four out of eight knockouts did not have a detectable phenotype. Despite their cancer-inducing properties, half of the SRC genes appear to be
redundant. Standard evolutionary theory tells us that redundant gene family members originated through gene duplications.
Duplicated genes are truly redundant and as such they are expected to reduce to a single functional copy over time through
the accumulation of mutations that damage the duplicated genes. Such mutations can be frame-shift mutations that
introduce premature stop signals, which are recognized by the cellular translation-machines to terminate protein synthesis.
The existence of the SRC gene family has been explained as follows:
In the redundant gene family of SRC-like proteins, many, perhaps almost all point mutations that damage the protein also
cause deleterious phenotypes and kill the organism. The genetic redundancy cannot decay away through the accumulation
of point mutations.1 This scenario implies that the SRC genes are destined to reside in the genome forever. Point mutations that immediately kill raise an intriguing origin question. If the SRC genes are really so potently harmful that point mutations induce cancer, how could this extended gene family come into existence through gene duplication and diversify through mutations in the first place? After the first duplication, neither of the genes is allowed to change, because any change will invoke a lethal phenotype and kill the organism through cancer. Amino-acid-changing mutations in the SRC genes will permanently be selected against. The same holds true for the third, fourth and additional gene duplications. New gene copies are only allowed to mutate at neutral sites that do not replace amino acids in the protein. Otherwise the organism will die from tumours. Because of this purifying selection mechanism, the duplicates should remain as they are. Yet the proteins of the SRC family are distinctly different, sharing only 60–80% of their sequences.
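The difference between a silent position and an amino-acid-changing position is easy to see with the standard codon table. In the sketch below (using a few standard codon assignments), a single-letter substitution at the third position of a serine codon is silent, while the same kind of change at the second position swaps in a different amino acid:

# A few entries from the standard genetic code (codon -> amino acid)
CODON_TABLE = {
    "TCT": "Ser", "TCC": "Ser", "TCA": "Ser", "TCG": "Ser",
    "TAT": "Tyr", "TTT": "Phe", "TGT": "Cys",
}

def mutate(codon, pos, base):
    """Return the codon with a single-nucleotide substitution at pos."""
    return codon[:pos] + base + codon[pos + 1:]

original = "TCT"                      # codes for serine
silent   = mutate(original, 2, "C")   # TCC: still serine -> silent
missense = mutate(original, 1, "G")   # TGT: cysteine -> amino acid change

for c in (original, silent, missense):
    print(c, "->", CODON_TABLE[c])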
Redundancy – the rule, not the exception

In 1964, a knockout cross-country skier won two gold medals during the Winter Olympics in Innsbruck. In true Olympic
tradition, Eero Maentyranta's 15 km and 30 km success was surrounded by controversy. Tests showed that he had 15% more red blood cells than normal subjects, and Eero was accused of using doping to increase his level of red blood cells. Yet no trace of blood doping could be found. In 1964 nobody knew, but modern biology showed that Maentyranta carried a mutation in the gene for the EPO receptor. EPO (erythropoietin) is a messenger protein that tells the bone marrow to increase the production of red blood cells. To increase red blood cell levels, EPO binds to the EPO receptor, which generates two opposite signals: one to instruct bone marrow cells to become red blood cells (the on-switch) and one to reduce production of red blood cells (the off-switch). This auto-regulatory mechanism assures a balanced production of red blood cells. In 1993, it turned out that the Olympic medallist had a mutation that knocked out the off-switch.2 The EPO receptor of the Finnish athlete generated a normal activation signal, but not the deactivating one. People can do well without the off-switch. In humans, the muscle-fibre-producing ACTN3 gene can also be lost entirely and without consequences for fitness.3 Humans can also do without the GULO gene,4 the gene coding for caspase 12,5 the CCR5 gene6 and some of the GST genes that are involved in the detoxification of polycyclic aromatic hydrocarbons present in cigarette smoke.7 All these genes can be found inactivated in entire human populations (GULO, caspase 12) or subpopulations thereof. The Douc Langur (Pygathrix nemaeus), an Asian leaf-eating Colobine monkey, is the natural no-phenotype knockout for the angiogenin gene, which codes for a small protein that stimulates the formation of blood vessels.8 Bacterial genomes can be reduced by over 9% without selective disadvantages on minimal medium,9 and mice in which 3 megabases of conserved DNA was erased showed no signs of reduced survival and no indication of overt pathology.10 Fewer than 2% of
approximately 200 Arabidopsis thaliana (Mouse-Ear Cress) knockouts displayed significant phenotypic alterations. Many of
the knockouts did not affect plant morphology even in the presence of severe physiological defects.11 In the nematode
worm Caenorhabditis elegans a surprising 89% of single-copy and 96% of duplicate genes show no detectable phenotypic
effect when they are knocked out.12 Prion proteins are thought to have a function in learning processes, but when they are
misfolded they can cause bovine spongiform encephalopathy (BSE) or Creutzfeldt–Jakob disease. In order to make BSE-resistant cows, a knockout breed has been created lacking the prion protein. A thorough health assessment of this knockout breed revealed only small differences from wild-type animals. Apparently, cows can thrive very well without the prion protein.13 Research on histone H1 genes, once believed to be indispensable for DNA condensation, suggests that any individual H1 subtype is not necessary for mouse development, and that loss of even two subtypes is tolerated if a normal H1-to-nucleosome stoichiometry is maintained.14 Even complete, highly specialized cells can be redundant. A strain of
laboratory mouse, named WBB6F1, lacks a specific type of blood cells known as mast cells. The reported no-phenotype
knockouts are probably only the tip of the iceberg. As reported in Nature, few knockout organisms in which no phenotype could be traced ever see the light of day: 'a lot of those things [no-phenotype knockouts] you don't hear about'. No-phenotype knockouts are negative results, and as such they are usually not reported in scientific journals, because they do not have news value. To address the problem, the journal Molecular and Cellular Biology has had, since 1999, a section given over to knockout and other mutant mice that seem perfectly normal.15 So how are genes, cells and organisms supposed to have evolved without selective constraints? If organisms can do without complete cells, it would be outlandish to assert that natural selection was the driving force shaping those cells. Two decades of knockout experiments have made it clear that genetic redundancy is a major characteristic of all studied life forms.
Paradigm lost
Genetic redundancy falsifies several evolutionary hypotheses. Firstly, truly redundant genes are an evolutionary paradox, because natural selection cannot prevent the accumulation of harmful mutations in these genes. Hence, natural selection cannot prevent redundancies from being lost. Secondly, redundant genes do not evolve (mutate) any faster than essential
genes. If protein evolution is due in large part to neutral and slightly deleterious amino acid substitutions, then the incidence
of such mutations should be greater in proteins that contribute less to individual reproductive success. The rationale for this
prediction is that non-essential proteins should be subject to weaker purifying selection and should accumulate mildly
deleterious substitutions more rapidly. This argument, which was presented over twenty years ago, is fundamental to many
theoretical applications of evolutionary theory, but despite intense scientific scrutiny the prediction has not been confirmed.
In contrast, a systematic analysis of mouse genes has shown that essential genes do not evolve more slowly than non-essential ones.16 Likewise, E. coli proteins that operate in huge redundant networks can tolerate just as many mutations as
unique single-copy proteins,17 and scientists comparing the human and chimpanzee genomes found that non-functional
pseudogenes, which can be considered as redundancies, have similar percentages of nucleotide substitutions as do
essential protein-coding genes.18 Thirdly, as discussed in more detail below, several recent biology studies have provided
evidence that genetic redundancy is not associated with gene duplications.
What does the evolutionary paradigm say?
An important question that needs to be addressed is: can we understand genetic redundancy from Darwin's natural selection perspective? How can genetic redundancy be maintained in the genome without natural selection acting upon it continually? How did organisms evolve genes that are not subject to natural selection? First, let's look at how it is thought genetic redundancies arise. Susumu Ohno's influential 1970 book, Evolution by Gene Duplication, deals with this idea.19 Sometimes, during cell division, a gene or a longer stretch of biological information is duplicated. If a duplication occurs in germ-line cells and becomes heritable, the exact same gene may be present twofold in the genome of the offspring: a genetic back-up. Ohno argues that gene and genome duplications are the principal forces that drive the increasing
complexity of Darwinian evolution, referring to the evolution from microbes to microbiologists. He proposes that duplications
of genetic material provide genetic redundancies which are then free to accumulate mutations and adopt novel biological
functions. Duplicated DNA elements are not subject to natural selection and are free to transform into novel genes. With
time, he argues, a duplicated gene will diverge with respect to expression characteristics or function due to accumulated
(point) mutations in the regulatory and coding segments of the duplicate. Duplicates transforming into novel genes with a
selective advantage will certainly be favored by natural selection. Meanwhile, the genetic redundancy will protect old
functions as new ones arise, hence reducing the lethality of mutations. Ohno estimates that for every novel gene to arise through duplication, about ten redundant copies must join the ranks of functionless DNA base sequences.20 Diversification of duplicated genetic material is now the accepted standard evolutionary idea of how genomes gain useful information. Ohno's idea of evolution through duplication also provides an explanation for the no-phenotype knockouts: if genes duplicate fairly often, it is reasonable to expect some level of redundancy in most genomes, because duplicates provide an organism with back-up genes. As long as duplicates do not change too much, they may substitute for each other. If one is lost, or inactivated, the other one takes over. Hence, Ohno's theory predicts an association between genetic redundancy and gene duplication.
The evolutionary paradigm is wrong

Figure 2. A very simple scheme of a small robust network comprised of nodes A–E, where several nodes are redundant.

Some biologists have looked into this matter specifically, using the wealth of genetic data available for Saccharomyces cerevisiae, the common baker's yeast. A surprising 60% of Saccharomyces genes could be inactivated without producing a phenotype. In 1999, Winzeler and co-workers reported in Science that only 9% of the non-essential genes of Saccharomyces have sequence similarities with other genes present in the yeast's genome and could thus be the result of duplication events.21 Most redundant genes of Saccharomyces are not related to genes in the yeast's genome, which suggests that genetic duplications cannot explain genetic redundancy. In 2000, Andreas Wagner confirmed Winzeler's original findings that weak or no-effect (i.e. non-essential and redundant) genes are no more likely to have paralogous (that is, duplicated) genes within the yeast genome than genes that do result in a defined phenotype when they are knocked out. Wagner concluded that the robustness of mutant strains cannot be caused by gene duplication and redundancy, but is more likely due to the interactions between unrelated genes.22 More recent studies have shown that cooperating networks of unrelated genes contribute significantly more to robustness than gene copy number.23 Redundant genes are proposed to have originated in gene duplications, but we do not find a link between genetic redundancy and duplicated genes in the genomes. Gene duplication is not a major contributor to genetic redundancy, and it cannot explain the robust genetic networks found in organisms. The predicted association between genetic redundancy and gene duplication is non-existent. Ohno's interesting idea of evolution by gene duplication therefore cannot be right.
The non-linearity of biology
The no-phenotype knockouts can only be explained by taking into account the non-linearity of biochemical systems. It is
ironic that standard wall charts of biochemical reactions show hundreds of coupled reactions working together in networks,
while graduate students are tacitly encouraged to think in terms of linear cause and effect. The linear cause-and-effect
thinking in ancient Greek philosophy was adopted by nineteenth century European scholars, and is still dominating most
fields of science, including biology. We cannot understand genetic redundancy and biological robustness in linear terms of single causality, where A causes B causes C causes D causes E. Biological systems do not work like that. Biological systems are designed as redundant scale-free networks. In a scale-free network the distribution of node linkage follows a power law: it contains many nodes with a low number of links, fewer nodes with an intermediate number of links, and very few nodes with a high number of links. A scale-free network is very much like the Golden Orb's web: individual nodes are not essential for letting the system function as a whole. The internet is another example of a robust scale-free network: the major part of the websites make only a few links, a lesser fraction make an intermediate number of links, and a minor part makes the majority of links. Hundreds of routers routinely malfunction on the internet at any moment, but the network rarely suffers major disruptions. As many as 80% of randomly selected internet routers can fail, but the remaining ones will still form a compact cluster in which there is still a path between any two nodes.24 Likewise, we rarely notice the consequences of the thousands of errors that routinely occur in our cells.
Scale-free networks
Genes never operate alone but in redundant scale-free networks with an incredible level of buffering capacity. In a simple
non-linear biological systempresented in figure 2with nodes A through E, A may cause B, but A also causes D
independent of B and C. This very simple network of only five nodes demonstrates robustness due to redundancy of B and
C. If A fails to make the link with D, there are still B and C to make the connection. Extended networks composed of
hundreds of interconnected proteins ensure that if one network becomes inactivated by a mutation, essential pathways will
then not be shut down immediately. A network of cooperating proteins that can substitute for or bypass each other's
functions makes a biological system robust. It is hard to imagine how selection acts on individual nodes of a scale-free,
redundant system. Complex engineered systems rely on scale-free networks that can incorporate small failures in order to
prevent larger failures. In a sense, cooperating scale-free networks provide systems with an anti-chaos module which is
required for stability and strength. Scale-free genetic and protein networks are an intrinsic, engineered characteristic of
genomes and may explain why genetic redundancy is so widespread among organisms. Genetic networks usually serve to
stabilize and fine-tune the complex regulatory mechanisms of living systems. They control homeostasis, regulate the
maintenance of genomes and provide regulatory feedback on gene expression. An overlap in the functions of proteins also
ensures that a cell does not have to respond with only 'on' or 'off' in a particular biochemical process, but instead may operate somewhere in between. Most genes in the human genome are involved in regulatory networks that detect and process information in order to keep the cell informed about its environment. The proteins operating in these networks come as large gene families with overlapping functions. In a cascade of activation and deactivation of signalling proteins, external messages are transported to the nucleus with information about what is going on outside, so the cell can respond adequately. If
one of the interactions disappears, this will not immediately disturb the balance of life. The buffering capacity present in
redundant genetic networks also provides the robustness that allows living systems to propagate in time. In a linear system,
one detrimental mutation would immediately disable the system as a whole: the strength of a chain is determined by its
weakest link. Interacting biological networks, where parallel and converging links independently convey the same or similar
information, almost never fail. The Golden Orb's web only crumbles when an entire spoke is obliterated in a crash with a dragonfly, an event that will hardly ever happen. Biological systems operate like a spider's web: many interacting and interwoven nodes produce robust genetic networks and are responsible for genetic redundancy.23
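The buffering described above can be made concrete with the five-node network of figure 2. In the sketch below (link layout assumed from the figure's description: A reaches D directly and via the redundant nodes B and C, and D feeds E), knocking out either intermediate node still leaves a path from A to E:

from collections import deque

# Directed links of the small robust network in figure 2 (assumed layout)
LINKS = {"A": ["B", "C", "D"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

def reachable(src, dst, dead):
    """Breadth-first search that ignores knocked-out ('dead') nodes."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in LINKS[node]:
            if nxt not in dead and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

for knockout in ("B", "C"):
    print(f"without {knockout}: A->E =", reachable("A", "E", {knockout}))
# Both print True: B and C are redundant nodes; only A and D are essential.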
Conclusion
Genetic redundancy is an amazing property of genomes and has only recently become evident as a result of negative
knockout experiments. Protein-coding genes and highly conserved regions can be eliminated from the genome of model
organisms without a detectable effect on fitness. There is no association between redundant genes and gene duplications,
and redundant genes do not mutate faster than essential genes. Genetic redundancy stands as an unequivocal challenge to
the standard evolutionary paradigm, as it questions the importance of Darwin's selection mechanism as a major force in the evolution of genes. It is also important to realize that redundant genes cannot have resided in the genome for millions of years, because natural selection, a conservative force, cannot prevent their destruction due to debilitating mutations.
Mainstream biologists who are educated in the Darwinian framework are unable to understand the existence of genes without natural selection. This is clear from a statement in Nature a few years ago by Mario Capecchi, a pioneer in the development of knockout technology: 'I don't believe that there is a single [knockout] mouse that does not have a phenotype. We just aren't asking the right questions.'15 The right question to be asked is: is the evolutionary paradigm wrong? My answer is yes, it is. Current naturalistic theories do not explain what scientists observe in the genomes. Genetic redundancy is the actual key to help us understand the robustness of organisms and also their built-in flexibility to rapidly adapt to different environments. In part 2 of this series of articles, I will explain genetic redundancy in the context of baranomes, the multipurpose genomes baramins were originally designed with in order to rapidly spread to all the corners and crevices of the earth.
Evidence for the design of life: part 2 – Baranomes
by Peer Terborg
The major difference between the evolution and creation paradigms is that the evolutionist believes that the natural variation
found in populations can explain microbe-to-man evolution via natural selection (Darwinism), while the creationist believes it
cannot. This is because the evolutionary, naturalistic framework requires something creationists hold impossible: a
continuous addition of novel genetic information unrelated to that already existing. In the creation paradigm neither variation
nor selection is denied; what is rejected is that the two add up to explain the origin of species. In part 1, I discussed genetic
redundancy and how redundant genes are not associated with genetic duplications and do not mutate faster than essential
genes. These observations are sufficient to completely overturn the current evolutionary paradigm and could form the basis
for a novel creationist framework to help us understand genomes, variation and speciation. In this second part, I argue and
provide biological evidence that life on Earth thrived due to frontloaded baranomes: pluripotent, undifferentiated genomes
with an intrinsic ability for rapid adaptation and speciation.
Where redundancy leads
The canonical view is that most variation in organisms is the result of different versions of genes (alleles) and genetic losses. The variation Mendel studied in peas, and that led him to discover several basic inheritance laws, was the result of different alleles. At least, so it is taught. One of the seven traits Mendel described in peas was what he called the I locus; it referred to the colour of the seeds. In Mendel's jargon, I stood for dominance (yellow), whereas i meant recessive (green). Plants carrying I had yellow seeds, plants lacking I had green seeds. Mendel shed scientific light on inheritance.

Now, 140 years after Mendel's findings, we know how the yellow-green system works at the molecular level. The colour is determined by the stay-green gene (abbreviated: STG) that codes for a protein involved in the re-absorption of green pigments during senescence.1 The recessive trait i is the mutated form of the STG gene; an inactive variant that cannot re-absorb pigments, so the seeds keep their green colour. Is the STG gene essential for survival? Most likely it is not. Molecular biology shows Mendel studied the effects of non-essential and redundant genes. Dominance means at least one redundant or non-essential gene is functional; recessive means both copies of redundant and non-essential genes are defunct. In Bacillus subtilis only 270 of the 4,100 genes are essential,2 and in Escherichia coli this is a meagre 303 out of a total of almost 4,300 genes.3 Genetic redundancy is present everywhere,4 and this led me to believe that biology is quite different from what Darwinians think it is. Namely, organisms are full of genetic tools that are handy but not essential for survival, and selection cannot be involved in shaping these genes. Apparently, genomes are loaded with genetic elements that reside in the genome without selective constraints. This makes sense in the creation paradigm, because the genomes we observe today are remnants of the original genomes in the created kinds. And, apparently, they were created as pluripotent,5 undifferentiated genomes with an intrinsic ability for rapid adaptation and speciation. I have called the undifferentiated, uncommitted, multipurpose genome of a created kind a baranome.6 Baranomes explain genetic redundancy: there is no association with gene duplication, and redundant genes do not mutate faster than essential genes.4

The lack of understanding of baranomes recently led to a severe misinterpretation of the origin of genes in the secular literature. Eager to find evidence for the evolution of novel biological information, a novel de novo protein-coding gene in Saccharomyces cerevisiae was reported on the basis of genome comparison among several species of Saccharomyces. The BSC4 gene had an open reading frame (ORF) encoding a 132-amino-acid-long polypeptide. It was reported that there is no homologous ORF in any of the sequenced genomes of other fungal species, including closely related species such as S. paradoxus and S. mikatae. The sequences presented in the figure above demonstrate, however, that the BSC4 gene can be found interrupted and inactivated in S. paradoxus, S. mikatae and S. bayanus. These data confirm the baranome hypothesis, which holds that all Saccharomyces descended from one original undifferentiated genome (Saccharomyces bn) containing all information currently found in the isolated species. This alleged novel gene is in fact ancient frontloaded information that became redundant and inactive in most Saccharomyces species, but was subject to sufficient constraints to be retained in S. cerevisiae. BSC4 codes for a protein involved in DNA repair, an elaborate and integrated mechanism involving dozens of redundant systems. Therefore, it is predicted that BSC4 knockouts of S. cerevisiae will not show major problems. The top part of the figure shows the alignment of 320 base pairs of the orthologous sequences of BSC4 from Saccharomyces bayanus (S.bay), S. mikatae (S.mik), S. paradoxus (S.par) and S. cerevisiae (S.cer). The conserved nucleotides are shown in bold. (Adapted from Cai et al.34) The bottom of the figure shows how only S. cerevisiae retained an active BSC4 gene.
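How one tells an intact ORF from an 'interrupted and inactivated' one is straightforward: scan the reading frame for a premature stop codon. A minimal sketch with made-up illustrative sequences (not the actual BSC4 data):

STOPS = {"TAA", "TAG", "TGA"}

def orf_intact(seq):
    """True if the reading frame runs to its final stop codon
    without hitting a premature in-frame stop."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    return all(c not in STOPS for c in codons[:-1]) and codons[-1] in STOPS

# Hypothetical toy sequences: the second carries an in-frame stop (TAG)
print(orf_intact("ATGTCTGCTAAACGTTGA"))  # True: reading frame open to the end
print(orf_intact("ATGTCTTAGAAACGTTGA"))  # False: premature stop, gene inactivated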
The multiple genomes of Arabidopsis
In 2007, Science reported on the genome of Arabidopsis thaliana, a flowering plant of the mustard family with a small
genome that is suitable for extensive genetic studies.7 This report was of particular interest because it showed the genomes
of 19 individual plants collected from 19 different stands, ranging from sub-arctic regions to the tropics. According to a
commentary summarizing the results of this painstaking analysis, about four percent of the reference genome 'either looks very different in the wild varieties, or cannot be found at all'. Almost every tenth gene was so defective that it could not fulfil its normal function anymore! Results such as these raise fundamental questions. For one, they qualify the value of the model genomes sequenced so far. 'There isn't such a thing as the genome of a species,' says Weigel. He adds: 'The insight that the DNA sequence of a single individual is by far not sufficient to understand the genetic potential of a species also fuels current efforts in human genetics.' Still, it is surprising that Arabidopsis has such a plastic genome. In contrast to the genome of humans or many crop plants such as corn, that of Arabidopsis is very much streamlined, and its size is less than a twentieth of that of humans or corn, even though it has about the same number of genes. In contrast to these other genomes, there are few repeats or seemingly irrelevant filler sequences. 'That even in a minimal genome every tenth gene is dispensable has been a great surprise,' admits Weigel [emphases added].8

Among the 19 stands of Arabidopsis we find dramatic genetic differences. We observe genetic losses as well as genetic novelties. Although the dispensability of genes is easy to understand with respect to genetic redundancy, the observed novelties are much harder to conceive, unless we accept that the observed novelties are not novelties at all but genetic tools that have resided in the genome since the day Arabidopsis was created. The genetic novelties may simply reflect environmental constraints that have helped preserve these genetic tools. There is indeed no such thing as 'the' genome of a species, because what we observe today are rearranged and adapted genomes that were all derived from an original genome that contained all the genetic tools we find scattered throughout the population. The 'great surprise' is only a great surprise with respect to the Darwinian paradigm. With a pluripotent Arabidopsis genome in mind, the data are not surprising at all. They are in accord with what we might expect from the perspective of a rapid (re)population of the earth. Modern Arabidopsis genomes look as if they were derived from much larger genomes containing an excess of genetic elements, both coding and non-coding (repetitive) sequences, that can easily be lost, shuffled or duplicated. The dispensable genes outlined above can be understood as genetic redundancies originally present in the baranome that over time slowly but steadily fell apart in the 19 individuals, because the environment did not select for them. The study strongly suggests that isolated stands of plants originated as a result of loss of genetic redundancies, duplication and rearrangement of genetic elements. The dispensability of 10% of the genes of Arabidopsis could have been predicted, because most of the genes still present in individual genomes are redundant.9 In my opinion, these observations strongly favour the baranome hypothesis.
The law of natural preservation
Genetic redundancy, dispensable genes and disintegrating genomes are scientific novelties revealed to us by modern
biology. How can we understand all this? Darwinians hypothesize that life evolved from simple unicellular microbes to
multicellular ones via a gradual build-up of biological information, the driving force supposedly being natural selection.
According to biology, however, there is no gradual accumulation of information; biology originated from a 'big bang'. Sponges, worms, plants and man all have approximately the same genetic content, so the number of genes does not seem to be related to the complexity of organisms.10 In addition, the complex organisms we observe today were not derived from a single or a few simple organisms, but must have derived from a global community of organisms.11 The observations of modern biology pose so many untenable hurdles for naturalistic philosophy that it would be better to simply leave Darwinism for what it is: a set of falsified 19th century hypotheses that do not and cannot explain the origin of species.

The way to understand variation and speciation is through disintegration and rearrangement of primordial baranomes created with an excess of genetic elements. Baranomes initially contained all mechanisms required to quickly respond and adapt to changing environments. They provided organisms with the tools needed to invade many distinct niches, and were ideal for the swift colonization of all corners of the world. Baranomes were multifunctional genomes which can be compared to a Swiss army knife. A Swiss army knife contains many tools which are not all immediately necessary in a particular environment: some are extremely handy in the mountains, others in the woods; still others are made for opening bottles and cans, or for making a fire. Depending on where you are, you may require different sets of tools. Similarly, depending on where the organism lives, it demands different functions (i.e. protein-coding genes and their protein products) from its genome. The environment then determines what part of the non-essential genome is under constraint, and it is only this part that will be conserved. In other words, the law of natural preservation (conventionally coined 'natural selection') determines the differentiation of the pluripotent genome.12

Figure 1. Phylogeny of modern Flaveria species demonstrates independent losses of the C3 and C4 photosystems from the baranome of Yellowtops species (Flaveria bn). Some have either the C3 or the C4 photosystem, others have both C3 and C4 (or parts thereof). Isolated species are in the process of losing redundant parts of the Flaveria baranome. (Adapted from Kutschera and Niklas14.)
C3 and C4 plants
From the creation paradigm, we might expect to find more than one carbon-fixation system: for example, a system that functions optimally in warm, tropical regions, as well as systems that operate at sub-arctic temperatures. We find that plants do indeed have two photosystems for carbon fixation; they are known as C3 and C4. The optimum temperature for carbon fixation in C3 plants is between 15 and 20°C, whereas the C4 plants have an optimum around 30–40°C.13 Today many plants are either C3 plants or C4 plants, but we also find plants that have both C3 and C4. There is a clear indication of redundancy of the two photosystems, because many plants have only one of the two systems operable, either C3 or C4, plus remnants of the other system. For instance, in modern Yellowtops (Flaveria spp.) we not only see functional C3, C4 or the combination of C3 plus C4 photosystems, but we also observe C4 remnants in C3 plants.14 The presence of remnants of one of the systems qualifies as evidence for a baranome containing both photosystems, and indicates that the C4 system is not stringently preserved when the C3 system is also present (figure 1). The two frontloaded photosystems ensure a rapid colonization of both high and low altitudes, and hot and cold environments. In the tropics the C4 system, which functions optimally at high temperatures, should be active, whereas the C3 system is redundant. Here, the 'hot' system would be under permanent environmental constraint and be conserved. Due to accumulation of debilitating mutations in the genetic elements comprising the 'cold' system, the latter would rapidly disintegrate. A genetic program designed for tropical regions does not make sense in arctic regions, and vice versa. It is the organism's environment or habitat that determines whether genetic elements are useful or not. Conforming to the baranome hypothesis, the habitat determines genetic redundancy. There is no biological reason why unused, habitat-induced redundancies should be preserved. The law of natural preservation tells us that unused genes will rapidly degrade.

What baranomes contain


Baranomes are information carriers. They were frontloaded with three classes of DNA elements: essential, non-essential
and redundant. When essential elements mutate to change the amino acid sequences, the information carrier as a whole is
immediately subject to a severe reproductive disadvantage. In the worst case the mutation is incompatible with life and
mutated essential DNA elements will not be present in the gene pool of the next generation. Essential DNA elements can be
defined as biological information that is unable to evolve. Non-essential genes are genes that are allowed to mutate and
may thus contribute to allelic variations. As they produce non-lethal phenotypes, they contribute to the variation observed in
populations. Classic Mendelian genetics is largely due to variation in non-essential genes. Variation in non-essential genes
is what geneticists call alleles. Recessive Mendelian traits can usually be attributed to dysfunctional non-essential genetic
elements; in particular elements that determine expression of morphogenesis programs, including those that determine
length and shapethe morphometryof the organism. To induce the recessive trait the disrupted (or inactivated) alleles
must be inherited from both parents, because an active wild-type gene usually compensates for an inactivated gene. In
Mendel's jargon, this compensation is known as dominance. The third class of frontloaded genetic elements comprises the genes that underlie genetic redundancy. They make up a special class of non-essential genes and have only recently been
discovered. That is because their existence cannot be deduced from genetic experiments: they do not contribute to a
detectable phenotype. Their peculiarity is that redundant genes may be completely lost from the genome without any effect
on reproductive success. That redundant genetic elements make up a major part of the genome of all organisms became
evident when biologists interested in gene function developed gene-knockout strategies; with the remarkable observation
that many knockouts do not have a phenotype.4 Genetic redundancy is an intrinsic property of pluripotent baranomes. It should be noted, however, that the environment also plays a crucial role in determining whether a genetic algorithm is redundant, non-essential or essential. The pathway for vitamin C synthesis, for instance, is a diet-induced genetic redundancy which is inactive in humans, four primates, guinea pigs and fruit-eating bats as a result of two debilitating mutations.15 The law of natural preservation often dictates the course for the development of baranomes. In addition, baranomes initially contained variation-inducing genetic elements (VIGEs) that helped to induce rapid duplications and rearrangements of genetic information. Modern genomes of all organisms are virtually littered with VIGEs (which are usually referred to as remnants of retroviruses: LINEs, SINEs, Alus, transposons, insertion sequences, etc.), and due to their ability to duplicate and move genetic material they facilitate and induce variation in genomes.16
Speciation from baranomes
Variation in reproducing populations is mostly due to position effects of VIGEs. That is because the presence of VIGEs in or
near genes determines the activity of those genes and hence their expression.17 Variation is a result of a change in gene expression. In addition, VIGEs that function as chromosome swappers may also help us understand reproductive barriers. A reproductive barrier between organisms is in fact another term for speciation, the formation of novel species. 'Species' is meant here in the sense of Ernst Mayr's species concept, which includes intrinsic reproductive isolation.18 Indeed, the gene-swapping mechanism present in primordial pluripotent genomes also allowed for intrinsic reproductive isolation. If we want to understand how chromosome-swapping VIGEs are involved in speciation, we first have to look into some details of sexual reproduction. In all cells of sexually reproducing organisms the chromosomes are present as homologous pairs. One is inherited from the father and the other from the mother. The arrangement of homologous chromosomes allows them to easily pair up. Each parental chromosome recognizes the other and they easily align. The alignment is necessary for the
formation of gametes during meiosis, where the two sets of parent chromosomes are reduced to one set. Differences in
chromosome pattern impede the pairing of chromosomes at meiosis, resulting in hybrid sterility. Chromosomal
rearrangements may be one of the most common forms of reproductive isolation, allowing rapid adaptive radiation of
multipurpose genomes without the need for geographic isolation or natural selection. The activity of chromosome-swapping VIGEs may thus have produced reproductive barriers and hence facilitated speciation. If it is true that chromosomal order determines whether organisms are able to reproduce, speciation can theoretically be reversed by chromosomal adjustments. In other words, we must be able to produce viable offspring from two reproductively isolated species just by rearranging their chromosomes. This may sound like an untestable hypothesis, but experimental evidence demonstrates that it is indeed possible to 'unspeciate' distinct, reproductively isolated species by chromosomal adjustments. Using Mayr's species definition, yeasts of the genus Saccharomyces comprise six well-defined species, including the well-known baker's yeast.19 The Saccharomyces species will readily mate with one another, indicating that they stem from one single baranome
(figure 2), but pairings between distinct species produce sterile hybrids. Three of the six species are characterized by a
specific genome rearrangement known as reciprocal chromosomal translocation, which occurs when the arms of two distinct chromosomes are exchanged. Analysis of the six species revealed that translocations between the chromosomes do not correlate with the group's sequence-based phylogeny, a finding that has been interpreted to mean that translocations do not drive the process of speciation. However, a study carried out by the Institute of Food Research in
Norwich, United Kingdom, showed that the chromosomal rearrangements in Saccharomyces do indeed induce reproductive
isolation between these organisms.19 The reported experiments were designed to engineer the genome of Saccharomyces cerevisiae (baker's yeast) so as to make it collinear with that of Saccharomyces mikatae, which normally differs from baker's yeast by one or two translocations. The results showed that the constructed strains with imposed genomic collinearity allow the generation of hybrids that produce a large proportion of viable spores. Viable spores were also obtained in crosses between wild-type baker's yeast and the naturally collinear species Saccharomyces paradoxus, but not in crosses between species with non-collinear chromosomes.
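The engineering logic of the Norwich experiment can be modelled with chromosomes as ordered lists of arms. In the sketch below (toy karyotypes, not the real yeast chromosome maps), a reciprocal translocation swaps arms between two chromosomes; hybrids are treated as fertile only when the two karyotypes are collinear, and applying the same translocation restores collinearity:

def reciprocal_translocation(karyotype, chrom_a, chrom_b):
    """Swap the right arms of two chromosomes (each: [left_arm, right_arm])."""
    k = [list(c) for c in karyotype]
    k[chrom_a][1], k[chrom_b][1] = k[chrom_b][1], k[chrom_a][1]
    return k

def collinear(k1, k2):
    """Hybrid spores are viable only if chromosomes can pair arm-for-arm."""
    return k1 == k2

cerevisiae = [["1L", "1R"], ["2L", "2R"]]             # toy 'baker's yeast'
mikatae = reciprocal_translocation(cerevisiae, 0, 1)  # differs by one swap

print(collinear(cerevisiae, mikatae))   # False: hybrids expected to be sterile

engineered = reciprocal_translocation(cerevisiae, 0, 1)  # impose collinearity
print(collinear(engineered, mikatae))   # True: viable spores expected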
Figure 2. Left panel: Adaptive radiation from one single pluripotent baranome. The figure shows a hypothetical model for the
radiation of the Saccharomyces bn into the six Saccharomyces species we observe today. Initially, the uncommitted
pluripotent baranome radiated in all possible directions. Due to intrinsic mechanisms, variation was constantly generated, but this slowed down over time because of the redundant character of the variation-inducing genetic elements (which were easily lost). Speciation may occur when a reproductive barrier is thrown up, for instance as the result of chromosomal rearrangements. Genetic elements that facilitate variation are specified in the genome, and there is no need for the millions of years that are required for Darwinian evolution. This is clear from the long-running (20 years) evolutionary experiments which show that the major adaptive changes occurred during the first 2 years.35 Right panel: Hypothetical time courses for the total amount of information in a baranome (black line) and the number of species derived from that baranome (red line). Over time there is a tendency to lose biological information as the number of species increases.

This is empirical proof that a reproductive barrier between species can be reversed just by reconfiguration of their chromosomes. In addition to reciprocal chromosomal translocations, many small-scale genomic rearrangements involving the amplification and transposition of VIGEs may cause reproductive isolation. VIGEs are thus basic to understanding variation and speciation of baranomes. Modern biology demonstrates that although the six species of Saccharomyces yeasts are all derived from one single baranome, their individual karyotypes20 determine whether they can interbreed and leave offspring.

That the karyotype is an important determinant of reproductive isolation is also observed in deer. Eight species of Asian deer of the genus Muntiacus inhabit an area spreading from the high mountains of the Himalayas to the lowland forests of Laos and Cambodia. Their karyotypes differ dramatically; chromosome number varies from a low of only three pairs to a high of 23.21 The muntjac species demonstrate that individuals that differ substantially by chromosomal reorganizations of otherwise identical genetic material will invariably be sterile. The sterility of muntjac hybrids is exclusively due to the inability of the chromosomes to pair. The chromosomes of distinct species simply cannot form pairs, and formation of viable reproductive cells is impossible. The karyotype accounts for reproductive isolation, and the baranome hypothesis leaves room for speciation events through adaptive radiation.
Identification of baranomes
How do we identify whether organisms descended from one primordial multipurpose genome? Darwinians claim a
continuum between genomes of distinct species and view all modern species as transition stages, so this question is not of
particular interest. For micro-organisms this may well be true. Between bacteria, the exchange of biological information is
common and for this purpose they possess elaborate mechanisms to facilitate the uptake of foreign DNA from the
environment. Still, over 5,000 distinct bacteria have been scientifically described, indicating distinctive borders between bacterial species.22 Likewise, the biological facts show that in higher organisms there are distinctive borders between genomes; borders determined by reproductive barriers. For instance, humans and chimps both have comparable genomic content, but very distinctive karyotypes, so the species cannot reproduce with each other. Therefore, the question raised above is not easy to answer. Because genomes tend to continuously lose unused genetic information over time, genomic content may not be suitable to identify common descent from the same primordial baranome. A first indication that two distinct species have descended from the same baranome is their ability to mate. The offspring does not have to be fertile; neither does it have to be viable at birth. Zygote formation is a significant indication that the organisms were derived from the same baranome. The best tool for baranome identification currently available, however, may be indicator genes. Indicator genes are essential genes with a highly specific marker. In the human baranome (Homo bn) we indeed observe indicator genes, such as FOXP2,23 and HAR1F.24 Both genes are also present in primates, but in humans they have highly specific characteristics not found in primates, indicating that human genomes stem from a distinct baranome. Specific characteristics typify humans. A comparative analysis of indicator genes in primates is sufficient to discriminate between the human and chimpanzee (Pan bn) baranomes, or to establish whether ancient bones belong to the human baranome. Recent research shows that indicator genes may indeed be a promising tool for baranome detection. Analyses of ancient Neandertal DNA revealed typical human FOXP2 characteristics.25 This observation is compelling evidence that both modern humans and Neandertals originate from one and the same baranome. Further research is required to develop a full range of baranome indicator genes for other organisms.
Darwin revisited
From the baranome hypothesis we can begin to understand how Africa's Rift Valley lakes became populated with hundreds of species of Cichlids within a mere few thousand years. We can also understand the origin of dozens of (sub)species of woodpeckers, crows, finches, ducks and deer. And we begin to see how wings could develop many times over in stick insects.26 We also understand why two distinct sex systems operate in Japanese wrinkled frogs (Rana rugosa),27 and why Dictyostelium has genetic programs for both sexual and asexual reproduction.28 And we begin to see why ancient trilobites radiated so rapidly.29 The required genetic programs (Dictyostelium's sexual reproduction program makes up over 2,000 genes!) did not have to evolve step-by-step under the guidance of natural selection. Rather, these programs were a dormant frontloaded part of the baranome and only required a wake-up call. If Darwin had had the knowledge of 21st century
biology, I believe his primary conclusion would be similar to what I propose: limited common descent through adaptive
radiation from pluripotent and undifferentiated baranomes. The limits to common descent are determined by the elements
that have been frontloaded into that baranome. Natural varieties of sexually reproducing organisms can be established by means of differential reproductive success, but reversal to wild types follows as soon as selective constraints are relieved and hybridization between previously isolated populations occurs. Hybridization is, in fact, nothing but reversal to a more original multipurpose genome; the wild type is the more stable (robust) form of the baranome because it contains more redundancies. Reversal to the wild type was a known principle in Darwin's day, but Darwin dismissed the obvious and invented his own naturalistic biology: 'To admit this view [of unstable created species] is, as it seems to me, to reject a real for an unreal, or at least for an unknown, cause.' Darwin rejected the baranome for theological, or at best for philosophical, reasons. Why would a flexible, highly adaptable, pluripotent genome present in primordial creatures make the work of the designer 'mere mockery and deception'? A pluripotent genome with an intrinsic propensity to rapidly respond to changing situations elegantly explains the co-adaptations of organic beings to each other and to their physical conditions. Organisms that cannot adapt or, in other words, organisms that lose their evolvability, are bound to become extinct. The baranome hypothesis with frontloaded VIGEs is sufficient to explain what Darwin observed, and there is no need to invoke a gradual and selection-mediated evolution from microbe to man, which is non-existent and not in accord with scientific observations anyway. In every generation, VIGE activity generates novel genetic contexts for pre-existing information and hence gives rise to novel variation. VIGEs were an intrinsic property of baranomes, and they are the source of variation and adaptive radiation. It must be emphasized that, because all elements that induce variation are already in the genome, there is no need for the millions of years required for Darwinian evolution.
Conclusion and perspective

The findings of modern biology show that life is quite different from that predicted by the evolutionary paradigm. Although
the evolutionary paradigm assumes an increase of genetic information over time, the scientific data show that an excess of biological information is present even in the simplest life forms, and that we instead observe genetic losses. A straightforward conclusion therefore should be that life on Earth thrived due to frontloaded baranomes: pluripotent, undifferentiated genomes with an intrinsic ability for rapid adaptation and speciation. Baranomes were genomes that contained an excess of genes and variation-inducing genetic elements, and the law of natural preservation shaped individual populations of genomes according to what part of the baranome was used in a particular environment.

With so many genomes sequenced and an ever-increasing knowledge of molecular biology, we will find more and more evidence to support the baranome hypothesis. We will increasingly recognize traces and hints of frontloaded information still present in the genomes of modern species. We may expect that the genomes of the descendants of the same multipurpose genome independently lost redundant genetic elements. We may expect to find impoverished genomes, and also reproductively isolated populations at different latitudes to be highly distinct with respect to their genomic content. We may even be able to piece together the genomic content of the original multipurpose genome of these species simply by adding up all the unique genetic elements present in the entire population. Finally, it will be possible to detect indicator genes, such as FOXP2,
which may become genetic tools for establishing the borders between distinct baranomes. Frontloaded baranomes are an
important tool to help us understand biology. I believe there is grandeur in this view of life, where the Great Omnipotent
Designer chose to breathe life into a limited number of undifferentiated, uncommitted, pluripotent baranomes; and from
these baranomes all of the earth was covered with an almost endless variety of the most beautiful and wonderful
creatures.31
Karyotype rearrangements
In 1970, Neil Todd developed the karyotypic fission hypothesis (KFH)32 to correlate the physical appearance of chromosomes with the evolutionary history of mammals. Todd postulated wholesale fission of all medio-centric chromosomes. Todd's fast-track, single-event genome rearrangement is still the most parsimonious theory to account for mammalian karyotypes and potentially explains rapid speciation events. Todd's hypothesis was rejected mainly because it postulated something opposing the dominant Darwinian paradigm.

In 1999, Robin Kolnicki revived Todd's KFH. Although her kinetochore reproduction hypothesis33 was largely theoretical, each step had a known cellular or molecular mechanism. During DNA replication, just before meiotic synapsis and sister chromatid segregation, the formation of an extra kinetochore on all chromosomes is facilitated. The kinetochore is the organizing centre that holds the sister chromatids together during meiosis and is composed mainly of repetitive DNA sequences. The freshly added kinetochores do not disrupt the distribution of chromosomes to daughter cells during meiosis, because tension-sensitive checkpoints operate to prevent errors in chromosome segregation. The result is a new cell with twice the number of telocentric chromosomes.10 The duplication of the kinetochores on many chromosomes at the same time is highly unlikely in a naturalistic model, but the telocentric chromosomes of the rhinoceros, the rock wallaby and many other species are physical evidence that their genomes were formed instantly.

I postulate that the genomes, as we observe them today, are the result of thousands of years of rearrangements (fission, fusion and duplications) brought about by specific variation-inducing genetic elements (VIGEs). Initially, well-controlled rearrangements may have been facilitated by these elements, but over time the control over regulated genome rearrangement deteriorated. VIGEs may be the genetic basis to help us understand wholesale genomic rearrangements from pluripotent baranomes. In order to rapidly occupy novel niches, a mechanism or ability to create reproductive barriers may have been intrinsic to baranomes. The ability to adapt, including speciation events, is merely due to neutral rearrangements of chromosomes, and the VIGEs involved may easily become inactive because of the permanent accumulation of debilitating mutations. The remnants of VIGEs can still be found in contemporary genomes; they are known as (retro)(trans)posons, LINEs, SINEs, Alu insertion sequences, etc. Some VIGEs may have started a life of their own and now jump around more or less uncontrolled.
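The doubling step in the fission scenario can be pictured with a minimal sketch. The toy Python model below is my own illustration of the idea described above, not code from Todd's or Kolnicki's papers: each medio-centric (two-armed) chromosome is treated as a pair of arms and splits at the centromere into two telocentrics, which is why fission doubles the chromosome count. The chromosome names are invented.

# Toy model of karyotypic fission: every two-armed chromosome splits
# into two one-armed (telocentric) chromosomes in a single event.
# Names and counts are illustrative, not a real karyotype.

def fission(karyotype):
    """Split each medio-centric chromosome into two telocentrics."""
    telocentrics = []
    for p_arm, q_arm in karyotype:
        telocentrics.append((p_arm,))   # one telocentric per arm
        telocentrics.append((q_arm,))
    return telocentrics

ancestor = [("1p", "1q"), ("2p", "2q"), ("3p", "3q")]  # hypothetical ancestor
descendant = fission(ancestor)
print(len(ancestor), "->", len(descendant))  # 3 -> 6: chromosome number doubles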

The design of life: part 3 - an introduction to variation-inducing genetic elements


by Peer Terborg
The inheritance of traits is determined by genes: long stretches of DNA that are passed down from generation to generation. Usually, genes consist of a coding part and a non-coding regulatory part. The coding part of the gene determines the functional output, whereas the non-coding portion contains switches and units that determine when, where and how much of the functional output should be generated. Point mutations in the coding part are predominantly neutral or slightly detrimental genetic noise that accumulates in the genome, whereas point mutations in the regulatory part of DNA units can induce variation with respect to the amount of output. Previously, in part 2, I argued that created kinds were frontloaded with baranomes: that is, pluripotent genomes with an ability to induce variation from within. The output of (morpho)genetic algorithms present in the baranome can readily be modulated by variation-inducing genetic elements (VIGEs). VIGEs are frontloaded genetic elements normally referred to as endogenous retroviruses, insertion sequences, LINEs, SINEs, microsatellites, transposons, and the like. In the present report, these transposable and repetitive DNA sequences are redefined as VIGEs, which solves the RNA virus paradox.
The variation that Darwin saw in pigeons can be explained by the activation or deactivation of existing genetic sequences for feather production in different parts of the body. This gives no basis for asserting that pigeons could change into something which is not a pigeon.

In order to fight off invading bugs and parasites, higher organisms have an elaborate mechanism that induces variation in immunological defence systems. One particular type of immune cell (B cells) produces defence proteins known as immunoglobulins. Immunoglobulins are very sticky; they bind to intruders as biological tags and mark them as alien. Other cells of the immune system then recognize the intruder, and a destruction cascade is activated. To have a tag available for every possible alien intruder, millions of B cells have their own highly specific gene for immunoglobulin production. In the genome there is only limited storage space for biological information, so how can there be millions of genes? Well, there aren't.

Immunoglobulin genes are assembled from several pre-existing DNA sequences that can be independently put together.
The part of the immunoglobulin that does the alien recognition contains several domains which are each highly variable.
Every single B cell forms a unique immunoglobulin gene by picking from several short pre-existing DNA sequences. We also
observe that later generations of immunoglobulins are more specific than the earlier generations, in the sense that they bind
more tightly to invading microorganisms. Binding affinity to an invader is equivalent to recognition of that invader. And the
better the immune system is able to recognize an intruder, the better it is able to clear it. The increased specificity is due to
somatic mutations deliberately introduced in the genes of the immunoglobulins. A mechanism to rapidly induce mutations in
immunoglobulin genes is present in the B cell genome. This mechanism ensures that the recognition pattern specified by
the genes becomes increasingly specific for the intruder. This ability to recognize and defeat all potential microorganisms is
characteristic of the immune systems of higher organisms, including humans. The genomes contain all the necessary
biological information required to induce variation from within. A flexible genome is required to effectively ward off diseases
and parasitic infections. B cells don't wait for mutations to happen; they generate the necessary mutations themselves.
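The combinatorial arithmetic behind this assembly is easy to sketch. The Python snippet below uses rough, illustrative segment counts (approximate textbook figures for the human heavy-chain and kappa light-chain loci, not exact values) to show how a few dozen pre-existing DNA segments combine into roughly a million distinct immunoglobulins before somatic mutation is even considered.

# Back-of-envelope V(D)J arithmetic: millions of distinct immunoglobulins
# from a few hundred pre-existing gene segments. Segment counts are rough
# textbook figures, used purely for illustration.

V_H, D_H, J_H = 40, 23, 6   # heavy-chain variable, diversity, joining segments
V_L, J_L = 40, 5            # kappa light-chain variable and joining segments

heavy = V_H * D_H * J_H     # distinct heavy chains from combination alone
light = V_L * J_L           # distinct light chains
print(heavy * light)        # ~1.1 million antibodies before somatic mutation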
Darwin revisited
Previously, in part 2,1 I argued that organisms are equipped with flexible, highly adaptable, pluripotent, multipurpose genomes. Organisms are able to conquer the world through adaptive radiation of baranomes. But how do baranomes unleash information? Do organisms have to wait for selectable mutations to occur in order to rapidly invade and occupy novel ecological niches? Or were the baranomes of created kinds equipped with mechanisms to rapidly induce mutations, similar to the variation generated by B cells? Let's turn to Darwin's The Origin of Species, where we will find some clues. Darwin wrote quite extensively on variation, and in particular on the variation of feather patterns in pigeons:
Box 1. Common names of some well-known variation-inducing genetic elements (VIGEs) in prokaryotes (bacteria) and eukaryotes (yeast, plants, insects and mammals).

"Some facts in regard to the colouring of pigeons well deserve consideration. The rock-pigeon is of a slaty-blue, and has a white rump (the Indian sub-species, C. intermedia of Strickland, having it bluish); the tail has a terminal dark bar, with the bases of the outer feathers externally edged with white; the wings have two black bars; some semi-domestic breeds and some apparently truly wild breeds have, besides the two black bars, the wings chequered with black. These several marks do not occur together in any other species of the whole family. Now, in every one of the domestic breeds, taking thoroughly well-bred birds, all the above marks, even to the white edging of the outer tail-feathers, sometimes concur perfectly developed. Moreover, when two birds belonging to two distinct breeds are crossed, neither of which is blue or has any of the above specified marks, the mongrel offspring are very apt suddenly to acquire these characters; for instance, I crossed some uniformly white fantails with some uniformly black barbs, and they produced mottled brown and black birds; these I again crossed together, and one grandchild of the pure white fantail and pure black barb was of as beautiful a blue colour, with the white rump, double black wing-bar, and barred and white-edged tail-feathers, as any wild rock pigeon! We can understand these facts, on the well-known principle of reversion to the ancestral characters, if all the domestic breeds have descended from the rock-pigeon."2
Darwin argues, and correctly so, that all domestic pigeon breeds have descended from the rock-pigeon. He even knew, as demonstrated above, how to breed the rock-pigeon from several distinct pigeon races following a breeding pattern. Darwin describes a breeding algorithm for pigeons, to obtain the ancestor of all pigeons! But does he also describe an algorithm for breeding turkeys from pigeons? No. Darwin knew of no such algorithm. If he had found an algorithm for breeding ducks or magpies from pigeon genomes, he would have had solid evidence in favour of his proposal On The Origin of Species Through the Preservation of Favoured Races. His breeding experiments led him to discover the principle of reversion to ancestral characters, but contrary to common Darwinian wisdom, it is also the falsifying observation to his proposal for the origin of species. The observation that pigeons bring forth pigeons, and nothing else but pigeons, is not exactly the evidence needed to argue for the common descent of all birds. On the contrary! Darwin's breeding experiments demonstrated that a pigeon is a pigeon is a pigeon. Characteristics and traits within single species of pigeons may vary tremendously, but he always started and ended with pigeons. Breeding experiments have always shown, without exception, that novel and distinct bird species do not arise through artificial selection. Even Darwin argues that there is no doubt that all varieties of ducks and rabbits have descended from the common wild duck and rabbit.3 From the variation Darwin observed in wild and domesticated populations, it does not follow that rabbits and ducks have some hypothetical common ancestor in a fuzzy distant past. Darwin observed inborn, innate variation that already existed in the genomes of the pigeons; it only had to be activated or expressed.

From the excerpt above, we may even get an impression of how it works. A genetic algorithm for making feathers (a feather program) is part of the pigeon's genome and is present in every single cell. The feather program is present in billions of pigeon cells, but it is NOT active in all those cells. Feathers are only formed when the program is activated. The feather program is silent in cells where it should normally not operate. Activation of the feather program in the wrong cells may often be incompatible with life, but sometimes it may produce pigeons with (reversed) feathers on the feet. The program may be derepressed or activated through a mechanism that operates in the pigeon's genome. Whether feathers appear on the feet or on the head, and whether they appear normal or reversed, is merely a matter of activation and regulation of the feather program. But Darwin didn't know about silent genomic programs or how they could become active. He didn't know about gene regulation and molecular switches. Darwin did not know anything about genes and genomes.
Analogous variation
The idea that Darwin had been working on for over two decades prior to the publication of Origin, his idée fixe, was how organic change (i.e. variation) present in populations might explain how novel species came into being. Unchanging, stable species is not what Darwin had in mind. He pondered the riddles of variation; he thought about laws and principles associated with the process of variation and believed he could disclose them by the study of the formation of new breeds. Drawing from what he knew about pigeon breeding and equine varieties, Darwin describes some of his ideas about the laws of variation in chapter five of Origin:

"Distinct species present analogous variations; and a variety of one species often assumes some of the characters of an allied species, or reverts to some of the characters of an early progenitor. These propositions will be most readily understood by looking to our domestic races. The most distinct breeds of pigeons, in countries most widely apart, present sub-varieties with reversed feathers on the head and feathers on the feet, characters not possessed by the aboriginal rock-pigeon; these then are analogous variations in two or more distinct races."4

Darwin describes that the exact same traits can appear in distinct breeds of pigeons and, importantly, these traits appeared independently in countries most widely apart. If several breeds arrive at the same characteristics independently, it is unlikely they do so by chance. Rather, the pigeon genomes may activate or derepress the same feather program independently. The effect is that distinct breeds in countries most widely apart acquire the same characteristics. Over and over the same traits appear in separated populations of organisms as the result of mutations from within. Animal breeders like exuberant patterns and rarities; that is exactly what they are looking for to select. Aberrant traits that are normally under stringent negative selection, as might be the case for the pigeons' reversed feathers, may readily become visible as soon as the selective pressure is relieved; that is, when organisms are reared and fed in the protective environment of captivity. Darwin called this phenomenon of independent acquisition of the same traits analogous variation. It is a common phenomenon well known to breeders, and Darwin easily found more examples of analogous variation:

"The frequent presence of fourteen or even sixteen tail-feathers in the pouter, may be considered as a variation representing the normal structure of another race, the fantail. I presume that no one will doubt that all such analogous variations are due to the several races of the pigeon having inherited from a common parent the same constitution and tendency to variation, when acted on by similar unknown influences. In the vegetable kingdom we have a case of analogous variation, in the enlarged stems, or roots as commonly called, of the Swedish turnip and Ruta baga [sic] plants which several botanists rank as varieties produced by cultivation from a common parent: if this be not so, the case will then be one of analogous variation in two so-called distinct species; and to these a third may be added, namely, the common turnip. According to the ordinary view of each species having been independently created, we should have to attribute this similarity in the enlarged stems of these three plants, not to the vera causa of community of descent, and a consequent tendency to vary in a like manner, but to three separate yet closely related acts of creation."5

Analogous variation originates in the genome. Through rearrangement and/or transposition of DNA elements, previously silent (cryptic) traits can be activated. The underlying molecular mechanism can't be merely random; if it were, then Darwin, and other breeders, would not have observed the expression of the same traits independently of each other. A more contemporary translation of analogous variation would be nonrandom (or non-stochastic) variation, and it implies some sort of mechanism.
Reversions
In the excerpt above, Darwin also describes what he calls reversions. By this term he meant traits that are present in ancestors, then disappear in first-generation offspring, and then reappear in subsequent generations. Darwin acknowledged that unknown laws of inheritance must exist, but still he talks about the proportion of blood. Reversions are easily explained as traits present on separate chromosomes, and the inheritance of such traits is best understood from Gregor Mendel's inheritance laws. Through his discovery of the genetic laws that underlie the inheritance of traits associated with chromosome segregation (a hallmark of sexual reproduction), Mendel gave us a quantum theory of inheritance. He found that traits are always inherited in well-defined and predictable proportions, and do not just come and go. Darwin's reversions are traits that reappear in later generations due to the inheritance of the same genes (alleles) from both parents.5 Darwin didn't know about Mendel's laws of inheritance, nor did he know how variation is generated in genomes. What Darwin described in Origin, however, is that variation in offspring is a rule of biology. What Darwin described in isolated species (whether domesticated breeds or island-bound birds) was the result of a burst of abundant speciation resulting from multipurpose genomes. Variant breeds of pigeons are the phenotypes of a rearranged multipurpose pigeon genome. The Galápagos finches (with their distinct beaks and body sizes) are the phenotypes of a rearranged multipurpose finch genome. Where does the variation stem from in populations of Galápagos finches? Darwin was well aware of the profound lack of knowledge on the origin of variation, and did not exclude mechanisms or laws to drive biological variation:

"I have hitherto sometimes spoken as if the variations so common and multiform in organic beings under domestication, and in a lesser degree in those in a state of nature had been due to chance. This, of course, is a wholly incorrect expression, but it serves to acknowledge plainly our ignorance of the cause of each particular variation."6

Since Darwin's days, almost all corners of the living cell have been explored and our biological knowledge has expanded greatly. Through a vast library of data generated by new research in biology, we now have the answers to many questions of a biological nature that had puzzled Darwin. We may also have the answer to the cause of each particular variation, although we may not be aware of it (yet). That is not because it is hidden between billions of other books and hard to find. No, it is because of the Darwinian paradigm. The mechanism(s) that drive biological variation have been elucidated but are not yet recognized as such.

One of the findings of the new biology was that the DNA of most (if not all) organisms contains jumping genetic elements. The mainstream opinion is that these elements are the remnants of ancient invasions of RNA viruses. RNA viruses are a class of viruses that use RNA molecule(s) for information storage. Some of them, such as influenza and HIV, pose an increasing threat to human health. Are virus invasions responsible for all the beautiful intricate complexity of organic beings? Is a virus a creator? Most likely it is not. Otherwise, why would we pump billions of research dollars into research to fight off viruses? Could it be that mainstream science is mistaken?
The RNA virus paradox
Here is one good reason for believing that mainstream science is indeed mistaken: the RNA virus paradox. It has been proposed that these RNA viruses have a long evolutionary history, appearing with, or perhaps before, the first cellular life forms.7 Molecular genetic analyses have demonstrated that genomes, including those of humans and primates, are riddled with endogenous retroviruses (ERVs), which are currently explained as the remnants of ancient RNA virus invasions. RNA virus origin can be estimated using homologous genes found in both ERVs and modern RNA virus families. By using the best estimates for rates of evolutionary change (i.e. nucleotide substitution) and assuming an approximate molecular clock,8,9 the families of RNA viruses found today could only have appeared very recently, probably not more than about 50,000 years ago.10 These data imply that present-day RNA viruses may have originated much more recently than our own species. The implication of a recent origin of RNA viruses together with the presence of genomic ERVs poses an apparent paradox that has to be resolved. I will argue that, in order to resolve the paradox, we should abstain from the mainstream idea that ERVs are remnants of ancient RNA virus invasions.
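The general form of such a molecular-clock calculation can be sketched in a few lines. The substitution rates and genetic distance in the Python fragment below are illustrative placeholders spanning the kind of rates reported for RNA viruses, not the published figures from the cited studies; the point is only the shape of the reasoning, time = distance / (2 x rate).

# Molecular-clock arithmetic: two lineages separated by genetic distance d
# (substitutions per site) under rate r (substitutions per site per year)
# diverged roughly t = d / (2r) years ago. Numbers below are illustrative.

def divergence_time(d, r):
    return d / (2 * r)

for r in (1e-3, 1e-4, 1e-5):          # a plausible span of RNA-virus rates
    t = divergence_time(0.5, r)       # e.g. lineages differing at half their sites
    print(f"rate {r:.0e}/site/yr -> split ~{t:,.0f} years ago")
# Even at the slowest rate shown, the split dates to ~25,000 years: the style
# of reasoning behind a very recent origin for RNA virus families.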
Solving the RNA paradox can only be accomplished by asking questions. First, we have to ask ourselves: what do scientists mean when they refer to genetic elements as endogenous retroviruses (ERVs)? In addition, we have to ask: how do ERVs behave, and what, if any, are their functions? ERVs have been extensively studied in microorganisms, such as baker's yeast (Saccharomyces cerevisiae) and the common gut bacterium Escherichia coli. Most of our knowledge on the mechanisms of transposition of ERVs comes from those two organisms. In yeast, the ERV known as Ty is flanked by long terminal repeats and specifies two genes, gag and pol, which are similar to genes found in free-operating RNA viruses. This is the main argument why scientists believe RNA viruses and ERVs are evolutionarily closely related. The long terminal repeats enable the ERV to insert into the host's DNA. The transposition and integration is a stringently regulated process and seems to be target- or site-specific.11,12 During the transposition of an ERV, the host's RNA polymerase II makes an RNA template, which is polyadenylated to become messenger RNA. The gag and pol mRNAs are translated and cleaved into several individual proteins. The gag gene specifies a polyprotein that is cleaved into three proteins, which form a capsid-like structure surrounding the ERV's RNA. We may ask here: why is a capsid involved? It should be noted that single-stranded RNA molecules are very sticky nucleotide polymers, and the capsid may prevent the ERV from sticking in wrong places. The capsid may also be required to direct the ERV to the right spots in the genome. The pol polyprotein is cleaved into four enzymes: protease, reverse transcriptase, RNase and integrase. Protease cleaves the polyproteins into the individual proteins, and then the RNA and proteins are packed into a retrovirus-like particle. Reverse transcriptase forms a single-stranded DNA molecule from the ERV RNA template, whereas RNase removes the RNA. The DNA is then circularized and the complementary DNA strand is synthesized to create a double-stranded, circular copy of the ERV, which is then integrated into a new site in the host's genomic DNA by integrase activity. This intricate mechanism for transposition of ERVs seems to be irreducibly complex (and thus a sign of intelligent design) since all ERVs and RNA viruses use the same or similar genetic components.
Variation-inducing genetic elements (VIGEs)
What can the function, if any, of ERVs be? If we follow the mainstream opinion, ERVs integrated into the genomes a very long time ago as viral infections. Currently, ERVs are not particularly helpful. They merely hop around in the genome as selfish genetic elements that serve no function in particular. They are mainly upsetting the genome. Long ago, however, RNA viruses are alleged to have significantly contributed to evolution by helping to shape the genome. It is hard to imagine this story to be true, and not only because of the RNA virus paradox. Modern viruses usually do not integrate into the DNA of germ-line cells; that is, the genes of an RNA virus don't usually become a part of the heritable material of the infected host. If we obey the uniformitarian principle, we are allowed to argue: what currently doesn't happen didn't happen a long time ago, either.

To answer the question raised above, we must start by finding out more about some biological characteristics of a less complicated jumping genetic element, the so-called insertion sequence (IS) element. IS elements are DNA transposons abundantly present in the genomes of bacteria. IS elements share an important characteristic with ERVs: transposition. Genome shuffling takes place in bacteria so frequently that we can hardly speak of a specific gene order. The shuffling of pre-existing genetic elements may unleash cryptic information instantly as the result of position effects. Shuffling seems to be an important mechanism to generate variation. But what is the mechanism for genome shuffling? The answer to this question comes unexpectedly from evolutionary experiments, in which genetic diversity (evolutionary change) was determined between reproducing populations of E. coli. During the breeding experiment, which ran for two decades, it was observed that the number and location of IS (insertion sequence) elements dramatically changed in evolving populations, whereas point mutations were not abundant.13 After 10,000 generations of bacteria, the genomic changes were mostly due to duplication and transposition of IS elements. A straightforward conclusion would thus be that jumping genetic elements, such as the IS elements, were designed to deliberately generate variation, variation that might be useful to the organism. In 2004, Lenski, one of the co-authors of the studies, demonstrated that the IS elements indeed generate fitness-increasing mutations.14 In E. coli, IS elements activate cryptic (or silent) catabolic operons: a set of genetic programs for food digestion. It has been reported that IS element transposition overcomes reproductive stress situations by activating cryptic operons, so that the organism can switch to another source of food. IS elements do so in a regulated manner, transposing at a higher rate in starving cells than in growing cells. In at least one case, IS elements activated a cryptic operon during starvation only if the substrate for that operon was present in the environment.15

It is clear that in Lenski's experiments, IS elements did not evolve overnight. Rather, the IS elements reside in the genome of the original strain. During the two decades of breeding, the IS elements duplicated and jumped from location to location. There was ample opportunity to shuffle genes and regulatory sequences, and plenty of time for the IS elements to integrate into genes or to simply redirect regulatory patterns of gene expression. Microorganisms may thus induce variation simply by shuffling the order of genes and putting old genes in new contexts: variation through position effects that can be inherited and propagated in time. It is hardly an exaggeration to state that jumping genetic elements specified by the bacterium's genome generated the new phenotypes.

Transposition of IS elements is mostly characterized by local hopping, meaning that novel insertions are usually in the proximity of the previous insertion; this may be a more-or-less random phenomenon, since the site of integration isn't sequence-dependent. Bacteria have a restricted set of genes and they divide almost indefinitely. Therefore, sequence-dependent insertion and stringent regulation of transposition may not be required for IS-induced reshuffling of bacterial genomes; in a population of billions of microorganisms all possible chromosomal rearrangements may occur due to stochastic processes. In higher organisms the order of genes in the chromosomes is more important, but there is no reason to exclude jumping genetic elements as a factor affecting the expression of genetic programs through position effects. Transposable elements may therefore be a class of variation-inducing genetic elements (VIGEs) in higher organisms. Indeed, ERVs, LINEs and SINEs resemble IS elements in bacteria in that they are able to transpose. In fact, these elements may be responsible for a large part of the variability observed in higher organisms and may even be responsible for adaptive phenotypes. The genomic transposition of VIGEs is not just a random process. As observed for Ty elements in yeast, integration of all VIGEs may originally have been designed as site- or sequence-specific. It should be noted that VIGEs might qualify as redundant genetic elements, of which the control over translocation may have deteriorated over time.
VIGEs in humans
Mobile genetic elements make up a considerable part of the eukaryotic genome and have the ability to integrate into the genome at a new site within their cell of origin. Mobile genetic elements of several classes make up more than one third of the human genome.

Human endogenous retroviruses (ERVs) are, as with yeast ERVs, first transcribed into RNA molecules as if they were genuine coding genes. Each RNA is then transformed into a double-stranded RNA-DNA hybrid through the action of reverse transcriptase, an enzyme specified by the retrotransposon itself. The hybrid molecule is then inserted back into the genome at an entirely different location. The result of this copy-paste mechanism is two identical copies at different locations in the genome. More than 300,000 sequences that classify as ERVs have been found in the human genome, which is about 8% of the entire human DNA.16

Figure 1. Variation-inducing genetic elements (VIGEs) are found throughout all biological domains, ranging from bacteria to mammals. In yeast, insects and mammals we observe similar designs. (Homologous sequences are indicated by the same colour.)
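A toy model makes the copy-paste logic concrete. In the minimal Python sketch below (sequences and the insertion site are arbitrary placeholders, not real ERV sequences), the original element stays where it is and a fresh copy lands elsewhere, so a single round of retrotransposition leaves two identical copies.

# Toy illustration of the copy-paste mechanism described above: the element
# is transcribed, reverse transcribed, and reinserted elsewhere, so one
# round leaves two identical copies at different genomic locations.

def retrotranspose(genome: str, element: str, site: int) -> str:
    """Insert a fresh copy of `element` at `site`; the original stays put."""
    return genome[:site] + element + genome[site:]

erv = "GAGPOL"                          # stands in for a full ERV sequence
genome = "AAAA" + erv + "TTTTCCCCGGGG"  # one ERV copy to begin with
genome = retrotranspose(genome, erv, site=14)
print(genome.count(erv))                # 2: copy-paste, not cut-and-paste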
Long terminal repeat (LTR) retrotransposons are transcribed into RNA, then reverse transcribed into an RNA-DNA hybrid and reinserted into the genome. LTR retrotransposons and retroviruses are very similar in structure. Both contain gag and pol genes (figure 1), which encode a viral particle coat (GAG), reverse transcriptase (RT), ribonuclease H (RH) and integrase (IN). These genes provide proteins for the conversion of RNA into complementary DNA and facilitate insertion into the genome. Examples of LTR retrotransposons are human endogenous retroviruses (HERVs). Unlike RNA retroviruses, LTR retrotransposons lack the envelope proteins that facilitate movement between cells.

Non-LTR retrotransposons, such as long interspersed elements (LINEs), are long stretches (4,000-6,000 nucleotides) of reverse-transcribed RNA molecules. LINEs have two open reading frames: one encoding an endonuclease and reverse transcriptase, the other a nucleic acid binding protein (figure 1). There are approximately 900,000 LINEs in the human genome, i.e. about 21% of the entire human DNA. LINEs are found in the human genome in very high copy numbers (up to 250,000).17

Short interspersed elements (SINEs) constitute another class of VIGEs that may use an RNA intermediate for transposition. SINEs do not specify their own reverse transcriptase and are therefore retroposons by definition. They may be mobilized for transposition by using the enzymatic activity of LINEs. About one million SINEs make up another 11% of the human genome. They are found in all higher organisms, including plants, insects and mammals. The most common SINEs in humans are Alu elements. Alu elements are usually around 300 nucleotides long, and are made up of repeating units of only three nucleotides. Some Alu elements secondarily acquired the genes necessary to hop around in the genome, probably through recombination with LINEs. Others simply duplicate or delete by means of unequal crossovers during cell divisions. More than one million copies of Alu elements, often interspersed with each other, are found in the human genome, mostly in the non-coding sections. Many Alu-like elements, however, have been found in the introns of genes; others have been observed between genes in the part responsible for gene regulation; and still others are located within the coding part of genes. In this way SINEs affect the expression of genes and induce variation. Alu elements are often mediators of unequal homologous recombinations and duplications.18
Figure 2. Schematic view of the central role VIGEs may play to generate variation, adaptations and speciation events. Lower part: VIGEs may directly modulate the output of (morpho)genetic algorithms due to position effects. Upper part: VIGEs that are located on different chromosomes may be the result of speciation events, because their homologous sequences facilitate chromosomal translocations and other major karyotype rearrangements.

Repetitive triplet sequences (RTSs) present in the coding regions of proteins are a class of VIGEs that cannot actively transpose. RTSs are usually found as an intrinsic part of the coding region of proteins. For instance, RTSs can be formed by a tract of glycine (GGC), proline (CCG) or alanine (GCC) codons. Usually RTSs form a loop in the messenger (m)RNA that provides a docking site for chaperone molecules or proteins involved in mRNA translation. RTSs may increase or decrease in length through slippery DNA polymerases during DNA replication.
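The slippage mechanism can be pictured with a minimal simulation. The probabilities in this Python sketch are invented for illustration; it only shows how a repeat tract can reversibly gain and lose units over successive replications, which is the property appealed to later for rapid, reversible phenotypic change.

# Toy model of replication slippage: a "slippery" polymerase occasionally
# adds or drops one repeat unit, so tract length drifts up and down
# reversibly. The slippage probability is invented for illustration.
import random

def replicate(repeats: int, rng: random.Random, p_slip: float = 0.1) -> int:
    """Return the repeat count after one replication, +/-1 unit on slippage."""
    if rng.random() < p_slip and repeats > 1:
        return repeats + rng.choice((-1, 1))
    return repeats

rng = random.Random(42)
tract = 10                    # e.g. ten GGC (glycine) codons in a coding region
history = [tract]
for _ in range(20):           # twenty rounds of replication
    tract = replicate(tract, rng)
    history.append(tract)
print(history)                # the tract length wanders but never vanishes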
Conclusions and outlook
Now that we have redefined ERVs as a specific class of VIGEs, which were present in the genomes from the day they were created, it is not difficult to see how RNA viruses came into being. RNA viruses have emerged from VIGEs. ERVs, LINEs and SINEs are the genetic ancestors of RNA viruses. Darwinists are wrong in promoting ERVs as remnants of invasions of RNA viruses; it is the other way around. In my opinion, this view is supported by several recent observations. RNA viruses contain functional genetic elements that help them to reproduce like a molecular parasite. Usually, an RNA virus contains only a handful of genes. Human immunodeficiency virus (HIV), the agent that causes AIDS, contains only eight or nine genes. Where did these genes come from? An RNA world? From space? The most parsimonious answer is: the RNA viruses got their genes from their hosts.

The Rous sarcoma virus (RSV), which has the ability to cause tumours, has only four genes: gag, pol, env and src. In addition, the virus is flanked by a set of repeat sequences that facilitate integration and promote replication. Gag, pol and env are genes commonly present in ERVs. The src gene of RSV is a modified host-derived src gene that normally functions as a tyrosine kinase, a molecular regulator that can be switched on and off in order to control cell proliferation. In the virus, the regulator has been reduced to an on-switch only, one that induces uncontrolled cell proliferation. The src gene is not necessary for the survival of RSV, and RSV particles can be isolated that have only the gag, pol and env genes. These have perfectly normal life cycles, but do not cause tumours in their host. It is clear the virus picked up the src gene from the host. Why wouldn't the whole vector be derived from the host? VIGEs may easily pick up genes or parts thereof as the result of an accidental polymerase II read-through. This will increase the genetic content of the VIGE because the gene located next to the VIGE will also be incorporated. An improper excision of VIGEs may also include extra genetic information. Imagine, for instance, HERV-K, a well-known human-specific endogenous retrovirus, transposing itself to a location in the genome where it sits next to the src gene. If in the next round of transposition a part of the src gene were accidentally added to the genes of HERV-K, it would already have transformed into a fully formed RSV (see figure 3). It can be demonstrated that most RNA viruses are built of genetic information directly related to that of their hosts.
Figure 3. RNA viruses originate from VIGEs through the uptake of host genes. In the controlled and regulated context of the host DNA, genes and VIGEs are harmless. A combination of a few genes integrated in VIGEs may start an uncontrolled replication of VIGEs. In this way, VIGEs may take up genes that serve to form the virus envelope (to wrap up the RNA molecule derived from the VIGE) and genes that enable them to leave and re-enter host cells. Once VIGEs become full-blown shuttle vectors between hosts, they act as virulent, devastating and uncontrolled replicators. Hence, harmless VIGEs may degenerate into molecular parasites in a similar way to how normally harmless cells turn into tumours once they lose the power to control cell replication. VIGEs are the basis of RNA viruses, not the other way around. The scheme outlined here shows how the Rous sarcoma virus (RSV) may have formed from a VIGE that integrated the env gene and part of the src gene (a proto-oncogene; for details see text).

The outer membranes of influenza viruses, for instance, are built of hemagglutinin and neuraminidase molecules. Neuraminidase is a protein that can also be found in the genomes of higher host organisms, where it serves to modify glycopeptides and oligosaccharides. In humans, neuraminidase deficiency leads to neurodegenerative lysosomal storage disorders: sialidosis and galactosialidosis.19 Even so-called orphan genes, genes that are only found in viruses, can usually be found in the host genomes. Where? In VIGEs! To become a shuttle vector between organisms, all that is required is to have the right tools to penetrate and evade the defences of the host cell. HIV, for instance, acquired part of a gene of the host's defence system (the gp120 core) that binds to the human beta-chemokine receptor CCR5.20

These observations make it plausible that all RNA viruses have their origin in the genomes of living cells through recombination of the host's DNA elements (genes, promoters, enhancers). Every now and then such an unfortunate recombination produces a molecular replicator: it is the birth of a new virus. Once the virus escapes the genome and acquires a way to re-enter cells, it has become a fully formed infectious agent. It has long been known that bacteria use genes acquired from bacteriophages (bacterial viruses that insert their DNA temporarily or even permanently into the genome of their host) to gain reproductive advantage in a particular environment. Indeed, work reaching back decades has shown that prophage (the integrated virus) genes are responsible for producing the primary toxins associated with diseases such as diphtheria, scarlet fever, food poisoning, botulism and cholera. Diseases are secondary, entropy-facilitated phenomena. Virologists usually explain the evolution of viruses as recombination: that is, a mixing of pre-existing viruses, a reshuffling and recombination of genes.21 In bacteria, viruses may therefore be recombined from plasmids carrying survival genes and/or transposable genetic elements, such as IS elements.
Discussion
Where did all the big, small and intermediate noses come from? Why are people tall, short, fat or slim? What makes morphogenetic programs explicit? The answer may be VIGEs. It may turn out that the created kinds were designed with baranomes that had an ability to induce variation from within. This radical view implies that the baranome of man may have been designed to contain only one morphogenetic algorithm for making a nose. But the program was implicit. The program was designed in such a way that a VIGE easily integrated into it, becoming a part of it, hence making the program explicit. Most inheritable variation we observe within the human population may be due to VIGEs: elements that affect morphogenetic and other programs of baranomes. It should be noted that a huge part of the genomic sequences are redundant adaptors, spacers, duplicators, etc., which can be removed from the genome without major effects on reproductive success (fitness). In bacteria, VIGEs have been coined IS elements; in plants they are known as transposons; and in animals they are called ERVs, LINEs, SINEs and microsatellites. What these elements are particularly good at is inducing genomic variation. It is the copy number of VIGEs and their position in the genome that determine gene expression and the phenotype of the organism. Therefore, these transposable and repetitive elements should be renamed after their function: variation-inducing genetic elements. VIGEs explain the variations Darwin referred to as due to chance. In a future article I will address the details of a few specific classes of VIGEs and argue why modern genomes are literally riddled with VIGEs. With the realization that RNA viruses have emerged from VIGEs, the RNA paradox is solved. For many mainstream scientists this solution will be bothersome, because VIGEs were frontloaded elements of the baranomes of created kinds, and that implies a young age for their common ancestor and that all life is of recent origin.
The design of life: part 4 - variation-inducing genetic elements and their function
by Peer Terborg
Endogenous retroviruses (ERVs) are claimed to be the selfish remnants of ancient RNA viruses that invaded the cells of organisms millions of years ago and now merely free-ride the genome in order to be replicated. This selfish-gene thinking still dominates the public scene, but well-informed biologists know that the view among researchers is rapidly changing. Increasingly, ancient RNA viruses and their remnants are thought to have played (and still to play) a significant role in protein evolution, gene structure and transcriptional regulation. As argued in part 3 of this series of articles, ERVs may be the executors of genetic variation, and qualify as specifically designed variation-inducing genetic elements (VIGEs) responsible for variation in higher organisms. VIGEs induce variation by duplication and transposition, and may even rearrange chromosomes. This extraordinary claim requires extraordinary scientific support, which is presented throughout this paper. In addition, the VIGE hypothesis may provide a framework to understand the origin of diseases and to explain rapid speciation events through facilitated chromosome swapping.
The idea that mobile genetic elements are involved in creating variation is not new. Barbara McClintock, who discovered the first mobile genetic elements in maize, was also the first to recognize the true nature of such jumping genetic elements. In 1956, she suggested that transposons (as she coined them) function as molecular switches that could help determine when nearby genes turn on and off. Her key insight was that all living systems have mechanisms available to restructure and repair the chromosomes. When it was discovered that more than half of the human genome consists of (remnants of) mobile elements, McClintock's ideas were revived and further developed by Roy Britten and Eric Davidson.1 It is only recently that we have begun to understand the power of VIGEs (variation-inducing genetic elements) as genetic regulators and switches. A team of investigators led by Haussler recently provided direct evidence that even when a short interspersed nucleotide element (SINE) lands at some distance from a gene, it can take on a regulatory role with powerful regulatory functions:2

"Haussler and his colleagues then looked at a particular example, a copy of the ultra-conserved element that is near a gene called Islet 1 (ISL1). ISL1 produces a protein that helps control the growth and differentiation of motor neurons. In the laboratory of Edward Rubin at the University of California, Berkeley, postdoctoral fellow Nadav Ahituv combined the human version of the LF-SINE sequence with a reporter gene that would produce an easily recognizable protein if the LF-SINE were serving as its on-off switch. He then injected the resulting DNA into the nuclei of fertilized mouse eggs. Eleven days later, he examined the mouse embryos to see whether and where the reporter gene was switched on. Sure enough, the gene was active in the embryos' developing nervous systems, as would be expected if the LF-SINE copy were regulating the activity of ISL1."3

This excerpt shows that some functions of SINEs are easily uncovered because they directly affect the expression of a particular gene. However, most functions of SINEs may not be as easily detected as described above, because they can integrate in gene deserts (regions of the genome where the chromosomes are devoid of any recognizable protein-coding genes), or they may only subtly affect the expression of morphogenetic programs. Gene expression patterns largely determine how cells behave and determine the morphology of organisms. VIGEs integrated in such genetic programs will change expression patterns of genes, which will result in different cellular behaviour and morphology. Whether the ultimate effect on the phenotype of the organism can be predicted, however, remains to be established. This is largely because we still do not know what morphogenetic algorithms look like. Of course, biologists have argued that evolution and development are determined by homeobox (HOX) genes, but HOX genes are merely executors of developmental (or morphogenetic) programs; they are not the programs themselves.
In another study by the same group, thousands of short identical DNA sequences that are scattered throughout the human genome were analyzed. Many of those sequences were located in gene deserts, which are in fact so clogged with regulatory DNA elements that they have recently been renamed "regulatory jungles". But what do they regulate? The answer could be morphogenesis. Most of the short DNA elements cluster near genes that play a decisive role during an organism's first weeks after conception. The elements help to orchestrate an intricate choreography of when and where developmental genes are switched on and off as the organism lays out its body plan. These elements may provide a sort of blueprint for how to build the animal. The exact mechanism by which such sequences may function as a plan to build an animal is not entirely clear, but the DNA elements are particularly abundant near genes that help cells to stick together. That stickiness is important in an organism's early life phase because these genes help cells to migrate to the right location and to form organs and tissues of the correct shape. The 10,402 short DNA sequences studied by Bejerano are derived from transposable genetic elements: retrotransposons that duplicate themselves and hop around the genome. Apparently, transposable genetic elements are not what they have been mistakenly thought to be: mess makers. Indeed, the view that transposable elements are just bad stuff is rapidly changing. In an interview with Science Daily, Bejerano says: "We used to think they were mostly messing things up. Here is a case where they are actually useful."4

The genome is literally littered with thousands of transposable elements. The word is that when ancient retroviruses slipped bits of their DNA into the primate genome millions of years ago, they successfully preserved their own genetic legacy.5 It is hard to imagine that they all have functions, but their presence could certainly determine or fine-tune the output of nearby genes. In this way they may create subtle, but novel, variation. Bejerano and Haussler's research has already identified a handful of transposons that serve as regulatory elements, but it is not clear how common the phenomenon might be. The 2007 study showed that the phenomenon may be a general one: "Now we've shown that transposons may be a major vehicle for evolutionary novelty."4

The new findings indeed show that, in many cases, transposable elements function as regulators of gene output, but major vehicles for evolution from microbe to man they are not. The transposition of jumping genetic elements may certainly affect gene expression patterns, but it does not follow that they produce new genetic information. Considering the biological data, it seems reasonable that transposable elements are present in the genome to deliberately induce biological variation. Transposable elements thus qualify as variation-inducing genetic elements (VIGEs), and by leaving copies they make sure the new variation is heritable. The transposable elements present in regulatory jungles do not produce new biological information, but they induce variation in the genetic algorithms and may underlie rapid adaptive radiation from uncommitted pluripotent genomes. The regulatory jungles may provide an active reservoir of VIGEs that put existing genes in new regulatory environments.
Regulated activity of VIGEs
The chromosome of the E. coli strain K12 includes three cryptic operons (linear genetic programs that encode the machinery to metabolize three alternative sugars): one for cellobiose, one for arbutin and one for salicin. The organization of those operons is like a normal substrate-induced bacterial operon, but the operons themselves are abnormal in that they are cryptic (silent) in wild-type strains. Even in the presence of the alternative sugars the operons are not activated, which indicates that these bacteria don't readily use alternative sugars. Unused cryptic operons are redundant genetic programs that are not observed by natural selection:

"As cryptic genes are not expressed to make any positive contribution to the fitness of the organism, it is expected that they would eventually be lost due to the accumulation of inactivating mutations. Cryptic genes would thus be expected to be rare in natural populations. This, however, is not the case. Over 90% of natural isolates of E. coli carry cryptic genes for the utilization of beta-glucoside sugars. These cryptic operons can all be activated by IS [insertion-sequence] elements, and when so activated allow E. coli to utilize beta-glucoside sugars as sole carbon and energy sources."6

The excerpt shows that operons are kept inactive by repressors; that is, proteins that sit on the DNA of the operon to ward off the nanomachines responsible for gene expression. Operons will only be active in bacteria that don't have a functional gene coding for the repressor. Disrupting the repressor gene releases the cryptic programs. That's where the VIGEs come in. The transposition and integration of an IS element into the silencer elements is the mutational event that activates the cryptic operon. Usually, the lack of an appropriate carbon and energy source triggers transposition of IS elements. The transposition of IS elements appears to be regulated by starvation, and the integration in the repressor gene is not utterly random. For instance, position 472 in the ebgR gene of the ebg operon of E. coli is a hotspot for integration of IS elements, but only under starvation conditions. VIGEs may thus accumulate and integrate at well-defined positions in the genome; this indicates a site-specific mechanism.
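The logic of this regulated activation can be captured in a deliberately simplified toy model. Everything quantitative below (transposition probabilities, generation counts) is invented for illustration; the sketch merely encodes the qualitative behaviour described above: the operon stays silent while the repressor gene is intact, and starvation raises the chance that an IS element disrupts the repressor.

# Minimal sketch of IS-mediated derepression of a cryptic operon. All
# rates are invented; only the qualitative logic follows the text above.
import random

def generation(repressor_intact: bool, starving: bool, rng: random.Random) -> bool:
    """Return whether the repressor gene is still intact after one generation."""
    p_transpose = 0.2 if starving else 0.001  # starvation boosts IS activity
    if repressor_intact and rng.random() < p_transpose:
        return False                          # IS element lands in the repressor
    return repressor_intact

rng = random.Random(0)
repressor = True
for gen in range(1, 101):
    repressor = generation(repressor, starving=True, rng=rng)
    if not repressor:
        print(f"generation {gen}: repressor disrupted, cryptic operon active")
        break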
In the fruit fly, some non-LTR retrotransposons integrate at very specific sites, but others have been shown to integrate more or less at random. The specificity is determined by endonucleases, enzymes that cut the DNA.7 Assuming VIGEs are part of a designed genome, we must expect that their transposition and activity can be controlled and regulated. To avoid deleterious effects on the host and the retrotransposon, we may expect the activity of VIGEs to be regulated both by retrotransposon- and host-encoded factors. Indeed, the mechanism of transposition seems to be dictated by the species in which the VIGEs operate. Recent research has shown that in zebrafish the transposable element known as the NLR integrant usually carries a few extra nucleotides at the far end of the sequence, but it is not expressed in human cells.8 This observation would argue for the involvement of host-specific protein machinery in transposition, one more argument for the design origin of VIGEs.

From the design perspective, we may expect that the activity of VIGEs used to be a tightly controlled process. This is because the genomes in which they operate also specify control factors: retroviral restriction factors. The restriction factors are proteins with the ability to bind to retroviral capsid proteins and target them for degradation. Several restriction factors have been identified, including Fv1, Trim5-alpha and Trim5-CypA.9 These factors share the common property of containing sequences that promote self-association: that is, they can assemble themselves. This fact, together with the observation that the restriction factors are encoded by unrelated genes, is clear evidence of purposeful design. Retroviral restriction factors play an important role in innate immunity against invading RNA viruses. For instance, Trim5-alpha binds directly to the incoming retroviral capsid core and targets it for premature disassembly or destruction.10 In addition, some integrated VIGEs show evolutionary-tree deviations, indicating a sequence-specific integration/excision mechanism. For instance, Alu HS6 is present in human, gorilla and orangutan, but not in chimpanzee (see figure 1). This highly peculiar observation prompted the investigators to consider the possibility of the specific excision of this Alu element from the chimpanzee's genome.11 Precise excision implies precise integration.

Figure 1. The Alu HS6 insertion sites in human, chimpanzee, gorilla, orangutan and owl monkey. Note the complete
absence in chimpanzee and owl monkey of any evidence for an extraction site. This suggests a highly specific mechanism
for integration and/or extraction. Alternatively, the sequences are a molecular falsification of the common descent of
primates.
Synthetic biologists at Johns Hopkins University have built, from scratch, a LINE1-based retrotransposon: a genetic element capable of jumping around in the mouse genome. The man-made retrotransposon was designed to be a far more effective jumper than natural retrotransposons; indeed, it inserts itself into many more places in the genome.12,13 Why don't all LINEs jump so effectively? The scientists who constructed the synthetic LINE changed the regulator sites used in transposition. Native LINE1 elements are relatively inactive in mice when they are introduced into the mouse genome as transgenes. The synthetic LINE1-based element, ORFeus, contains two synonymously recoded ORFs relative to mouse L1 and is far more active. This indicates that the integration and excision of native LINE1 elements are controlled and regulated by an as yet unknown mechanism.
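"Synonymously recoded" means that codons are swapped for synonyms, so the DNA sequence changes while the encoded protein does not. The sketch below illustrates the idea with a tiny, hypothetical synonym table covering four codons; it is not the actual recoding scheme used for ORFeus.

# Synonymous recoding: rewrite an ORF codon by codon without changing the
# protein. Each pair below encodes the same amino acid (Gly, Pro, Lys, Ala);
# the table is a tiny illustrative subset of the genetic code.

SYNONYM = {"GGC": "GGT", "CCG": "CCA", "AAA": "AAG", "GCC": "GCT"}

def recode(orf: str) -> str:
    """Replace each codon with a synonymous one where the table offers one."""
    codons = [orf[i:i + 3] for i in range(0, len(orf), 3)]
    return "".join(SYNONYM.get(c, c) for c in codons)

orf = "GGCCCGAAAGCC"   # Gly-Pro-Lys-Ala, written one way ...
print(recode(orf))     # ... and rewritten with different codons: GGTCCAAAGGCT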
VIGEs qualify as redundant genetic elements that can simply be erased from the genome without fitness effects. As long as VIGEs do not upset critical genomic functions and do not affect the reproductive success of the carrier, they are selectively neutral. Therefore, not only the VIGEs themselves, but also the mechanisms by which they integrate, may readily wither and degrade due to the accumulation of debilitating mutations. The control over integration and activity we observe today may be less stringent than how it was originally designed. The originally fine-tuned control of excision and transposition may have deteriorated over time, and what is left today are more or less freely moving elements that may predominantly cause havoc when they integrate in the wrong location. It is easy to understand how, for instance, endonucleases became less specific through mutations. This view may also explain why VIGEs are often found associated with heritable diseases. As long as VIGE activity and integration do not significantly affect the fitness of the organisms in which they operate, they are free to copy and paste themselves along the genome. Indeed, inactivating VIGEs have been observed in genes not immediately required for reproduction. The GULO gene, which qualifies as a redundant gene in populations with high vitamin C intake, has been hit several times by VIGEs, and this may have contributed to the pseudogenization of GULO in humans.14

Over time, VIGEs may have become increasingly detrimental to the host's genome. That is because the information that regulates the integration and activity of VIGEs is itself subject to mutation. Some VIGEs have been associated with susceptibility or resistance to diseases. In asthma, increased susceptibility appears to be associated with microsatellite DNA instability (a term used for copy-number differences in repetitive DNA sequences).15 Psoriasis is also associated with HERV expression.16 It should be clear that deregulated and uncontrolled VIGEs cause havoc when they integrate into and disrupt functional parts of genes.

From the vantage of design, VIGE transpositions would make sense during meiosis, the process leading to the formation of gametes. Controlled activity of VIGEs during meiosis may be responsible for variation that can be passed on to the offspring. Although information is scant, it has been shown in fungi17 and plants18 that VIGEs become active during meiosis and even have mechanisms to silence deleterious bystander effects, such as deleterious point mutations.17 This shows that transposable elements function to induce genetic variation, providing the flexibility for populations to adapt successfully to environmental challenges. In chimpanzees, for instance, it has been documented that large blocks of compound repetitive DNA, which have demonstrated retrotransposon function, induce and prolong the bouquet stage in meiotic prophase and affect chiasma formation.19 This may seem like a mouthful, but it merely means that these repetitive genetic elements facilitate sister-chromosome exchanges when reproductive cells (sperm and eggs) are being generated. Mammalian VIGEs, in particular Alu sequences, have the ability to induce genetic recombination and duplications and to contribute to chromosomal rearrangements, and they may account for the major part of the variation observed in humans. The methylation pattern of Alu sequences possibly determines activity and/or serves as a marker for genomic imprinting or in maintaining differences in male and female meiosis.21
VIGEs and the human family
When short triplet repeat units are present in the coding part of a gene, they may even have functional consequences. There is evidence that repeat units in the Runx2 gene formed the bent snout of the Bull Terrier in a few generations.22 Likewise, in mice and dogs, having five or six toes is determined by a repeat unit in the Alx4 gene.23 These novel phenotypes can form almost overnight, i.e. within one generation. Repetitive coding triplets that can be gained or lost provide another mechanism to generate (instant) variation. It should be noted that this mechanism leads to reversible genetic change, because a lost repetitive unit can readily be added back through duplication of a preexisting one, and vice versa (see the sketch below). Therefore, the RTS mechanism may explain seasonal changes in beak size observed in Galapagos finches, adaptive phenotypes in Australian snakes, and the evolution of the cichlid varieties in African lakes.
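The reversibility of repeat gain and loss is easy to see in a toy model. The following Python sketch (a hypothetical illustration, not the actual Runx2 or Alx4 sequence) adds and removes one in-frame "CAG" unit, lengthening and shortening a polyglutamine tract without shifting the reading frame:

```python
# Sketch of reversible variation from in-frame triplet repeats: adding or
# removing one "CAG" unit lengthens or shortens a polyglutamine (Q) tract
# without shifting the reading frame. Toy sequence for illustration only.

CODON_TO_AA = {"ATG": "M", "CAG": "Q", "GGT": "G", "TAA": "*"}

def translate(dna: str) -> str:
    return "".join(CODON_TO_AA[dna[i:i + 3]] for i in range(0, len(dna), 3))

def add_repeat(dna: str, unit: str = "CAG") -> str:
    i = dna.find(unit)                  # duplicate an existing unit in place
    return dna[:i] + unit + dna[i:]

def drop_repeat(dna: str, unit: str = "CAG") -> str:
    i = dna.find(unit)                  # remove one copy of the unit
    return dna[:i] + dna[i + len(unit):]

gene = "ATG" + "CAG" * 4 + "GGTTAA"     # M QQQQ G *
longer = add_repeat(gene)               # gain one repeat: M QQQQQ G *
restored = drop_repeat(longer)          # loss restores the original exactly
print(translate(gene), translate(longer), translate(restored))
assert restored == gene                  # the change is fully reversible
```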
If we accept the idea of deliberately designed VIGEs, we may also expect these elements to have played an important role in determining the variety of human phenotypes. In other words, human races are the result of the activity of VIGEs! Biologists used to think that our genomes all had the same basic structure: the same number of genes, in roughly the same order, with a few minor differences in the sequence of DNA bases. Now, technologies that compare whole human genomes are revealing that this picture is far from complete. Michael Wigler at Cold Spring Harbor Laboratory provided the first evidence that human genomes are strikingly variable: his group showed marked differences in the copy number of protein-coding genes.24 Apparently, some people have more copies of certain genes, and large-scale copy number polymorphisms (CNPs) (about 100 kilobases and greater) contribute substantially to genomic variation between individuals.25 In addition, people not only carry different copy numbers of parts of their DNA; they also have varying numbers of deletions, insertions and other major rearrangements in their genomes.

In 2005, Evan Eichler of the University of Washington reported 297 locations in the genome where different individuals have different forms of major structural variation. At these spots some carry a major deletion, for example, or an extra hundred bases of DNA. Differences between individuals were found in the protein-coding genes; structural differences were also observed between individual genomes.26 From these and other studies we now know that every one of us shares only about 99% of our DNA with all the other people on Earth.27 The difference is due to repetitive sequences that easily amplify or delete parts of the genome. With this, we have discovered another class of VIGEs. These highly variable repetitive sequences also explain why genetic screening methods are so reliable nowadays: they detect copy-number differences and hence are capable of discriminating between the DNA of a father and his son. Yes, fathers and sons apparently differ at the level of VIGEs (see the sketch below)!

A comparison of Asian and Caucasian people showed that 25% of
more than 4,000 protein-coding genes had significantly different expression patterns. Some gene expression levels differed by as much as twofold.28 The researchers commented that these findings support the idea that there are genetically determined characteristics that tend to be clustered in different ethnic groups. Some genes are simply not expressed at all, or are simply not present in the genomes. For instance, the gene UGT2B17 is deleted more often in Asians than in Caucasians, and has a mean expression level that was more than 20 times greater in Caucasians relative to Asians. How can such big differences be explained? Of course, single nucleotide polymorphisms (SNPs; i.e. point mutations) in regulatory sequences could affect gene regulation patterns. It is not clear, however, whether the SNPs themselves might be regulating gene expression or whether they travel together with other DNA that's the regulator. We may also expect VIGEs to be responsible for differences observed between human races.
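How copy-number differences can discriminate between individuals, even a father and his son, can be sketched in a few lines. The loci and copy numbers below are invented for illustration only:

```python
# Toy illustration of discriminating individuals by copy-number profile.
# Each profile lists the copy number of a few repetitive loci; the values
# below are invented for illustration, not measured data.

father = {"locus_A": 12, "locus_B": 7, "locus_C": 31, "locus_D": 2}
son    = {"locus_A": 12, "locus_B": 9, "locus_C": 28, "locus_D": 2}

def cnv_distance(p1: dict, p2: dict) -> int:
    """Total absolute copy-number difference across shared loci."""
    return sum(abs(p1[k] - p2[k]) for k in p1)

d = cnv_distance(father, son)
print(f"copy-number distance: {d}")     # non-zero: profiles distinguish them
assert d > 0                             # father and son differ at CNV level
```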
VIGEs and chromosome 2


Human chromosome 2 looks as if it is the product of the fusion of two chromosomes that we find in chimpanzees as chromosomes 12 and 13. Therefore, some Darwinists take human chromosome 2 as the ultimate evidence for common descent with chimpanzees. We know that a fusion of two ancestral chromosomes would have produced a human chromosome 2 with two centromeres. Currently, human chromosome 2 has only one centromere, so there must be molecular evidence for remnants of the other. In 1982, Yunis and Prakash studied the putative fusion site of chromosome 2 with a technique known as fluorescence in situ hybridization (FISH) and reported signs of the expected centromere.29 In 1991, another study also reported signs of the centromere.30 In 2005, after the complete sequencing of human chromosome 2, we would have expected full proof of the ancestor's centromere. However, even after intense scrutiny there are still only signs of the centromere. If signs of the centromere were already observed in 1982, why can it not be proved in the 2005 sequence analysis? Apparently, the site mutated at such high speed that it is no longer recognizable as a centromere:

'During the formation of human chromosome 2, one of the two centromeres became inactivated (2q21, which corresponds to the centromere of chromosome 13) and the centromeric structure quickly deteriorated.'31

Why would it quickly deteriorate? Why would this region deteriorate faster than neutral? Close scrutiny in 2005 showed the region that has been interpreted as the ancestor's centromere to be built from sequences present in 10 additional human chromosomes (1, 7, 9, 10, 13, 14, 15, 18, 21 and 22), as well as a variety of other genetic repeat elements that were already in place before the fusion occurred.31 The sequences interpreted as the ancient centromere are merely repetitive sequences and may actually qualify as (deregulated) VIGEs.

The chimpanzee and human genome projects demonstrated that the fusion did not result in the loss of
protein-coding genes. Instead, the human locus contains approximately 150,000 additional base pairs not found in chimpanzee chromosomes 12 and 13 (now also known as 2A and 2B). This is remarkable: why would a fusion result in more DNA? We would rather have expected the opposite: the fusion would have left the fused product with less DNA, since loss of DNA sequences is easily explained. The fact that humans have a unique 150 kb intervening sequence indicates it may have been deliberately planned (or designed) into the human genome. It could also be proposed that the 150 kb DNA sequence demarcating the fusion site may have served as a particular kind of VIGE, an adaptor sequence for bringing the chromosomes together and facilitating the fusion in humans.

Another remarkable observation is that in the fusion region we find an inactivated cobalamin synthetase (CBWD) gene.32 Cobalamin synthetase is a protein that, in its active form, has the ability to synthesize vitamin B12 (a crucial cofactor in the biosynthesis of nucleotides, the building blocks of DNA and RNA molecules). Deficiency during pregnancy and/or early childhood results in severe neurological defects because of impaired development of the brain. The Darwinian assumption is that the cobalamin synthetase gene was donated by bacteria a long time ago and was afterwards inactivated. Nowadays, humans must rely on microorganisms in the colon, as well as dietary intake (a substantial part coming from meat and milk products), for their vitamin B12 supply. It is also noteworthy that humans have several copies of inactivated cobalamin-synthetase-like genes at a number of locations in the genome, whereas chimpanzees have only one inactivated cobalamin synthetase gene. That the fusion must have occurred after man and chimp split is evident from the fact that the fusion is unique to humans:

'Because the fused chromosome is unique to humans and is fixed, the fusion must have occurred after the human-chimpanzee split, but before modern humans spread around the world, that is, between 6 and 1 million years ago.'32

The molecular analyses show we are more unique than we ever thought we were, and this is in complete accordance with creation. Apparently the fusion of two human chromosomes may have been the result of an intricate rearrangement or activation of repetitive genetic elements after the Fall (as part of, or executors of, the curse following the Fall), and it inactivated the cobalamin synthetase gene. The inactivation of the gene may have reduced people's longevity in a similar way to the inactivation of the GULO gene, which is crucial to vitamin C synthesis.14 Understanding the molecular properties of human chromosome 2 is no longer problematic if we simply accept that humans, like the great apes, were originally created with 48 chromosomes. Two of them fused to form chromosome 2 when mankind went through a severe bottleneck.33 And, as argued above, the fusion was mediated by VIGEs (see figure 2).
Figure 2. Putative mechanism for how human chromosome 2 formed through the fusion of two ancestral chromosomes p2 and q2, which are similar to chimpanzee chromosomes 12 and 13. Like the great apes, the original human baranome may have contained 48 chromosomes. A) Independent transposition events may have led to the integration of a relatively small variation-inducing genetic element (VIGE). B) Extended duplication events of the VIGE may have resulted in rapid expansion of the region in both p2 and q2, preparing it to become an adapter sequence required for fusion. C) The expanded homologous regions align and facilitate the fusion of the chromosomes. The fusion region (2q21) and other parts of the modern human genome still show the remnants of this catastrophic event that occurred only in humans: the cobalamin synthetase gene was inactivated, and several inactive copies, which are not found in the chimpanzee, are scattered throughout the genome. Speculative note: before the great Flood, and probably shortly after, a balancing dynamics of both 48 and 46 chromosomes may have been present in the human family. This may explain the two extreme cranial morphologies present in the human fossil record. The Homo erectus/Neandertal humans may have had a karyotype comprising 48 chromosomes (non-fused p2 and q2), whereas the other humans had 46 (fused p2 and q2).
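One testable expectation of any end-to-end fusion scenario is a junction where forward telomere repeats (TTAGGG) run into their reverse complement (CCCTAA). The following Python sketch scans a toy sequence for such a junction; it is only an illustration, since the repeats at the real fusion site are highly degenerate and require far more tolerant matching:

```python
import re

# Minimal sketch: scan a sequence for the head-to-head telomere junction
# (TTAGGG repeats running into their reverse complement, CCCTAA) that a
# chromosome-end fusion is expected to leave behind. Real junction data is
# highly degenerate, so an actual analysis must tolerate imperfect repeats;
# the toy sequence below is invented for illustration.

def find_fusion_junction(seq: str, min_units: int = 2):
    """Return the position of a (TTAGGG)n(CCCTAA)n junction, or None."""
    pattern = rf"(?:TTAGGG){{{min_units},}}(?:CCCTAA){{{min_units},}}"
    m = re.search(pattern, seq)
    return m.start() if m else None

toy = "ACGT" + "TTAGGG" * 3 + "CCCTAA" * 3 + "GGCA"
print(find_fusion_junction(toy))                    # 4 -> junction found
print(find_fusion_junction("ACGTACGTTTAGGGACGT"))   # None -> no junction
```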
The upside-down world
The p53 protein is a mammalian transcription factor that functions as the main switch controlling whether cells divide or go
into apoptosis (programmed cell death, which is sometimes required for severely damaged cells that may become tumours).
Scientists have long wondered how p53 gained the ability to turn on and off more than 1,200 genes related to cell division, DNA repair and programmed cell death. Without the p53 control system organisms would not function: all life would have died as bulky tumours.

Biologists at the University of California now claim that ancient retroviruses helped p53 become an important master gene regulator in primates.34 An RNA virus invaded the genome of our common ancestor, jumped into hundreds of new positions throughout the human genome and spread numerous copies of repetitive DNA sequences that allowed p53 to regulate many other genes, the team contends. Studies such as these prompted Darwinians to change their minds about jumping genetic elements. In other words, a randomly hopping ERV is supposed to have provided the human genome with carefully regulated decision-making machinery. The idea is beyond reasonable belief. Darwinists tend to mix things up. What really happened in the human genome is a read-through of polymerase II into a VIGE that was next to a gene that already contained a binding site for p53. Or maybe the VIGE was excised improperly, taking a bit of a flanking gene containing the p53 binding site. Next, the modified VIGE amplified, transposed, amplified and so on (see the sketch below). That explains the origin of this family of transposons. A similar story can be told for the syncytin gene, which encodes a protein of the mammalian placenta that helps the fertilized egg to become embedded in the uterus wall. Since syncytin has also been found on a transposable element,35 mammals are alleged to have obtained the gene from an RNA virus that infected a mammalian ancestor millions of years ago. It is more likely, however, that syncytin was captured by a VIGE.
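The capture-and-amplify scenario sketched above can be illustrated with a toy simulation: a mobile element that has picked up a regulatory motif is copy-pasted around a genome string, multiplying the motif count with each jump. The single character "P" stands in for a real p53 response element of roughly 20 bp:

```python
import random

# Toy model of the capture-and-amplify scenario described above: a mobile
# element that has picked up a regulatory motif ("P" below stands in for a
# real ~20-bp p53 response element) is copy-pasted around a genome string,
# multiplying the motif count with every jump. Purely illustrative.

random.seed(1)
MOTIF, ELEMENT = "P", "xxPxx"            # element carries the captured motif

genome = list("." * 50)
for _ in range(5):                        # five transposition events
    pos = random.randrange(len(genome))
    genome[pos:pos] = list(ELEMENT)       # copy-and-paste insertion

g = "".join(genome)
print(g)
print("motif copies:", g.count(MOTIF))    # 5 dispersed copies of the motif
```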
In bacteria it is often observed that genes conveying a specific advantageous character are transmitted via plasmids. Plasmids often contain genes for alternative metabolic routes or genes that provide resistance to antibiotics, and they replicate independently from the host's genome. Plasmids easily shuttle between microorganisms via a DNA uptake process known as transformation (or horizontal gene transfer). The uptake of plasmids is regulated and controlled, and is DNA-sequence dependent. The result of DNA transformation is rapid adaptation to, for instance, antibiotics. Likewise, viruses replicate independently from the genomic DNA, leaving many copies and easily transferring from one organism to another. Viruses are not plasmids, although some viruses may have a similar function in higher organisms as plasmids do in bacteria: they may be able to aid in rapid adaptation to changing environments. It has been observed that a virus can indeed transfer an adaptive phenotype. A virus present in the fungus Curvularia protuberata can induce heat resistance in tropical panic grass (Dichanthelium lanuginosum), allowing both organisms to grow at high soil temperatures in Yellowstone National Park. This shows that viruses still provide strategies for rapid adaptation:

'Fungal isolates cured of the virus are unable to confer heat tolerance, but heat tolerance is restored after the virus is reintroduced. The virus-infected fungus confers heat tolerance not only to its native monocot host but also to a eudicot host, which suggests that the underlying mechanism involves pathways conserved between these two groups of plants.'36

In fruit flies, wing pigmentation depends on a gene known as yellow. The gene exists in the genome of all
individual fruit flies, but in some it is not active. By analysing the genetic origin of the spots on fruit fly wings, researchers
have discovered a molecular mechanism that explains how new patterns of pigmentation can emerge. The secret appears
to be specific genetic elements that orchestrate where proteins are used in the construction of an insect's body. The segments do not code for proteins, but rather regulate the nearby gene that specifies the pigmentation. As such, these regulatory DNA segments qualify as VIGEs. The researchers transferred the regulatory DNA segment from a spotted species (Drosophila biarmipes) into another species not expressing the spot (D. melanogaster), and attached the regulatory region to a gene for a fluorescent protein. They found that the fluorescent gene was expressed in the spot-free species in exactly the same patterns as the yellow gene is expressed in the spotted species. By comparing several spotted and spot-free species, the scientists established that mutation of a regulatory DNA segment led to the expression of the spotted trait. They discovered that in the species with spotted wings this regulatory segment has multiple binding sites for a protein that then activates the yellow gene. Spotless species do not have multiple binding sites.37 The multiplicity of regulatory DNA segments may argue for an amplification mechanism or targeted integration of the regulatory sequence. That explains why the same pattern of pigmentation can emerge independently in distantly related species (Darwin's analogous variation). The observed shuttle function of viruses leads me to pose an intriguing question: were endogenous retroviruses originally designed to serve as shuttle vectors to deliver messages from the soma to the germ-line? If yes, then it would put Lamarckian evolution in an entirely new perspective.
Discussion
The findings of the new biology demonstrate that mainstream scientists are wrong regarding the idea that transposable
elements are the selfish remnants of ancient invasions by RNA viruses. Instead, RNA viruses originate from transposable
elements that were designed as variation-inducing genetic elements (VIGEs). Created kinds were deliberately frontloaded
with several types of controlled and regulated transposable elements to allow them to rapidly invade and adapt to all corners
and crevices of the earth. Due to the redundant character of VIGEs, their controlled regulation may have readily deteriorated
and some of them may now merely cause havoc. The VIGE hypothesis provides elegant explanations for several biological
observations that may otherwise be difficult to interpret within the creationist framework, including the origin of diseases
(RNA viruses) and chromosome rearrangements. The VIGE hypothesis may be a framework for extended creationist
research programs. Some intriguing questions can already be raised.
Were VIGEs intentionally designed to cause diseases? No, they were not. It is conceivable that the transposition and integration of VIGEs is not entirely random. VIGEs may have been originally present in the baranome as controlled and regulated elements, activated upon intrinsic or external triggers. To induce variation in offspring, triggers for the transposition of VIGEs could be released during meiosis, when the reproductive cells are being produced.
The emergence of RNA viruses from VIGEs may be a result of the Fall, when we were cut off from the regenerating, healing power of the designer.
Why are some VIGEs located at exactly the same position in primates and humans? Each original baranome must have had a limited number of VIGEs, some of which we still find at the same location in distinct species. In distinct baranomes, VIGEs may have been located at exactly the same positions (the T-zero location), which then explains why some VIGEs, such as ERVs, can be found in the same location in, for instance, primates and humans. In addition, sequence-dependent integration of VIGEs may also contribute to this observation.
How could bdelloid rotifers, a group of strictly asexually reproducing aquatic invertebrates, rapidly form novel species? Asexual production of progeny, as observed in bdelloids, is found in over one half of all eukaryotic phyla and is likely to contribute to adaptive changes, as suggested by recent evidence from both animals and plants.38 The bdelloids may have been derived from pluripotent baranomes containing numerous DNA transposons and retroelements, including active LTR retrotransposons containing gag-, pol- and env-like open reading frames.39 These elements are able to reshuffle the genomes and facilitate instant variation and speciation.
Do we also observe remnants of DNA viruses in the mammalian genomes? If not, this supports my idea that RNA viruses emerged from VIGEs, and implies DNA viruses have a different origin; probably, as with the Mimivirus,40 they originated from degenerated bacteria.
Why was a class of VIGEs designed with information for protein capsids? The capsid may have been acquired from the host's genome, or it may have been designed to prevent the RNA molecules from attaching themselves to, or finding, integration sites. A very speculative idea may be that these VIGEs were designed to shuttle information from the soma to the germ-line. One thing is clear, however: creation researchers have loads of work to do.
And then there was life
by Gordon Howard
What is the difference between figure 1 and figure 2? Both are patterns of light and dark. Both are arrangements of the same 12 particular shapes in the same groupings. Both exhibit a complexity of arrangement. The probability of either arrangement arising by chance is similar. Neither arrangement has been produced by any action of the properties of the material they appear on.

But there is a world of difference between the two, and that difference is equivalent to the difference between the imagined primordial soup1 of non-living chemicals and a living cell. This is because a living cell is not a random collection of chemicals, but an incredibly complex machine controlled by information stored in a computer-like program.

The essential difference between the figures is simply that figure 2 carries information while figure 1 does not. This difference has nothing to do with the material the figures are made of. That is, the difference cannot be detected by physical means. It is immaterial, existing only in the reader's mind, and then only if the reader speaks English. That is, only if the reader understands the inherent code.

Could the arrangement of figure 2 arise by chance? Yes, but then it would not necessarily carry information. Consider a set of randomly selected letters that happened to form the pattern I LOVE YOU. It would not actually be carrying the information we might like, because the letter I (for example) would be just a letter like any other. It would not represent anything, such as the concept of a particular person. There would be no sender (because it is random), no intended recipient, no code, and therefore no meaning; it would just be a pattern of shapes no more significant than any other.

Figure 2 carries information only if the pattern of shapes conforms to an agreed
code; that is, if it is specified by a set of rules, such as the rules of the English language, and represents the concept of something not physically present. Furthermore, it only carries information if that code can be interpreted by another party or process, through some decoding machinery in a recipient. In other words, the pattern needs to be filtered through a set of rules which can then be used to put the information into action. Only then does it become meaningful, because meaning does not arise from the arrangement, but from the interpretation, or decoding, of the arrangement. That is what happened when you decoded the pattern in figure 2.

While a required arrangement (such as figure 2) might arise by chance,2 its rules of interpretation cannot, since the rules for coding and decoding are likewise non-material, an abstraction, and therefore can only be formulated and understood by an intelligence.3 Neither can the specification for arriving at the particular arrangement in the first place arise by chance, again because the rules for the specificity (the language that determines the arrangement of the letters) cannot arise from any property of matter. Thus these rules are also the work of intelligence, or mind. Information, therefore, cannot arise from inanimate matter by chance.

However, a living organism requires information to function. This is because a living organism requires carefully specified materials and processes, not only for itself, but for its replication. In fact, reproduction is part of the definition of a living thing. Replication assumes instructions for the process of building the replicant from scratch, all the while maintaining a functioning organism, and thus needs still more information than that needed simply to live.

It is obvious that the specifications, or information, for all the processes needed for an organism to grow, live and reproduce must have been previously conceived and stored before the organism could begin its life. It is now stored in and interpreted by its DNA, but none of it could have occurred by chance in the beginning, since no primordial soup or primitive organism can generate information, or a code system for storing it. Further, a random arrangement of DNA nucleotides would not carry any information. Since the source of all information is a mind, this situation is an absolute indicator that the source of life was in a mind rather than in non-thinking materials or chemical or physical processes. This implies an intelligent, volitional Designer.

Trying to put a living thing together using only materials, without information, is like soldering wires together to try to produce a computer program. Just as a robot without a program is no better than a statue, so a cell containing its biotic chemicals without its instructions (its DNA) would be no better than disorganized primordial soup. It's no wonder that Craig Venter produced his synthetic life using the information and reading machinery of previously living cells.4

A living cell lives, not because it contains bio-chemicals, but because it can carry out its encoded instructions for life processes: processes for making and deploying those bio-chemicals. Thus, a living cell lives on information; information necessarily conceived in the mind of its designer, before life began.

First there is information, and then there is life.
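The article's probability point can be made quantitative with a toy calculation. Assuming a 27-symbol alphabet (26 letters plus a space), the chance that a single random draw produces any particular 10-symbol pattern, meaningful or not, is the same:

```python
# Toy calculation behind the article's point: a random draw is equally
# unlikely to hit "I LOVE YOU" as any other specific 10-symbol pattern.
# Assumes a 27-symbol alphabet (26 letters plus space).

target = "I LOVE YOU"
alphabet_size = 27
p = (1 / alphabet_size) ** len(target)
print(f"P(one random draw matches) = {p:.3e}")   # ~4.9e-15 for 10 symbols
# The same probability applies to any fixed 10-symbol string, meaningful or
# not -- which is why meaning must come from a code, not from improbability.
```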
Cell systems - what's really under the hood continues to drop jaws
By Brian Thomas
Two 2009 papers summarized recent discoveries
of utterly unforeseen intricacy, adaptability,
robustness and precision in regulating gene
expression, even in simple cells.
Gene expression in eukaryotic cells
I conservatively counted 24 recently discovered
mechanisms that help regulate gene expression
in eukaryotic cells, as reviewed by Moore and
Proudfoot.1 Here are just a few of them.
Figure 1. Widely regarded as having the simplest genome, Mycoplasma gene expression is instead far more complicated than expected. It performs functions that had been considered the sole domain of higher eukaryotes. For example, DNA is transcribed in both the sense and antisense directions, indicating that valuable genetic information is double-stacked. RNA transcripts undergo post-transcriptional modifications, single enzymes have more than one application, and when certain metabolic breakdowns occur, the cell is able to formulate a workaround solution. Illustration after sciencemag.org.

Chromatin is not loosely wadded DNA inside cellular nuclei. Instead, it is very precisely organized, with specific portions dynamically looped outward. Each loop is associated with a separate nuclear pore, and can retract to a storage position when appropriate. Robust and efficient machinery ensures that the correct portions of chromatin are unspooled from nearer the center of the nucleus to an appropriate nuclear pore. Each pore is extremely active, with a host of interacting regulatory RNAs, proteins, and ribonucleoproteins.2 These send and receive communications from and toward the farthest ends of the RNA and protein manufacturing processes.

RNA polymerase does not typically transcribe DNA in fluid space, but is attached to a cadre of proteins associated with each nuclear pore. This way, the rapidly emerging RNA transcript is already proximal to the pore, through which much of it will exit to the cytoplasm. Further, cell biologists have determined that the first copy of a transcript is like a practice run. This first, rough-draft RNA transcript serves either as a quality-control run, so that its integrity is ensured prior to full manufacture and export from the nucleus, as a primer for the total set of transcript-processing machinery to be properly set, as a chemical communicator providing information to downstream processes, or as all three.
Warming up for transcription
In addition, extracellular messages are transferred from the cell membrane to the nuclear pore sites via biochemical
cascades, and these influence whether or not a gene region will switch from being transcribed into these rough abortive
transcripts, or into full-length, properly marked and exported transcripts. It appears that transcription machinery is constantly
transcribing in an idle mode, but when the correct switches are tripped, the machinery fully engages. In full production
mode, RNA transcripts often become marked for translation to proteins. Some of the switching messengers are proteins that
are temporarily restrained by other proteins, which in turn can release them upon detection of certain cell signals carried by
yet more precisely interacting biochemicals. For example, even sugar moieties riding on proteins have been found to act as
a safety switch that regulates the microswitches which fine tune protein expression during cell division.3
Full-on eukaryotic transcription runs super-fast
When all systems are go, transcription proceeds with fully processive elongation of the full body of the gene. 1 Inside the
nucleus, the relevant DNA is pulled, like a loop of magnetic tape, across a nuclear pore. Some of the proteins involved in
this action are named Set1, PAF, Spt6, FACT and Chd1, along with histone proteins. This way, the emerging transcript is
under the constant watchful attention of a wide array of sensory, quality control, marking, and transporting machinery, all
kept near the pore by precise chemical interactions specified by exactly arranged biomolecular sizes, shapes, charges, and
polarities.It was known that transcripts in eukaryotic cells undergo cut-and-pasting as well as splicing. It is now known that
this occurs simultaneously with manufacture, and requires a separate host of proteins. However, those pre-mRNA splicing
proteins directly interact with the RNA polymerase assemblage, which all works together to react to pause-sites in the gene
it is transcribing. RNA polymerase acts like a molecular juggernaut,1 streaming RNAs out as though through a jet engine. It
must be slowed down in order for cutting and splicing machinery to have opportunity to insert. Since not all DNA pause sites
become RNA cut sites, and since the alternative combinations of cut and spliced mRNA transcripts can specify a wide
variety of regulatory or catalytic RNAs and proteins from just one gene, 4 it is apparent that somehow precise
communication occurs to discern which pause sites will result in cuts.
In yeast, a model eukaryote, the THO/TREX protein complex serves three roles: one in transcription, one in transcript-dependent recombination, and one in mRNA export.1 And it does these while in constant communication with machine parts that are involved in transcript initiation as well as parts involved in slowing and stopping transcription. It is therefore one of many proteins and protein complexes being discovered to have multiple functions: a clear sign of elegant engineering.
Process flow management in translation
The emerging RNA transcript then gets labeled with specific protein markers. The markers had already been gathered to the
nuclear pore site, and are presented to the nascent transcript just inside the nucleus. The immediacy of labeling thus is vital.
It guards against the dangers of having naked RNAs in the nucleus, as described below. The markers, too, serve multiple
purposes. The more splices in the transcript, the more markers are attached, and this eventually causes more efficient
translation because a transcript thus bedecked is more likely to have some surface exposed to cytoplasmic proteins vital to
translation. The markers also signal watchdog nuclear pore proteins to expedite the transcript's export.

These same watchdog proteins also serve to prevent naked transcripts from re-entering the nucleus. This is vital, for bits of RNA naturally anneal to unzipped DNA. If this happened, it would quickly create havoc in the nucleus by both generating mutations and gumming up the many nuclear processes that depend on accurate DNA recognition, clamping, spooling, unwinding, and so on.

After export, the cytoplasmic machinery links each transcript to other machines. Some of these shepherd the transcript toward a ribosome. Each time a transcript has been thus shepherded, some of its markers are removed, with most being lost after its first round of translation. Eventually the transcript becomes naked, difficult for translational machinery to detect, and subject to degradation. In this way, the freshest and highest-quality transcripts are by far the most translated by the ribosome.
Eukaryotic gene expression is astonishing
Effective quality control mechanisms constantly cull corrupt transcripts. For example, if a transcript did not have the correct
signal sequence attached when it was first formed, due to gene mutation or an error in processing, the compromised
molecule would have been recognized immediately at the nuclear pore, and degraded by RNase enzymes. This ensures
that downstream processes are not gummed up with useless transcripts. Quality control is critical to forming the correct
products in the needed amounts, and at appropriate paces.

Other systems produce a stockpile of quality transcripts in strategic pockets within the cytoplasm. This way, there can be a tightly controlled burst of the desired [protein] product.1

There is no indication that the discovery pace of more mind-bogglingly brilliant cell processes will slow down anytime soon. If none of the above made sense, then let the reader be edified by the glowing research summary:

'At every point along the way, multifunctional proteins and [ribonucleoprotein] complexes facilitate communication between upstream and downstream steps, providing both feedforward and feedback information essential for proper coordination of what can only be described as an intricate and astonishing web of regulation.'1
The simple Mycoplasma
Mycoplasma pneumoniae bacteria, long considered the simplest prokaryotes, can no longer be described thus. It is a parasitic bacterium (M. pneumoniae causes 'walking pneumonia') with a reduced genome size. It relies on its host for certain nutrients that its ancestors apparently were able to manufacture. Thus, it has undergone significant genomic decay.
How Mycoplasma bacteria really work
The authors of a paper in Science endeavored to investigate how a cell actually accomplishes necessary processes using
the most basic subject of study.5 But they ran into a juggernaut of layered information-rich complexity that inspired their
assessment:
'Together, these findings suggest the presence of a highly structured, multifaceted regulatory machinery, which is unexpected because bacteria with small genomes contain relatively few transcription factors ... revealing that there is no such a thing as a simple bacterium.'5

Specifically, evolutionists Ochman and Raghavan cited research which found that in many cases, when the sense strand of a protein-coding gene is transcribed, the complementary or antisense strand is also transcribed. The resulting sense mRNA is eventually translated to protein, and the resulting antisense mRNA binds to the sense mRNA to make a double-stranded RNA. This slows its path toward translation, and is thus an important speed regulator. This was previously only known to occur in eukaryotes.
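The sense/antisense pairing described here follows directly from base-pairing rules: the antisense transcript is the reverse complement of the sense transcript, so the two RNAs can zip up into double-stranded RNA. A minimal Python sketch with a toy sequence:

```python
# Minimal sketch of sense/antisense transcription from the same locus: the
# antisense transcript is the reverse complement of the sense transcript,
# so the two RNAs can base-pair along their full length into double-stranded
# RNA, stalling translation as described above. Toy sequence.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(rna: str) -> str:
    """Reverse complement of an RNA string."""
    return "".join(COMPLEMENT[b] for b in reversed(rna))

sense = "AUGGCGUAUUAG"
anti = antisense(sense)
print(anti)  # CUAAUACGCCAU

# Every position of the sense strand pairs with the corresponding position
# of the reversed antisense strand:
assert all(COMPLEMENT[s] == a for s, a in zip(sense, reversed(anti)))
```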
Mycoplasma cells have eukaryotic complexity
In other experiments, different environmental growth conditions caused different lengths and segments of genomic DNA to become transcribed. This implies a suite of chemical communication cascades from the cell wall inward, as well as the ability to make alternate products from one gene. This, too, was a surprise, previously known only in eukaryotes.

Like eukaryotic cells, these simplest among prokaryotes have multifunctional proteins which can be used in different metabolic pathways as backup machines. Other data strongly suggest that newly manufactured proteins can be altered by other cellular machinery. Termed post-translational modification, this was taught dogmatically in my 1998 graduate biochemistry courses as exclusive to eukaryotes.

Also shocking was the discovery that over 90% of Mycoplasma proteins are involved in protein complexes, again like eukaryotes. Another genome-wide survey found indirect evidence of tight gene expression regulation, but nobody yet knows the mechanism for it. The authors finally argue that because Mycoplasma is still alive even after such reduction in the quality and quantity of its genome, it must have an underlying eukaryote-like cellular organization replete with 'intricate regulatory networks' and 'innovative pathways'.5
Where did Mycoplasma get all this in the first place?
These authors then bravely ask, 'How did these remarkable layers of gene regulation and the highly promiscuous [multifunctional] behavior of proteins in M. pneumoniae arise?'4 But they instead explain that 'the reduced efficacy of selection' that operates on the genomes of host-dependent bacteria, 'reductions in long-term effective population size' [from the bottleneck that occurred when the bacteria first became host-dependent], and 'the accumulation and fixation of deleterious mutations in seemingly beneficial genes' due to genetic drift together cause a reducing genome size.5

If selection, bottlenecks, and mutations only reduced the genome, then these processes are no help at all. What in nature expanded the genome with the ingeniously useful data that the remarkably robust yet genomically truncated Mycoplasma retains plenty of?
Conclusion
At every level, scientists have uncovered more information. That information takes the form of three-dimensional shapes and electronic and charge configurations, as well as raw coding-sequence information. Communication pathways, routines and subroutines, prioritizing, quality control, and process-regulation plans are all stunningly effective and strikingly small.

More in-depth knowledge of these fantastically complicated cell features demands greater faith from naturalists in the belief that laws of chemistry built cells. The more informational structures that are found, the greater the gap between the organization in living system parts and the disorganization found in non-living chemicals.

A reminder of some inferences about information would seem appropriate here. First, wherever precisely regulated processes due to expertly engineered machines and codes are seen coming into existence, they always come from persons. Stated negatively, these machines and codes are never observed to originate from natural laws. Therefore, it is most parsimonious to infer that wherever similar machines, processes, and codes are found, they, too, were not derived from nature, but instead from a person or persons.

Second, like spoken languages, biological language is irreducibly complex and yet without physical substance. It comes complete with symbols, meanings for those symbols, and a grammatical structure for their interpretation. Remove any one of these three fundamental features, and the informational system is lost. Physics has nothing to do with symbols or grammar, and therefore nothing to do with the origin of life, which cannot exist without its coded information.6

If run-of-the-mill information always comes from a mind, then this cellular information, being extraordinary, came from a mastermind.
Transposon amplification in rapid intrabaraminic diversification
by Evan Loo Shan
Transposons are widespread mobile genetic elements that make up a huge part of the genomes of species. They are so named because of their ability to jump from one place in the genome to another. Often, they are given whimsical names, such as gypsy, Mariner, Tourist, or Pack-MULEs, which reflect their mobility. Barbara McClintock discovered the existence of these elements after witnessing the phenotypical change they brought about after jumping around in the maize genome. Due to evolutionary bias, transposons have generally been regarded as parasitic junk DNA, using the host's genetic machinery to propagate. However, the actual functionality, diversity, and high abundance of transposons justify a revision of this viewpoint. Such rapid transposon accumulation puts the mechanisms for rapid speciation (given a recent creation and subsequent Flood-induced genetic bottleneck) into a new perspective, and may lead to a further development of a scientific basis for baraminological research. This paper deals with the distribution and dispersal of transposons in the light of evolutionary models as well as a creationist reinterpretation. Some calculations of transposition rates are given which support recent creation and rapid intrabaraminic variation. The importance of transposons is discussed in regard to mapping baramin life-histories.
Transposons in general
Table 1. Basic types of human transposons.

In the higher species (eukaryotes), two basic types of transposons can be distinguished: Class I and Class II. Class I transposons replicate through an RNA intermediate, and are therefore called retrotransposons; they end in sequences called long terminal repeats (LTRs). Class I transposons are located in areas where recombination between genes takes place. Members of Class I are the short and long interspersed nuclear elements (called SINEs and LINEs, respectively); these make up a major part of the repetitive elements present in eukaryotic genomes. Class I transposons are usually located further away from the coding regions of genes than the Class II transposons. LINEs can contain ORFs (open reading frames) from a few genes, such as reverse transcriptase or integrase, and are capable of transposing autonomously; they end in LTRs. SINEs are much shorter elements which do not contain any coding sequences. Alu elements are examples of SINEs.

Class II, or DNA, transposons replicate autonomously, using their own genes and proteins to copy their own sequences and insert themselves into other parts of the genome. In this way they are capable of moving parts of the host's genome along within themselves. They are located closer to genes (for example, MITE sequences in cereal genomes) than Class I type transposons. Class II transposons can be divided into many different families and subfamilies, and bear names such as Activator, Mutator, or Helitron. Class II transposons are present in a few hundred or thousand copies per genome at most, and are sparser than Class I type transposons.15 Basic types of transposon elements are depicted in table 1.
Why transposons are a problem for evolutionary theory
Introductory thoughts on the effects of transposons on the genome
The naturalistic view of life assumes that the first simple genome of a living organism emerged from a chemical soup.
Through selectable mutations accumulated over several billion years, this original genome evolved into all the intricate
genomes we observe today. However, genome research in the past 10 years presents a picture of a far more dynamic
genome that has been shaped and sculpted to a significant degree by transposable elements.6 We can see in table 2 that transposons make up a large percentage of the genomes of different organisms.
Table 2. Genome sizes and content of repetitive elements in some well-known organisms.

For example, evolutionists claim the maize genome acquired virtually all of its retrotransposons (which make up about 80% of the maize genome: see table 2) in the last 6 million years.3 This statement is quite profound. First, it raises a question related to species stability. If it really took 6 million years for the maize genome to quadruple in size, then how could the acquisition of such a great quantity of genetic material keep maize the same species for such a long time? Maize was derived from teosinte, a plant hardly recognizable as modern maize. Teosinte was domesticated by the Amerindians over the past few thousand years, making rapid diversification by intelligent selection a more plausible explanation of how the maize genome changed in such a way.

Evolutionists contend that, as in the case of transcription
factor binding sites, random base substitutions can cause the appearance and disappearance of regulatory sequence elements. With transposons inflating the genome in such a manner, large chunks of raw genetic material would appear, out of which new kinds of genes or other genetic elements could be formed. This is similar to how Arabidopsis is supposed to have acquired a major part of its genes: 60% of BAC sequences covering 80% of the Arabidopsis genome were found to contain duplicated segments, yet it remained the same species.7

Since it is commonly accepted that transposons rapidly spread within the genome after colonizing the germ line (when the delicate developmental program is active), this strongly discourages the idea that transposons are only harmful in their phenotypic effect. Actually, some functions can be assigned to repetitive elements; for example, certain structural functions and recombination sites, as well as genome rearrangement through the transpositioning of genetic elements. Transposons can also react to abiotic stresses by regulating the expression patterns of genes through cis-regulatory elements inserted by moving transposons.8 Other functional examples include the induction of alternative splicing, or changing the expression patterns in certain tissues or even the subcellular location of proteins.9 It looks as though researchers will have to rethink the junk DNA theory.10

The concept that transposon-induced gene inflation is not only not producing junk DNA, but that it is also beneficial and strategic, could be taken a step further. An interesting technique for studying the phenotypic effect of multiple genes has been developed in recent years by a Canadian research team, involving the synthesis of a mammalian artificial chromosome (MAC) construct. It has been shown that MAC constructs persisted stably throughout several mouse generations. The interesting thing here is that even with the MAC carrying a whole array of novel genes (each with the potential to severely affect the phenotype), the mice are expected to remain mice. The researchers do not predict that they will evolve into a new species!11,12

Lacking observational evidence, evolutionists can always fall back on the argument that such acquisitions of raw genetic material may indeed give rise to new species, and claim that the rice genome would be, in effect, the wheat genome without the repetitive sequences.3 This implies that changes in transposon content are sufficient to give rise to new species. However, this would still not answer how the coding regions of the wheat/rice genome came about; it only deals with regulatory or contextual changes.13-15 A study done by Kalendar et al. dealing with the copia-type BARE-1 retrotransposon in barley shows that transposable elements can spread rapidly in response to microclimatic divergence.16 In this study, the copy numbers of BARE-1 ranged from 8,300 to 22,100 per haploid barley genome within a 400-m-long gorge at Evolution Canyon in Mount Carmel, Israel.
Such changes surely could not have taken millions of years, because the wild barley studied exhibited variability in the replicative spread of the retrotransposon that is assumed to be correlated with sudden stress due to microclimate variations within the gorge and other climatic factors.

Figure 1. Model of transposon accumulation. The equation used for calculating the number of transposons as a function of time is $n(t) = a\,e^{-e^{-gt}/g}$ (equation (6) below with h = 0, writing a for C).
Evolutionary models dealing with the rate of transposition state that the distribution of transposons within host genomes can take place in short bursts, the cumulative effect of which could eventually lead to genomic obesity. Afterward, genetic material could be slowly lost (although the mechanism remains unclear). It is reasonable to propose that unequal cross-over recombinations or deletions of different sizes may be the underlying mechanism. If so, larger genomes could be expected to contain younger transposons than smaller genomes, because the latter could already have been diminished in size due to deletions.

A study by SanMiguel in 1998 dealing with a number of plant species (such as Arabidopsis, rice, lotus, sorghum, maize, barley and diploid wheat) showed that even by evolutionary standards the studied retrotransposons are all thought to be about the same age.17 It should be noted, however, that the age of the LTR sequences is calculated by the gamma-corrected Kimura two-parameter method and depends on the substitution rate of nucleotides in the LTR sequences. In this model, the age of an LTR sequence is calculated from the substitution rate, but the substitution rate is based on the estimated time to the divergence between the species. It is an obvious case of circular reasoning (see the sketch below).17,18-23 The number of LTR sequences in barley was also shown to correlate with altitude and temperature.24 Parallel to this, the differences in the repetitive content of the wheat/rice genomes could shed light on how intrabaraminic variation could occur, as these two species belong to a single holobaramin.25

Considering these observations, it is clear that a mechanism to induce rapid variation makes more sense in a creationist framework (where new species arise almost instantly) than in the evolutionary model (where it supposedly takes hundreds of thousands of years for novel species to arise). Consequently, it would be a stretch of the imagination that different species persisted for millions of years without having their genomes affected. For example, genome size variation has been observed within the progeny of Helianthus annuus, where the difference in genome size was 14.7%. This would mean 441 Mb (which is larger than the genome of rice itself!) of the 3,000 Mb genome of sunflower.26 With many thousands of copies of transposons within the genome, genomes should have grown quite quickly during a very short period relative to the evolutionary timescale. Contrary to this, evolutionists estimate that, according to gene-loss models, it would take around 1.5 billion years for maize to get rid of the same amount of this excess genetic material.27
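The dating procedure criticized above can be made concrete. The sketch below computes the standard Kimura two-parameter distance between two toy LTR sequences and converts it to an 'age' using an assumed substitution rate; since the rate itself is calibrated from an assumed divergence date, the resulting age simply echoes that assumption. The sequences and the rate are invented for illustration:

```python
import math

# Sketch of the LTR dating procedure discussed above. The Kimura 2-parameter
# distance is the standard formula d = -0.5*ln((1-2P-Q)*sqrt(1-2Q)), with P
# and Q the proportions of transition and transversion differences. The age
# then comes from an *assumed* substitution rate r (age = d / 2r), which is
# itself calibrated from an assumed divergence date -- the circularity noted
# in the text. Sequences and rate below are invented for illustration.

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def k2p_distance(s1: str, s2: str) -> float:
    n = len(s1)
    ts = sum((a, b) in TRANSITIONS for a, b in zip(s1, s2))
    tv = sum(a != b and (a, b) not in TRANSITIONS for a, b in zip(s1, s2))
    P, Q = ts / n, tv / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

ltr5 = "AGCTAGGTCAGCTAAGCTTA"   # 5' LTR (toy)
ltr3 = "AGTTAGGTCGGCTAAGCTCA"   # 3' LTR (toy, a few substitutions)
d = k2p_distance(ltr5, ltr3)

r = 1.3e-8   # assumed substitutions/site/year (hypothetical value)
print(f"d = {d:.4f}, inferred age = {d / (2 * r):,.0f} years")
# Halve the assumed rate and the 'age' doubles: the date is only as good
# as the assumed rate.
```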
Model of transposon accumulation in genomes
Transposons are capable of adding large tracts of DNA to the genome, and it would be of great importance to formulate a mathematical model describing the rate of transposon amplification within a genome. The model presented below is completely hypothetical in nature. Research is required to elucidate the exact way in which transposons accumulate, and may validate or reject the proposed model.

Since the number of new transposons arising within the genome is proportional to the number of transposons capable of replicating themselves, we may say that

$$\frac{dn}{dt} = f(t)\,n(t) \qquad (1)$$

That is, the rate of spread of transposons n(t) within the genome is proportional to the function f(t) of the number of transposons n(t) capable of replicating after t transposition events. Separating variables and integrating ($\int dn/n = \int f(t)\,dt$), we can deduce that

$$n(t) = C\,e^{F(t)} \qquad (2)$$

where n(t) is the number of transposons after t transposition events, C is a constant, and F(t) is the primitive function (antiderivative) of the function f(t), which is characteristic of the rate of transposon accumulation within the genome.

From this we may deduce one of two possible things. If the function f(t) is a constant k, so that F(t) = kt, then

$$n(t) = C\,e^{kt} \qquad (3)$$

meaning that the number of transposons in a genome after t transpositions grows exponentially. If so, it would lead to an
exponential explosion relatively quickly. This would lend support to the creationist model, which predicts large numbers of transposons accumulating in genomes only recently (that is, with a short period of time allowed for accumulation to occur). It would also be in line with evidence from other fields of science that supports the recent creation/worldwide Flood model (for example, high mutation rates are observed, yet the number of mutations that have occurred since mitochondrial Eve is too small if we assume long ages; also, extremely high rates of radioactive decay are suggested by the creationist RATE team).28,29 In the evolutionary framework, the model implies a runaway, out-of-control accumulation of transposons in the genome a long time ago. Since we see genomes still intact, this means that runaway transposon accumulation has not yet occurred in the relatively short time since creation.

However, if the function f(t) is not constant, then the rate of transposon accumulation may change over time. A further investigation of the function f(t) reveals important characteristics about the dynamics of transposon accumulation. We know that the lower and upper bounds of the function f(t) are 0 and ln 2, respectively:

$$0 < f(t) \le \ln 2 \qquad (4)$$

The lower bound 0 would mean a complete stasis in the accumulation of transposons within the genome, resulting in no increase, i.e. no transposition and/or amplification; therefore, f(t) is always greater than zero. Since after any t number of transposition events the maximum number of transposons within the genome is n(t) = 2^t (each element can at most double per event), then

$$n(t) \le 2^t = e^{t\,\ln 2} \qquad (5)$$

If we take the function f(t) to decay exponentially, we get

$$f(t) = e^{-gt} + h, \qquad n(t) = C\,e^{-\frac{e^{-gt}}{g} + ht} \qquad (6)$$
When the variable t (= transposition events) increases, the term e^{-gt}/g decreases toward 0. The remaining factor, n(t) = C e^{ht}, however, still describes an exponential growth of transposon copy acquisition. But if in equation (6) h is equal to 0, we arrive at an equation for a sigmoidal curve, n(t) = C e^{-e^{-gt}/g}. According to this model, transposon accumulation tails off after an exponential burst in an earlier phase. This means that after an initial burst phase of transposon accumulation, a lag phase follows, characterized by a shutdown of transposon activity. This is noteworthy, because it fits with certain aspects of the AGEing hypothesis of Todd Wood,30,31 who contends that genetic rearrangement occurred during a certain period of time after the Flood in the genomes of organisms to allow the rapid phenotypic change necessary for
adaptive dispersal via genetic variation.

In figure 1 we can see a hypothetical situation where the number of transposons is calculated as a function of transposition events. The equation n(t) = 100,000 e^{-e^{-t}} (if we assume that g = 1) gives a sigmoidal curve of the form n(t) = a e^{-e^{-t}}, where e^{-e^{-t}} ranges from 0 to 1. Therefore, a denotes the maximum number of transposons in the genome (see the sketch below).

The obvious question is, if the number of transposons has already reached a plateau, then how long has this plateau condition persisted? Evolutionists could argue that it has continued for an indefinitely long time. This would mean that all transposon activity has had ample time to shut down completely. Contrary to this, some transposons have been shown to be active in a number of organisms, such as humans.27,32 However, very few plant retrotransposons have been shown to be transcriptionally active (one is BARE-1 in barley).1,8,33 MITE sequences in plants have not been shown to excise, except for the rice mPing element, which is also indicative of their low transposase activity.1,9 This would mean that we are presently at the top shoulder of the sigmoidal curve, where transposon activity is slowly dying out. This is marked by the presence of many defective transposon sequences within genomes.
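The sigmoidal case of the model is easy to tabulate numerically. The sketch below evaluates n(t) = a e^{-e^{-gt}/g} with the figure 1 values (a = 100,000 and g = 1), showing the early burst followed by a plateau:

```python
import math

# Numerical sketch of the sigmoidal case of the accumulation model above
# (equation (6) with h = 0): n(t) = a * exp(-exp(-g*t)/g), using the values
# from figure 1 (a = 100,000 transposons, g = 1). Note the early burst
# followed by a plateau at n = a.

a, g = 100_000, 1.0

def n(t: float) -> float:
    return a * math.exp(-math.exp(-g * t) / g)

for t in [0, 1, 2, 3, 5, 8, 15]:
    print(f"t = {t:2d}  n(t) = {n(t):9.0f}")
# t =  0  n(t) =     36788
# t =  2  n(t) =     87342   <- burst phase
# t = 15  n(t) =    100000   <- plateau: activity has effectively shut down
```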
The importance of transposons in baraminology studies

Many repetitive sequences are either species- or genus-specific in bacteria, plants and animals, and are thought to promote speciation.34 This is good news for baraminologists, since transposons can therefore be used as a sort of signature to identify members of a baramin. This would mean that transposons could be used as a diagnostic tool to determine whether or not a species is a member of a given baramin, as in the sketch below.
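A minimal sketch of such a diagnostic test: species are grouped according to whether their genomes carry a shared marker element. The species and marker sets are invented for illustration (cf. BARE-1 as a candidate signature in barley-like grasses):

```python
# Toy sketch of using transposon families as baraminic signatures: species
# are grouped by which diagnostic elements their genomes carry. Marker sets
# below are invented for illustration (cf. BARE-1 in barley-like grasses).

markers = {
    "species_1": {"BARE-1", "MITE-x"},
    "species_2": {"BARE-1", "MITE-x", "MITE-y"},
    "species_3": {"RIRE-1"},
}

def same_group(a: str, b: str, diagnostic: str) -> bool:
    """Classify two species together if both carry the diagnostic element."""
    return diagnostic in markers[a] and diagnostic in markers[b]

print(same_group("species_1", "species_2", "BARE-1"))  # True: grouped
print(same_group("species_1", "species_3", "BARE-1"))  # False: not grouped
```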
Figure 2. Average number of LTR sequences (in thousands) per barley genome. The average number of LTR sequences is shown for the H, Y and I genomes of barley for the species listed in table 3. The number of LTR sequences for tetraploid genomes was divided by two. The data were taken from Vicient et al., ref. 33. Data for genome X were not used because they came from only a single species.
By counting the number of the various transposons in the genomes of different species that belong to the same baramin, we can get a picture of the life history of a given baramin. In other words, by following the change in the number of a given transposable element, we can estimate which species originated from a particular baranome (see refs. 14 and 15). For example, particular MITE sequences can be found at the same position in the genomes of different plant genomes because of their relative stability.1 Therefore MITEs can be used as landmark or reference sequences to mark the inflation or change of a certain baranome. Also, the BARE-1 transposon element is widespread and specifically found in a number of grass species (such as wheat, rye and oats), each with slightly diverged sequence, whereas it is absent in other species. This may indicate that BARE-1 is a baramin-specific transposable element.24 A similar diagnostic transposable element is the RIRE-1 element in rice.1

The number of BARE-1 elements in a genome can be approximated by the number of LTR, in and rt sequences within the genome. According to Vicient et al.,33 the number of copies of these elements decreases in the Y, H and X genomes of different barley species (Hordeum spp.) as compared to the I genome of barley, and may reflect the spreading of the transposon during the life history of the Hordeum monobaramin (see figure 2). In the genus Hordeum, the I genome is the most representative of barley, and contains the most sequences. The Y, H and X genomes are characteristic of other barley species, and contain a decreasing number of these elements, Y having the highest. Furthermore, Vicient et al. also found that genome size was negatively correlated (r = -0.593) with genetic distance from barley, meaning that the genomes of the Hordeum species may have inflated in parallel with their acquisition of transposable elements.
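As an illustration of this diagnostic idea, the following Python sketch groups species by shared marker-element families. The species labels follow the barley discussion above, but the copy numbers are invented placeholders (the real figures are in table 3 and ref. 33); only presence or absence of a family is used.

# Hypothetical sketch: transposon families as baraminic "signatures".
# Copy numbers are illustrative only.
profiles = {
    "Hordeum (I genome)": {"BARE-1": 14000},
    "Hordeum (Y genome)": {"BARE-1": 9000},
    "Hordeum (H genome)": {"BARE-1": 4000},
    "Oryza sativa":       {"RIRE-1": 7000},
}

def shares_marker(a, b):
    """True if two species share at least one marker-element family."""
    return bool(set(profiles[a]) & set(profiles[b]))

print(shares_marker("Hordeum (I genome)", "Hordeum (Y genome)"))  # True
print(shares_marker("Hordeum (I genome)", "Oryza sativa"))        # False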
Table 3. Number of LTR sequences in different species of barley.

Table 3 is a list of barley species with different types of genomes (H, Y and I) and the number of LTR sequences (in thousands) they contain, which is supposedly equal to the number of BARE-1 transposons in the genome. In addition, table 3 shows that barley genomes roughly fall into three groups: group I contains the highest number of LTR sequences; group Y has an intermediate number; and group H has the lowest.

According to mainstream evolutionists, the wheat genome is essentially equivalent to the rice genome plus repetitive elements. Since transposon activity adds large tracts of DNA to the genomes of organisms, and because transposons do not easily back-mutate, they may be a tool for tracing back the life history of baramins. For the rice and wheat genomes, which belong to the same baramin,30 it would be an interesting endeavour to map species relationships as a function of transposon content.
Figure 3. Model of the life history of baramins following transposon amplification. According to this model, genome size expands over time after Creation/Flood along with transposon content. The archebaramin is at the base of the baraminic tree, and represents the original genome with little or no transposon content. At different intervals, transposon invasion and amplification can occur, causing large-scale intrabaraminic diversification (represented by the large branches). At different branch points, monobaraminic variation can occur, as seen in the genus Hordeum.33

In this respect it is interesting to determine whether species with the same gene content and colinearity all classify as members of the same holobaramin. For example, microcolinearity has been shown to exist between certain parts of the genome in rice and members of the tribe Triticeae, even though the distance between genes may be up to at least sevenfold.35 Similarly, species with about the same transposon content may be members of the same monobaramin, such as species in the genus Hordeum, which show intrabaraminic (and even intraspecies) variation. The nature and degree of variation would obviously be helpful in determining ancestry. In contrast, species with the same gene colinearity but with different transposon content could be members of different monobaramins. This is because different numbers of transposons would have accumulated after speciation occurred.

Because of the widespread dispersion and conserved LTR termini of transposons, molecular techniques such as REMAP (Retrotransposon-Microsatellite Amplification Polymorphism) and IRAP (Inter-Retrotransposon Amplified Polymorphism) may be useful tools in tracking the spread of transposons within baramins.36 Given their large difference in genome size, rice and wheat could be members of different monobaramins. Moreover, species in a given baramin with small genome size could be members of the archebaramin, representing the original state of the baranome before the amplification process started. This model is presented in figure 3.

The life history of a baramin undergoing transposon amplification is analogous to an uninflated balloon on which a number of dots/bars are drawn and connected to each other by lines (see figure 4). The dots represent different genes, whereas the lines represent the intergenic spaces. Inflating the balloon is analogous to an increase in transposon content: the further the dots move from each other on the surface of the balloon, the greater the length of the intergenic regions becomes.

This is in accord with a study in rice, sorghum, and maize, which showed significant differences in a certain segment of the Adh1-F locus between the three species, although the genes in this region were mainly colinear. In this case, homologs of the Adh1 and u22 genes were 50 kbp apart in sorghum, but 120 kbp apart in the larger maize genome. The gene density in this region was approximately one gene per 9-12 kbp in rice and sorghum, whereas the density was one per 30-80 kbp in maize, which shows intrabaraminic variation due to transposon amplification.37 This shows that determination of gene colinearity in related species such as cereals could be of great help in exploring the boundaries of baraminology.38,39 Furthermore, microcolinearity of genes is proof of a young age for plant species since, if they really are millions of years old according to evolution, then the order of their genes should have become scrambled past recognition.
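As a quick check on the figures just quoted, the arithmetic below restates them as expansion ratios; the 9-12 and 30-80 kbp density ranges are my reading of the garbled source numbers, so treat the exact ratios as illustrative.

# Worked arithmetic on the Adh1-region figures quoted above.
sorghum_span_kbp, maize_span_kbp = 50, 120
print(maize_span_kbp / sorghum_span_kbp)  # 2.4x expansion of Adh1-u22 spacing

# Gene-density ranges (one gene per N kbp), as read from the text.
rice_sorghum_density = (9, 12)
maize_density = (30, 80)
print(maize_density[0] / rice_sorghum_density[1])  # ~2.5x at the low end
print(maize_density[1] / rice_sorghum_density[0])  # ~8.9x at the high end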
Figure 4. Genome model of gene colinearity and retrotransposon markering. In this model, six species are represented by six concentric circles of different line thicknesses. The six circles represent chromosomes with genes at specific intervals. The first five chromosomes, starting from the centre of the circles, represent species belonging to a specific baramin (e.g. the grasses), while the outer circle represents a genome belonging to another baramin (e.g. Arabidopsis). Here we can see that genes (black dots/bars) are colinear in the case of the first five circles/chromosomes/species, since they belong to the same holobaramin. The concentricity of the circles also illustrates baramin-specific transposon amplification. We can see that the 2nd and 3rd circles contain three elements (grey bars) denoting specific transposon elements that are monobaramin-specific. The 4th and 5th circles contain light grey elements that are also monobaramin-specific.

When gene colinearity was studied between Arabidopsis (Brassicaceae) and rice, it was found that ESTs (expressed sequence tags) from rice had very low homology with genes on the chromosomes of Arabidopsis, even at the protein level. This was interpreted by our evolutionary friends to indicate that the genomes of both plants had eroded too much for a successful comparison. In other words, gene colinearity and order were unrecognizable.40
In a separate study of rice, wheat and Arabidopsis, researchers found that out of 46 types of rice copia elements, only two (Adena and Osr8) were present in Arabidopsis, and even the Osr8 element was thought to be in silico contamination.41 Similarly, a computer analysis of Tourist and Stowaway rice short inverted-repeat elements in the non-coding regions of 413 Arabidopsis genes failed to identify a single repeat longer than 30 bp.9 It is most interesting to note that Moore et al. have found that the genomes of a number of grass species can even be circularized (formed into a circle) around one another and divided into 19 colinear rice linkage segments that are all representative of the ancestral grass genome (in our case, the genome of the archebaramin).42 This mode of representation of the genomes of a single monobaramin may even be adapted to all of baraminology.

Furthermore, when comparing mammalian and plant transposons, we find that SINEs and LINEs are more common in mammalian genomes, whereas MITEs and LTR retrotransposons are more common in plant genomes. We can take these transposons as marker elements common to the mammalian and plant apobaramins, respectively. These would be examples of baramin-specific transposable element markers.

Creationists could interpret this observation to support the notion that colinearity of genes is evidence of intrabaraminic relationships; for example, the lack of colinearity of genes between Arabidopsis and rice demonstrates the discontinuity between the monocot grasses and the dicot Brassicaceae, thereby assigning these two plant groups to separate baramins.
Conclusion
The process of genome expansion by means of transposable elements as observed in several plant species shows that
genomes can be moulded quite dynamically without crossing evolutionary boundaries. Contrary to mainstream
assumptions, the expansion of genomes via transposon amplification is much faster than anticipated by the evolutionary model. Nor is this the type of speciation required to evolve from microbe to man. In addition, the rapid spread of transposable elements within these genomes indicates that genomes are recent. Variation induced by the accumulation of transposable elements, and fast-track speciation events, are very rapid phenomena and fit nicely with the young-age timescale. The large number of transposable elements also gives support to the Wood model of rapid baraminic diversification
after the Flood followed by subsequent widespread deactivation. Furthermore, the distribution of certain transposable
elements shows that they can be used as marker elements in baraminology studies. Considering the increasing body of
evidence that transposable elements induce variation in baranomes, and may even be involved in post-Flood speciation
events, they should be renamed variation-inducing genetic elements (VIGEs; as proposed by Terborg, ref. 14).
Glossary
BAC sequence: a sequence derived from a bacterial artificial chromosome (BAC) clone.
Copia element: a common type of retrotransposon with retrovirus-like sequence organization.
EST: expressed sequence tag, used to detect gene transcripts. Usually short in length, covering only part of a gene.
gag/prt/pol/env proteins: a number of proteins coded for by retrotransposon-type elements and necessary for transposition.
Gamma-corrected Kimura two-parameter method: a substitution model for calculating genetic distances between DNA sequences.
in sequence: a domain within the BARE-1 element encoding the integrase protein, needed for replication.
LTR: long terminal repeat, a type of sequence flanking LTR retrotransposons and involved in the insertion of the transposon.
MITE: miniature inverted-repeat transposable element, a short transposon of several hundred bp that is restricted in transposition. May contain genetic regulatory elements.
ORF: open reading frame, the part of a gene which can potentially be translated into peptides/proteins.
rt sequence: a domain within the BARE-1 element encoding the reverse transcriptase protein, needed for replication.
Myriad mechanisms of Gene regulation
by Alex Williams
The 2007 ENCODE pilot study report on the human genome showed astonishing complexity in the structure of the information stored on, in and around the DNA molecule.1 Now come two new studies that show astonishing complexity in the function of the information copying and usage systems in cells.

Transcription (copying) of information from a DNA molecule onto a messenger RNA molecule is carried out by a molecular
machine called RNA polymerase (RNAP). Initiation of the transcription process (schematic A) is followed by the engagement
of the transcription machinery resulting in the elongation of the RNA strand (schematic B). A pause during this process helps
to regulate the rate of copying. A molecular model of the real system is shown (C). The DNA is shown as the twin coils of
rounded blue bead-like nucleotides protruding at top and bottom, the RNAP is shown as the long purple spaghetti-like
molecular machinery, and the RNA transcript is shown in green emerging from the centre of the RNAP. (Images from
www.wikipedia.org)
Ingenious transcripts
The first step in using the complex information stored on the DNA molecule is to transcribe (copy) it onto a messenger RNA molecule (mRNA). Transcription is carried out by a molecular machine called RNA polymerase (RNAP), which attaches to the DNA strand at the START end of a gene and works its way, nucleotide by nucleotide, to the STOP end, producing an exact complementary copy of each nucleotide at each step in the chain. More than one RNAP can work on a particular gene at any one time, and a recent study by an international team working on the mechanics of transcription found that in a culture of human cells there were, on average, two RNAPs per gene.2

The rate of transcription often needs to vary (for example, in response to environmental stress or a fight-or-flight threat situation), and one might think that the best way to increase the rate would be either to increase the number of copying machines working on the gene, or to increase the speed at which the machines progress along the DNA. Surprisingly, cells use neither of these options.

In a normal metabolic state, RNAP copying seems stunningly inefficient. Only about 1 in 90 transcripts produces mature messenger RNA; the majority are aborted. Furthermore, the measured step-by-step transcription rate goes about twice as fast as previously measured for whole transcript production, because along the way there are quite long pauses.

There are three main phases in the transcription process. First, a region upstream of the transcription site called the promoter is activated. Second, the promoter acts upon an adjacent region to initiate the formation of the transcription machinery. Third, when the transcription machinery goes into action, it is said to be engaged in the copying process. About a third of the transcripts can be found in each of these three stages at any one time.3 The average residence time was 6 seconds at the promoter, 54 seconds in initiation and 517 seconds in engagement, and pause times ranged from 204 to 307 seconds. At any one time, about a quarter of all transcripts were paused. A single gene produced a mature RNA transcript every 31 to 63 seconds.

Because of the long pauses in the transcription process, there is a traffic pile-up in the transcription queue. The authors likened it to a Sunday driver going slowly along a country road, with cars lined up for miles behind. This may seem to be an awkward and inefficient way to proceed, but the authors suggest that there may be method in this apparent madness.

The speed at which the RNAP can copy is limited by its inherent enzymatic properties, so that leaves only the rate of initiation and the length of the pause time as control points for regulating the rate of mRNA production. Having a very high rate of initiation that is mostly abortive leaves the pause as the single determinant of copying rate. By having just one control parameter (the length of the pause time) the rate of transcript production can be varied almost instantaneously if needed. The authors end the report by saying: 'We therefore expect that future results with endogenous genes [i.e. in living organisms rather than in cell culture], as more sensitive microscopy methods are introduced, will reveal the myriad of controls by which genes are expressed [emphasis added].'2

It is not hard to imagine what at least some of these myriad controls might involve. For example, cells normally function at only a fraction of their potential rate and range of operation. This is often referred to as redundancy: having more structure and functional capacity than strictly needed. The so-called inefficiency of RNA transcription (only 1 transcript in 90 reaching maturity) may actually be a method of both repression and ready activation. Since the excess capacity in a redundant system is not normally used, it spends most of its time in a repressed state.

One of many methods of gene regulation involves small fragments of RNA that bind to the RNA transcript and thus interfere with and prevent its translation into protein. There are many different ways in which this can occur,4 and it is quite possible that the large proportion of aborted RNA strands may act as repressors. On the other hand, when accelerated transcription is required, the rapid rate of transcription initiation can quickly be turned to full use in being carried through to mature RNA production. There is more than enough capacity for acceleration in such a mechanism when compared with the pause-time control rate. The average production rate of mature RNA was 1 every 31 to 63 seconds per gene, while the pause time ranged from 204 to 307 seconds. By turning the pause time down to zero, RNA production could thus be accelerated by 3 to 10 times over the normal rate, well within the 90 to 1 ratio of aborted initiations.

So not only does DNA contain a myriad of information structures, it is also consulted by the cell for that information in a myriad of ways. It makes reading a book, or an article like this, pale into insignificance by comparison.
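The acceleration figure in the previous paragraph can be checked with a few lines of Python; note that interpreting the speed-up as (pause time) / (normal production time) is my reading of the argument, not a calculation given in the paper.

# Residence times reported in the study: pauses of 204-307 s, and one
# mature transcript every 31-63 s per gene under normal conditions.
pause_range = (204, 307)       # seconds
mature_rate_range = (31, 63)   # seconds per mature transcript

# If the pause is cut to zero, output speeds up roughly by pause/production.
low = pause_range[0] / mature_rate_range[1]    # 204/63 ~ 3.2x
high = pause_range[1] / mature_rate_range[0]   # 307/31 ~ 9.9x
print(f"Speed-up range: {low:.1f}x to {high:.1f}x")
# This reproduces the '3 to 10 times' figure, well within the 90:1
# reserve of initiated-but-aborted transcripts mentioned above.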
Smart thinking
Once the information on the DNA molecule has been transcribed onto an RNA molecule, a number of post-transcription processes occur, and then the transcript is translated into protein. Sounds easy? Read on!

The human brain is the most complex organ in our body. It is made up of about 100 billion nerve cells (neurons), each of which has numerous tree-like branches (dendrites). When we learn or remember something new, a new pathway for thinking is created by a unique pattern of dendrites joining up into a memory circuit. Problem: how do you prevent a highly branched dendritic network from joining up with itself and short-circuiting the memory or thought pattern?

Researchers approached this problem by studying a simpler system: the development of dendrites in the fruit fly Drosophila melanogaster, which has only about 200,000 neurons in its brain!5 What they discovered was beautifully elegant, surprisingly simple, yet mind-bogglingly complex in its execution.6 A particular cell-surface protein on the dendrites (called Dscam) is made subtly different in each dendrite so that each one can sense whether the nearby branch it is about to join up with is self or non-self. It is similar in concept to the complexes of proteins in flakes of human skin that allow a dog to track the scent of a particular individual human, sometimes a day or more after the person has passed by. Except in the dendrite case, it is thought that variations in just the one protein, Dscam, solve the problem.

How do you make one protein in a large number of different varieties? The answer is alternative splicing: the RNA transcript is cut and pasted together in slightly different ways to produce proteins that are almost exactly the same, but not quite (just a few amino acid differences). The Dscam gene can potentially generate more than 38,000 closely related trans-membrane proteins that are different enough to be reliably identifiable, but similar enough to function in exactly the same way. Trans-membrane proteins are folded up and down through the membrane, joining up both the outside and the inside of the cell several times. Dscam is thus able to be sensed by other dendrites from the outside, but can also be used as a signalling molecule to tell the internal workings of the cell whether to go ahead with the connection if the other branch is non-self, or to stop the connection if the other branch is part of itself. It doesn't matter which one of the 38,000 versions a particular dendrite has, as long as it is different from its near neighbours. Easy, once you know how!

But just how do you cut and paste a single RNA transcript into 38,000 different but functionally identical proteins? Well, the mechanics are complex and dynamically multi-functional,7 but not yet fully known. We do know, however, that the spliceosome, the machine that does the alternative splicing, is the largest machine in the cell. It consists of about 300 different proteins and several nucleic acids.8 It clearly takes a big machine to do a big job!
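The 38,000 figure follows from simple combinatorics over Dscam's clusters of mutually exclusive alternative exons. The cluster sizes used below (12, 48, 33 and 2) are the commonly cited values for Drosophila Dscam; they are not stated in the article itself.

# One exon is chosen from each mutually exclusive cluster, so the number
# of isoforms is the product of the cluster sizes.
from math import prod

cluster_sizes = [12, 48, 33, 2]   # exons 4, 6, 9 and 17 (commonly cited)
print(prod(cluster_sizes))        # 38016 -> "more than 38,000" variants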
Vary or perish
According to a new theory of how life works at the molecular level, called facilitated variation,9 all the mechanisms of variability, both within an individual organism and between parent and offspring, must be in place before life can function and persist in the face of environmental challenge and change. A purely mechanical kind of life, such as William Paley's watch found upon a heath, would become extinct the first time a malfunction occurred. But life as we now see it in its vast molecular detail is astonishingly variable. If the new theory is correct, and life without such ingenious built-in mechanisms of variation is not possible, then life itself becomes the greatest testament to creation that the world has ever seen.
More marvellous machinery: DNA scrunching
by Jonathan Sarfati
Some of the most startling discoveries in the last few decades have improved our understanding of the amazing complexity of the cell. This includes the world's tiniest machines.1 But not only are there machines, there is also their blueprint: the message molecule DNA.2 DNA's function is to store and transmit genetic information, but it can't work without many molecular machines. However, as the noted philosopher of science, Sir Karl Popper (1902-1994), commented:

'What makes the origin of life and of the genetic code a disturbing riddle is this: the genetic code is without any biological function unless it is translated; that is, unless it leads to the synthesis of the proteins whose structure is laid down by the code. But the machinery by which the cell (at least the non-primitive cell, which is the only one we know) translates the code consists of at least fifty macromolecular components which are themselves coded in the DNA. Thus the code can not be translated except by using certain products of its translation. This constitutes a baffling circle; a really vicious circle, it seems, for any attempt to form a model or theory of the genesis of the genetic code.
Figure 1. The scrunching model for RNAP active-centre translocation during abortive initial transcription.5,6
'Thus we may be faced with the possibility that the origin of life (like the origin of physics) becomes an impenetrable barrier to science, and a residue to all attempts to reduce biology to chemistry and physics.'3
Transcription tricks
Now Richard H. Ebright and his team from Rutgers University have discovered more intricacies in the process of transcription,4 where information from the right part of the DNA is copied onto a strand of messenger RNA (mRNA).5,6 Indeed, it is this mRNA that is translated into proteins in the complex machines known as ribosomes.7-9

DNA is double-stranded, so it must first be unwound so that the right strand can be copied onto mRNA, in a sense like a photographic negative. So the machine, called RNA polymerase (RNAP), first locks on to the start of the gene. Ebright and colleagues demonstrated what happens next with two complementary techniques, single-molecule fluorescence resonance energy transfer (FRET)6 and single-molecule DNA nanomanipulation,5 and were able to rule out other ideas of how it works. The next stage is that the anchored RNAP reels in the DNA: 'scrunching' (figure 1).10 This unwinds the double strand so the messenger RNA copy can be formed off one of the strands. Also, the unwinding stores energy, just like winding the rubber band of a rubber-band-powered airplane. And just like the toy plane, this energy is eventually released, with the machine then breaking free of its starting point and shooting forward. This also rewinds the unwound DNA ('unscrunching'), which escapes from the back of the machine.

Ebright states that this research should also enable them to develop antibacterial agents that target the bacterial version of this machine.4
Evolutionary conundrum
This discovery provides yet more support for Popper's bafflement. The instructions to build RNAP are themselves encoded in the DNA. But the DNA could not be transcribed into the mRNA without the elaborate machinery of RNAP. And this is also an example of irreducible complexity, because RNAP would not be able to perform its function unless every feature was working fully. There would be no use being able to dock onto the right spot of the gene and getting stuck there, or unwinding the DNA without being able to wind it back.

Furthermore, RNAP uses ATP as an energy source to achieve its feats. And ATP is made by another nano-machine, the ATPase complex, which is a rotary motor.1 This is also coded on the cell's DNA.

Natural selection is no answer, because this means differential reproduction, i.e. fully formed self-reproducing entities that can pass on the information that codes for their features. But until RNAP is fully formed, the coding would not work at all, being unable to get past first base (pun intended). Thus Darwinian evolution could not even have got off the starting block.
The genetic puppeteer
by David White
Back in 2005 a group of researchers published a landmark study on a question that has long puzzled geneticists: why aren't identical twins identical?1 Considering that they have the same DNA sequence in each of their cells, it seems a bit strange that they often possess a number of physical differences, such as different fingerprints and different susceptibilities to disease. This raises the question: if two people can have identical DNA sequences and yet be so different, is there more to our genetic blueprint than just DNA?

The answer is an emphatic yes! Everyone, it seems, has heard of DNA. But many people are unaware that the DNA code itself is governed by another code, known as the epigenetic code. In fact, so significant is this code that one Science writer said that genes (stretches of DNA) are little more than puppets, whereas the enzymes controlling this other code are the master puppeteers.2 (See 'Another layer of complexity' below.)

So what did the researchers find? Put simply, identical twins possess the same DNA code, but different epigenetic codes.3 They found that the epigenetic codes of identical twins, though indistinguishable during the early years of life, can diverge markedly as they age. Also, epigenetic differences were greater in identical twins that lived apart and had different lifestyles.
Solving medical mysteries
Mutations in DNA are often regarded as the chief culprits of disease, but epigenetic errors can have equally devastating effects. Biologists have known since the 1970s that DNA in cancer cells has an unusually high level of methylation, suggesting that crucial genes may be switched off.4 Tumour suppressor genes, as their name suggests, are required for normal development, and several instances have been found where these are switched off by methylation, and directly linked to cancer. Alternatively, researchers have noted that cancer genes (oncogenes) can be activated through de-methylation.5

However, adding or removing chemical groups can reverse epigenetic changes. So the race is now on to develop drugs that control key epigenetic enzymes. One such drug has already been approved in the US to treat pre-leukemia.6 And even common dietary components, such as green tea, can prevent or reverse the effects of cancer by inhibiting certain enzymes and reactivating switched-off genes.7

Cancer research, however, is just the tip of the iceberg. What causes schizophrenia and autism?8 Why are children born through IVF more likely to have epigenetic disorders?9 These are key questions epigenetic researchers hope to answer.
Epigenetics and early development
Scientists at Duke University recently managed to radically alter a group of mice without altering one letter of their DNA.10 Agouti mice (so-called because they have the agouti gene) are typically yellow, obese and highly susceptible to cancer and type II diabetes. However, this experiment produced mice that were brown, slender and didn't share their parents' vulnerability to disease, despite carrying the dominant agouti gene.11

But what makes this transformation so remarkable is the way it was achieved: simply by feeding the pregnant mothers a methyl-rich diet, which managed to switch off the harmful agouti gene! And not only can a mother's diet profoundly affect gene expression in her children, but also in her grandchildren, and possibly succeeding generations after that.12 As one writer quipped, you are what your grandmother ate.13

The fact that a mother's diet during pregnancy can impact the epigenome of her grandchildren may explain why human populations that suffer famine can continue to see health problems in well-nourished future generations.14 Moreover, some have suggested that the obesity epidemic in some western countries may partly be due to the lifestyles and nutrition of past generations.15
Cloning conundrums
When Dolly the Sheep was cloned over a decade ago, many believed that cloning animals would soon be easy and routine. However, progress has been frustratingly slow, as scientists have realized that the epigenetic code is far less amenable to cloning than the DNA code. Since the epigenetic profile of the DNA changes over time, the epigenome of a 6-year-old sheep is very different from that required just after fertilisation. Consequently, the egg has to erase the DNA's epigenetic profile and reprogram it appropriately. Dolly's creator commented, 'When you think about what we're asking the egg to do for us, in a way, I think we should still be surprised that cloning works at all.'16
Did epigenetics evolve?
The core histone proteins that package DNA are possessed by everything higher than bacteria on the evolutionary tree of life. Therefore, many believe that the histone code has been regulating gene expression for at least 2.7 billion years, when the first cells with an organized nucleus supposedly evolved.17 But just because higher life forms package their DNA in much the same way does not necessarily mean they all descended from a common ancestor. Even though an architect may design totally different structures, on closer inspection many of the concepts and materials used to build them would be similar. So why can't the architect of life do the same?18

The famous evolutionary biologist Theodosius Dobzhansky once claimed that 'Nothing in biology makes sense except in the light of evolution.' If this statement were true, you'd expect that unravelling the complexities of epigenetic control would rely strongly on the light of evolution. But the opposite seems to be the case. As one researcher candidly admitted: 'While the role of epigenetic inheritance in development is becoming a major subject of biological research, the study of its implications for evolution is lagging far behind'19 [emphasis mine].

So here we have yet another example where biological research has flourished without any need for evolutionary belief or speculation.
The big picture
Referring to the DNA code, Australian physicist Paul Davies has stated that 'The key question is how this ingenious system of coding emerged?'20 But even though he regards it as ingenious, and openly confesses that a naturalistic origin remains a mystery, the idea that this code and the complex messages it communicates originated with an intelligent source isn't even considered.

But now that we know DNA has another layer of coded instructions, and thus another layer of complexity, will more scientists attribute this to an intelligent designer? I seriously doubt it. DNA already has impressive credentials, such as holding the title of the world's most compact information storage system and possessing a remarkable ability to conduct electricity to identify strand breaks.21 So another layer of complexity isn't likely to budge someone already committed to explaining the universe purely in naturalistic terms. As world-renowned geneticist Richard Lewontin declared, 'we cannot allow a Divine Foot in the door.'22

So don't be surprised that, as future research sheds more light on epigenetics,23 many intelligent scientists overlook a conclusion that others consider obvious: the epigenetic code is the handiwork of a supremely intelligent programmer.
Another layer of complexity
The epigenetic code controls gene expression in two known major ways. The first of these is related to the way DNA is packaged. Inside the cell nucleus, DNA is wrapped around proteins called histones. These proteins can be bunched together tightly or loosely, depending on the chemical environment. This is significant because, to utilize the information in DNA, certain proteins need access so they can bind to it. If the histones are packaged too tightly, the DNA can't be accessed. It's a bit like arriving at the library after closing time: the information is still there, but inaccessible. So when histones are bunched together tightly, gene expression is prevented. But when they are loosely bunched, gene expression is allowed. This form of control is referred to as The Histone Code.

The second major way the epigenetic code controls gene expression is by attaching or detaching chemical groups to the DNA itself. Methyl groups are small chemical clusters (-CH3) that attach to DNA and switch off or prevent genes being expressed. This form of regulation is known as DNA methylation.

The Histone Code and DNA methylation provide an epigenetic code, which is a heritable physical and chemical code that controls gene expression. It's a bit like the stage manager at a concert: it choreographs when certain acts (genes) play their part in the concert of life. What's more, the epigenetic code is a dynamic code that changes throughout development and in response to the environment.
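As a purely pedagogical aid (not real biochemistry), the two-switch logic described in this box can be captured in a few lines of Python; both 'switches' and their names are illustrative assumptions.

# Toy model of the box above: a gene is readable only when its histones
# are loosely packed AND its DNA is not methylated. Illustrative only.
def is_expressed(histones_loose, gene_methylated):
    """Expression requires physical access and no methyl 'off switch'."""
    return histones_loose and not gene_methylated

for loose in (True, False):
    for methylated in (True, False):
        print(f"loose={loose!s:5} methylated={methylated!s:5} "
              f"expressed={is_expressed(loose, methylated)}")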

MUTATIONS
Can mutations create new information?
by Dr Robert W. Carter
In the same way that species are not static, neither are genomes. They change over time: sometimes randomly, sometimes in preplanned pathways, and sometimes according to instruction from pre-existing algorithms. Irrespective of the source, we tend to call these changes 'mutations'. Many evolutionists use the existence of mutation as evidence for long-term evolution, but the examples they cite fall far short of the requirements of their theory. Many creationists claim that mutations are not able to produce new information. Confusion about definitions abounds, including arguments about what constitutes a mutation and the definition of biological information. Evolution requires the existence of a process for the invention of new information from scratch. Yet, in a genome operating in at least four dimensions and packed with meta-information, potential changes are strongly proscribed. Can mutations produce new information? Yes, depending on what you mean by 'new' and 'information'. Can they account for the evolution of all life on Earth? No!
Mutations are known by the harm they cause, such as the one in the feather duster budgie, which results in deformed feathers in the budgerigar. However, some genetic changes seem to be programmed to happen, creating variety and assisting organisms in adapting. Is this new information?

The phrase 'Mutations cannot create new information' is almost a mantra among some creationists, yet I do not agree. Evolutionists have a number of responses to the idea, although most of them display faulty reasoning. Most evolutionary responses display a lack of understanding of the complexity of the genome. I will explain below why I believe the genome was designed to operate in at least four dimensions and why this causes difficulty for the evolutionary belief in the rise of new information.

Another issue, especially displayed among evolutionists (but creationists, including myself, are not immune), is a lack of understanding of the location of biological information. Most people tend to think DNA (the genome) is the storage place of information. While it is certainly the location of a tremendous amount of it, this gene-centered view ignores the information originally engineered into the first created organisms. The architecture of the cell, including the cell wall, nucleus, sub-cellular compartments and a myriad of molecular machines, did not originate from DNA, but was created separately and alongside DNA. Neither can exist without the other. Thus, a large, yet immeasurable, part of biological information resides in living organisms outside DNA. Taking an organism-centric view changes the debate dramatically.1 Yet, because the organism-centric view ultimately involves the creative genius of a designer, which we cannot begin to fathom, we immediately run into a wall of incalculability. For this reason, I will focus on one subset of biological information, genetic information, for the remainder of this article.

A third issue involves the fact that Darwin actually wrote about two different ideas, what I like to call his 'special' and 'general' theories of evolution (described below). Creationist reactions against evolution in general have led to some misunderstanding of the amounts of change we might expect in living organisms over time.

There are three basic ideas I would like to introduce in this discussion: 1) In the same way that the designer was not limited to creating static species, he was not limited to creating static genomes; 2) The designer may have placed intelligently designed genetic algorithms into the genomes of His created kinds that cause changes in genetic information or even create information de novo; and 3) The designer could have engineered information in compressed form into the genome that would be later decompressed and seen as new information.
What is a mutation?
A mutation is a change in the sequence of DNA. Mutations can be bad or (theoretically) good, but they all involve some
change in the sequence of letters (base pairs) in the genome. A single mutation can be as simple as a single letter swap
(e.g. C changed to T) or the insertion or deletion of a few letters. These simple mutations are in the majority. Mutations can
also be complex, like the deletion or duplication of an entire gene, or even a massive inversion of a millions-of-base-pairs
section of a chromosome arm.
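To make these categories concrete, here is a small illustrative Python sketch; the sequence and the positions chosen are invented, not taken from any real gene.

# Illustrative sketch of the mutation classes just described, applied to
# a toy DNA string. Sequence and positions are invented.
seq = "ATGGCCATTGAC"

point     = seq[:3] + "T" + seq[4:]             # letter swap: G -> T at index 3
insertion = seq[:6] + "AAG" + seq[6:]           # three letters inserted
deletion  = seq[:6] + seq[9:]                   # three letters deleted
inversion = seq[:3] + seq[3:9][::-1] + seq[9:]  # a segment reversed in place

for name, s in [("original", seq), ("point", point),
                ("insertion", insertion), ("deletion", deletion),
                ("inversion", inversion)]:
    print(f"{name:9} {s}")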
I do not believe all current human genetic differences are due to mutation. We have to make a distinction between mutation and designed variation. There are a huge number of single-letter differences between people, and these are mostly shared among all people groups.2 This indicates that much of the diversity found among people was designed: the first people carried a significant amount of diversity; this diversity was preserved in the population immediately after the Flood; and the post-Babel people groups were large enough to carry away most of the variation present at Babel. Most deletions (~90%), however, are not shared among the various human subpopulations.3 This indicates that a significant number of deletions have occurred in the human genome, but after Babel. Deletions are apparently not designed variation and are an example of rapid genomic decay. The same can be said of DNA insertions, but they are about 1/3 as common as same-size deletions. The ubiquity of large, unique deletions in the various human subpopulations worldwide is evidence for rapid erosion or corruption of genetic information through mutation.
What is a gene?
Technically, a gene is a piece of DNA that codes for a protein, but modern genetics has revealed that different parts of different genes are used in different combinations to produce proteins,4,5 so the definition is a bit up in the air at the moment.6 Most people, including scientists, use 'gene' to mean two different things: either 1) a piece of DNA that codes for a protein, or 2) a trait. This is an important distinction to keep in mind.
What is information?
This question, 'What is information?', is the real crux of the argument, yet the term information is difficult to define. When dealing with this subject, in most cases evolutionists use a statistical measure called Shannon information. This was a concept invented by the brilliant electronic engineer C.E. Shannon in the middle of the 20th century, who was trying to answer questions about how much data one could stuff into a radio wave or push through a wire. Despite common usage, Shannon's ideas of information have little to do with biological information.

Case in point: a beautiful cut-glass vase can be described quite easily. All one needs is a description of the material and the location of each edge and/or vertex in 3-D space. Yet a million-dollar vase can be smashed into a worthless pile of sand quite easily. If one wanted to recreate that pile of sand exactly, a tremendous amount of Shannon information would be required to describe the shape of each grain as well as the orientation and placement of grains within the pile. Which has more information, the pile of sand or the original vase into which a tremendous amount of purposeful design was placed? It depends on which definition of information one uses!

Figure 1. A biological system is defined as containing information when all the following five hierarchical levels of information are observed: statistics (here left off for simplicity), syntax, semantics, pragmatics and apobetics (from Gitt, ref. 9).

In other definitions of information, the pile of sand could be described quite easily with just a few statistical measures (e.g. average grain size, mass of sand, angle of repose). In this sense, any number of independent piles of sand can be, for all practical purposes, identical. This is the essence of Zemansky's use of information,7 yet this also has little to do with biological information, for biology is not easy to summarize, and any such attempts would produce meaningless results (e.g. a statistical measure of the average rate of a chemical reaction mediated by a certain enzyme says nothing about the origin of the information required to produce that enzyme).

A definition of biological information is not easy to come by, and this complicates the discussion of the power of mutation to create information. However, pioneers in this field, including Gitt8 and others, have discussed this issue at great length, so it is not necessary to reproduce all the arguments here. I will follow Gitt and define information as 'an encoded, symbolically represented message conveying expected action and intended purpose', and state that 'Information is always present when all the following five hierarchical levels are observed in a system: statistics, syntax, semantics, pragmatics and apobetics' (figure 1).9 While perhaps not appropriate for all types of biological information, I believe Gitt's definition can be used in a discussion of the main focus of this article: potential changes in genetic information.
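The vase-and-sand point, that Shannon's measure rewards disorder rather than meaning, can be illustrated in a few lines of Python; both strings below are invented examples.

# Empirical Shannon entropy: a disordered string scores *higher* than an
# ordered one, even though it carries no more meaning.
from collections import Counter
from math import log2

def shannon_bits_per_symbol(s):
    """Empirical Shannon entropy of a string, in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

ordered = "AAAAAAAABBBBBBBB"      # highly ordered, easy to describe
random_ish = "GATCCGTAAGCTTGCA"   # disordered, like the pile of sand

print(shannon_bits_per_symbol(ordered))     # 1.0 bit/symbol
print(shannon_bits_per_symbol(random_ish))  # 2.0 bits/symbol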
Can mutations create information?
Now we can address the main question: can mutations create new genetic information?

Figure 2. Schematic view of the central role that intelligently designed VIGEs may play in generating variation, adaptations and speciation events in the genomes of living things to induce DNA changes. Lower part: VIGEs may directly modulate the output of (morpho)genetic algorithms due to position effects. Upper part: VIGEs that are located on different chromosomes may be the result of speciation events, because their homologous sequences facilitate chromosomal translocations and other major karyotype rearrangements. (From Borger, ref. 22.)

1) The designer was not limited to creating static genomes, in the same way that He was not limited to creating fixed species.10 In the 1800s, Darwin pushed back against the popular idea that the designer created all species in their present form. Today, most creationists do not have trouble with non-fixity of species. Evolutionists constantly attempt to bring up the straw man argument that we believe in species stasis, even comparing us to people who believed in a flat earth, but both of these are historical myths.12 Most people throughout history believed the earth was round, and there were creationists, like Linnaeus13 and Blyth,14 prior to Darwin, who believed species could change (though not beyond a certain limit). CMI, in particular, has published articles and one DVD15 on the subject of how species change over time, and has an entire section on the topic on our Q&A page.16 Here is an important question: if species can change, what about their genomes?

Not only are species not fixed, but several articles have been published in this journal alone on the topic of non-static genomes, including recent articles by Alex Williams,17 Peter Borger,18 Jean Lightner,19 Evan Loo Shan,20 and others. It looks like the designer engineered into life the ability to change DNA. This occurs through homologous crossover, jumping genes (retrotransposons,21 ALUs, etc.), and other means (including the random DNA spelling errors generally called mutations). Borger has coined a phrase, variation-inducing genetic elements (VIGEs),22 to describe the intelligently designed genetic modules the designer may have put into the genomes of living things to induce DNA sequence changes (figure 2).
2) Creationists are making a strong case that genomes are not static and that the DNA sequence can change over time, but they are also stating that some of these changes are controlled by genetic algorithms built into the genomes themselves. In other words, not all changes are accidental, and a large proportion of genetic information is algorithmic. If a change occurs in DNA through an intelligently designed algorithm, even an algorithm designed to make random, but limited, changes, what do we call it? 'Mutation' originally simply meant 'change', but today it carries a lot of extra semantic baggage. Can we say that a mechanism designed to create diversity over time within a species can be a cause of 'mutation', with its connotation of unthinking randomness? In fact, there is considerable evidence that some mutations are repeatable23,24 (that is, not wholly random) (figure 3). This suggests the presence of some genomic factor designed to control mutation placement in at least some cases. If that something causes an intentional change in the DNA, do we call that a mutation or an intelligently engineered change in the DNA sequence? Of course, random mutations still occur, and these are mostly due to the error rate of the DNA replication and repair machinery.
Figure 3. There is considerable evidence that some mutations are not random, e.g. mutations in nucleotide sequences of exon X (ten) from GULO genes and pseudogenes from a number of species. In this illustration, positions with identical nucleotides in all organisms are not shown. The deletion mutation in position 97 (indicated by *) in this pseudogene is usually hailed as the ultimate evidence for the common descent shared between humans and the great apes. At first glance, this may appear to be a very strong case for common descent. However, after examining a large number of organisms, enabling the exclusion of non-random mutations, it becomes obvious that position 97 is in fact a hot spot for non-random mutations. (From Borger, ref. 24.)

3) There could be a considerable amount of information stored in the genome in compressed, hidden form. When this information is decompressed, deciphered, revealed, or unscrambled (call it what you will), this cannot be used as evidence for evolution, since the information was already stored in the genome.

Take the information the designer put into the first people. An evolutionist looks at any DNA difference as a result of mutation, but the designer could have put a significant amount of designed variation directly into the first people. There are millions of places in the human genome that vary from person to person, the majority of this variation is shared among all populations,25 and most of these variable positions have two common versions (A or G, T or C, etc.).26 The bulk of these should be places where the designer used perfectly acceptable alternate readings during the creation of man. These are not mutations!

The in-built alternatives he put into the first people are scrambled over time, and new traits (even many good ones not previously in existence) might arise during this process. How? One way is through a process called homologous recombination. People have two sets of chromosomes. Let's say a certain portion of one of Adam's copies of chromosome 1 reads GGGGGGGGGG and codes for a green-colored something-or-other. The other copy of chromosome 1 reads bbbbbbbbbb and codes for a blue something-or-other, but blue is recessive. Someone with one or two copies of the all-G chromosome will have a green something-or-other. Someone with two copies of the all-b chromosome will have a blue something-or-other. In the early population, about three quarters of the people will have the green version and about one quarter will have the blue version.

How, then, does this process produce new traits? Homologous chromosomes are recombined from one generation to the next through a process called crossing over. If a crossing-over event occurred in the middle of this sequence, we might get one that reads GGGGGbbbbb, which causes the production of a purple something-or-other. This is a brand new thing, a new trait never seen before. This is the result of a change in the DNA sequence, and we will not be able to tell the difference between this crossing-over event and a mutation until we can sequence the piece of DNA in question. Thus, new traits (sometimes incorrectly or colloquially referred to as 'genes') can arise through homologous recombination.27 But this is not mutation. Recombination is part of the intelligently designed genome and usually only reveals information that was previously packed into the genome by the Master Designer (it can also reveal new combinations of mutations and designed diversity). Also, recombination is not random,28,29 so there is a limit to the number of new traits that can come about in this way.
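The GGGGG/bbbbb example above is easy to reproduce in code; this sketch simply formalizes the single-crossover operation described in the text (the break point of 5 matches the 'middle of the sequence' wording).

# Single crossover between two homologous chromosomes, as in the example.
import random

def crossover(chrom_a, chrom_b, point):
    """Single crossover: swap everything after the given break point."""
    return (chrom_a[:point] + chrom_b[point:],
            chrom_b[:point] + chrom_a[point:])

green = "GGGGGGGGGG"   # dominant 'green' version from the text
blue  = "bbbbbbbbbb"   # recessive 'blue' version

child_1, child_2 = crossover(green, blue, point=5)
print(child_1)  # GGGGGbbbbb -> the new 'purple' combination
print(child_2)  # bbbbbGGGGG

# A random break point models variability in where crossover occurs:
print(crossover(green, blue, point=random.randint(1, 9))[0])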
Bad examples used by evolutionists
Adaptive immunity
I have a hard time calling something like adaptive immunity, which involves changes in the order of a certain set of genes to
create novel antibodies, mutation. Adaptive immunity is often brought up by the evolutionist as an example of new genes
(traits) being produced by mutation. Here we have an example of a mechanism that takes DNA modules and scrambles
those modules in complex ways in order to generate antibodies for antigens to which the organism has never been exposed.
This is a quintessential example of intelligent design. The DNA changes in adaptive immunity occur only in a controlled
manner among only a limited number of genes in a limited subset of cells that are only part of the immune system, and
these changes are not heritable. Thus, the argument for evolution falls flat on its face.30
Gene duplication
Gene duplication is often cited as a mechanism for evolutionary progress and as a means of generating new information. Here, a gene is duplicated (through several possible means), turned off via mutation, mutated over time, turned on again through a different mutation, and, voilà!, a new function has arisen.

Invariably, the people who use this as an argument never tell us the rate of duplication necessary, nor how many duplicated but silenced genes we would expect to see in a given genome, nor the needed rate of turning on and off, nor the likelihood of a new function arising in the silenced gene, nor how this new function will be integrated into the already complex genome of the organism, nor the rate at which the silenced 'junk' DNA would be expected to be lost at random (genetic drift) or through natural selection. These numbers are not friendly to evolutionary theory, and mathematical studies that have attempted to study the issue have run into a wall of improbability, even when attempting to model simple changes.31-33 This is akin to the mathematical difficulties Michael Behe discusses in his book The Edge of Evolution.34 In fact, gene deletions35 and loss-of-function mutations for useful genes are surprisingly common.36 Why would anyone expect a deactivated gene to stick around for a million years or more while an unlikely new function develops?

But the situation with gene duplication is even more complicated than this. The effect of a gene often depends on gene copy number. If an organism appears with extra copies of a certain gene, it may not be able to control the expression of that gene, and an imbalance will occur in its physiology, decreasing its fitness (e.g. trisomy causes abnormalities such as Down syndrome because of such gene dosage effects). Since copy number is a type of information, and since copy number variations are known to occur (even among people37), this is an example of a mutation that changes information. Notice I did not say 'adds' information, but 'changes'. Word duplication is usually frowned upon as unnecessary (ask any English teacher). Likewise, gene duplication is usually, though not always, bad. In the cases where it can occur without damaging the organism, one needs to ask if this is really an addition of information. Even better than that: is this the type of addition required by evolution? No, it is not.

Several creationists have written on this subject, including Lightner,38 Liu and Moran.39 Even if an example of a new function arising through gene duplication is discovered, the function of the new gene must necessarily be related to the function of the old, such as a new but similar catalysis end product of an enzyme. There is no reason to expect otherwise. New functions arising through duplication are not impossible, but they are vanishingly unlikely, and they become more unlikely with each degree of change required for the development of each new function.
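A toy calculation can illustrate the 'race' this section describes between a silenced duplicate acquiring required changes and being lost. Every rate below is an invented placeholder, and the model (each generation treated as an independent gain-vs-loss race) is mine, not taken from the cited studies.

# Toy probability sketch: a silenced duplicate must acquire k specific
# changes before it is deleted or drifts out of the population.
p_gain = 1e-8   # assumed per-generation chance of one required change
p_loss = 1e-4   # assumed per-generation chance the duplicate is lost

def p_new_function(k):
    """P(acquire all k required changes before loss), treating each
    step as a race between a 'gain' event and a 'loss' event."""
    step = p_gain / (p_gain + p_loss)  # chance the next event is a gain
    return step ** k

for k in [1, 2, 3]:
    print(f"k = {k}: P ~ {p_new_function(k):.1e}")
# Even this simplistic toy model collapses rapidly as k grows, which is
# the 'wall of improbability' the text refers to.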
Degraded information
There are abundant examples in the evolutionary literature where genetic degradation has been used in an attempt to show an increase in information over time. Examples include sickle-cell anemia (which confers a resistance to the malaria parasite by producing deformed hemoglobin molecules),40 aerobic citrate digestion by bacteria (which involves the loss of control of the normal anaerobic citrate digestion),41 and nylon digestion by bacteria (which involves a loss of substrate specificity in one enzyme contained on an extra-chromosomal plasmid).42 Since they all involve decay of prior information, none of these examples is satisfactory evidence for an increase in biological complexity over time.
Antibiotic resistance in bacteria
This has been dealt with so many times that I hesitate even to mention it. However, for some reason evolutionists keep bringing it up, almost ad nauseam. The interested reader can easily find many articles on the subject, with detailed creationist rebuttals.43
General gain-of-function mutations
Evolution requires gain-of-function (GOF) mutations, but evolutionists have had a difficult time coming up with good
examples.44 Adaptive immunity, homologous recombination, antibiotic resistance in bacteria, and sickle-cell anemia in
humans have all been used as examples, but, as detailed above, each of these examples fails to meet the requirements of
a true GOF. The general lack of examples, even theoretical examples, of something absolutely required by evolution is
strong testimony against the validity of evolutionary theory.
The real issue

The development of new functions is the only thing important for evolution. We are not talking about small functional
changes, but radical ones. Some organism had to learn how to convert sugars to energy. Another had to learn how to take
sunlight and turn it into sugars. Another had to learn how to take light and turn it into an interpretable image in the brain.
These are not simple things, but amazing processes that involve multiple steps, and functions that involve circular and/or
ultra-complex pathways will be selected away before they have a chance to develop into a working system. For example,
DNA with no function is ripe for deletion, and making proteins/enzymes that have no use until a complete pathway or nanomachine is available is a waste of precious cellular resources. Chicken-and-egg problems abound. What came first, the
molecular machine called ATP synthase or the protein and RNA manufacturing machines that rely on ATP to produce the
ATP synthase machine? The most basic processes upon which all life depends cannot be co-opted from pre-existing
systems. For evolution to work, they have to come up from scratch, they have to be carefully balanced and regulated with
respect to other processes, and they have to work before they will be kept.

Saying a gene can be copied and then used to prototype a new function is not what evolution requires, for this cannot account for radically new functionality. Thus, gene duplication cannot answer the most fundamental questions about evolutionary history. Likewise, none of the common modes of mutation (random letter changes, inversions, deletions, etc.) has the ability to do what evolution requires. Darwin pulled a bait and switch in his On the Origin of Species. He actually produced two separate theories: what I call his special and general theories of evolution, following Kerkut.45 Darwin went on at length to show how species change. This was the Special Theory of Evolution, and he was preceded by numerous others, including several creationists, with the same idea.
It took him a long time to get to the point, but he finally said,
"I can see no limit to the amount of change which may be effected in the long course of time by nature's power of selection."46
This was his General Theory of Evolution, and this is where he failed, for he provided no real mechanism for the changes and was ignorant of the underlying mechanisms that would later be revealed. To use a modern analogy, this would be akin to saying that small, random changes in a complex computer program can create radical new software modules without crashing the system.47 Thus, the "can mutations create new information" argument is really about the bridge between the special and general modes of evolution. Yes, mutations can occur within living species (kinds), but, no, those mutations cannot be used to explain how those species (kinds) came into existence in the first place. We are talking about two completely separate processes.
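As a rough illustration of the software analogy (a toy sketch of my own, not from the original article), the following Python snippet makes random single-character edits to a tiny working program and counts how many of the mutated copies even remain syntactically valid:

# A toy version of the software analogy above (illustrative only):
# make random one-character edits to a small working Python program
# and count how many mutated copies still parse.
import ast, random, string

source = "def area(width, height):\n    return width * height\n"
random.seed(1)

TRIALS = 1000
still_parses = 0
for _ in range(TRIALS):
    i = random.randrange(len(source))
    mutant = source[:i] + random.choice(string.printable) + source[i + 1:]
    try:
        ast.parse(mutant)          # does the mutated 'program' still compile?
        still_parses += 1
    except (SyntaxError, ValueError):
        pass

print(f"{still_parses} of {TRIALS} single-character mutants still parse")
# Most random edits break the program outright, and none of them adds
# a new software module; that is the point of the analogy.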
The meta-information challenge
We need to get past the naïve idea that we understand the genome because we know the sequence of a linear string of DNA. In fact, all we know is the first dimension out of at least four in which the genome operates (1: the one-dimensional, linear string of letters; 2: the two-dimensional interactions of one part of the string with another, directly or through RNA and protein proxies; 3: the three-dimensional spatial structure of the DNA within the nucleus; and 4: changes to the 1st, 2nd and 3rd dimensions over time). There is a tremendous amount of information packed into that genome that we have not figured out, including multiple simultaneously-overlapping codes.48 When discussing whether or not mutations can create new information, evolutionists routinely bring up an overly simplistic view of mutation and then claim to have solved the problem while waving their hand over the real issue: the antagonism between ultra-complexity and random mutation.

If a four-dimensional genome is hard enough to grasp, there is also a huge amount of meta-information in the genome. This is information about the information! This is the information that tells the cell how to maintain the information, how to fix it if it breaks, how to copy it, how to interpret what is there, how to use it, when to use it, and how to pass it on to the next generation. This is all coded in that linear string of letters, and life could not exist without it. In fact, life was designed from a top-down perspective, apparently with the meta-information coming first. According to a brilliant paper by Alex Williams,49 for life to exist, organisms require a hierarchy of:
perfectly pure, single-molecule-specific biochemistry,
specially structured molecules,
functionally integrated molecular machines,
comprehensively regulated, information-driven metabolic functions, and
inversely-causal meta-information.
None of these levels can be obtained through natural processes, none can be predicted from the level below, and each is
dependent on the level above. Meta-information is the top level of biological complexity and cannot be explained by
naturalistic mechanisms, yet life cannot exist without it.50 Putting all other arguments for and against the rise of biological information aside, where did the meta-information, upon which all life depends, come from?
Conclusions
Can mutation create new information? Yes, depending on what you mean by "information". Also, "new" does not necessarily imply "better" or even "good". When evolutionists cite examples of new information, they are almost invariably citing evidence of new traits, but these traits are caused by the corruption of existing information. Mutations can create new varieties of old genes, as can be seen in white-coated lab mice, tailless cats, and blue-eyed people. But damaging mutations cannot be used to vindicate molecules-to-people evolution. Breaking things does not lead to higher function (and presupposes a pre-existing function that can be broken). Also, not all new traits are caused by mutation! Some come about by unscrambling pre-existing information, some from decompressing packed information, and some from turning certain genes on and off.

In all the examples I have seen used to argue against creation, evolution is not helped. There are no known examples of the types of information-gaining mutations necessary for large-scale evolutionary processes. In fact, it looks like all examples of gain-of-function mutations, put in light of the long-term needs of upward evolutionary progress, are exceptions to what is needed, because every example I have seen involves something breaking.

We as creationists have the upper hand here. If we treat this properly, we can score a great victory in our long war for truth. The genome is not what evolution expected. The examples of mutations we have are not of the types required for evolution to advance. Evolution has to explain how the four-dimensional genome, with multiple overlapping codes and chock full of meta-information, came about. Can a mutation create new information? Perhaps, but only in the most limited sense. Can it create the kind of information needed to produce a genome? Absolutely not!
Acknowledgments
I must thank Don Batten, Jonathan Sarfati, and three anonymous reviewers for critical comments on this manuscript. This
was very much a team effort as the ideas were distilled through years of interaction among my creationist colleagues, many
of whose contributions were not mentioned due to lack of space, not due to lack of merit. I am afraid I did not do justice to
those who have gone before me.
Refuting Evolution 2

A sequel to Refuting Evolution that refutes the latest arguments to support evolution (as presented by PBS and Scientific
American).
by Jonathan Sarfati, Ph.D. with Michael Matthews
Argument: Some mutations are beneficial
Evolutionists say, "Mutations and other biological mechanisms have been observed to produce new features in organisms."
First published in Refuting Evolution 2, Chapter 5
When they begin to talk about mutations, evolutionists tacitly acknowledge that natural selection, by itself, cannot explain the rise of new genetic information. Somehow they have to explain the introduction of completely new genetic instructions for feathers and other wonders that never existed in simpler life forms. So they place their faith in mutations.

In the process of defending mutations as a mechanism for creating new genetic code, they attack a straw-man version of the creationist model, and they have no answer for the creationists' real scientific objections. Scientific American states this common straw-man position and their answer to it:

"10. Mutations are essential to evolution theory, but mutations can only eliminate traits. They cannot produce new features. On the contrary, biology has catalogued many traits produced by point mutations (changes at precise positions in an organism's DNA): bacterial resistance to antibiotics, for example. [SA 82]"

This is a serious misstatement of the creationist argument. The issue is not new traits, but new genetic information. In no known case is antibiotic resistance the result of new information. There are several ways that an information loss can confer resistance, as already discussed. We have also pointed out in various ways how new traits, even helpful, adaptive traits, can arise through loss of genetic information (which is to be expected from mutations).

"Mutations that arise in the homeobox (Hox) family of development-regulating genes in animals can also have complex effects. Hox genes direct where legs, wings, antennae, and body segments should grow. In fruit flies, for instance, the mutation called Antennapedia causes legs to sprout where antennae should grow. [SA 82]"

Once again, there is no new information! Rather, a mutation in the Hox gene (see next section) results in already-existing information being switched on in the wrong place.1 The Hox gene merely moved legs to the wrong place; it did not produce any of the information that actually constructs the legs, which in ants and bees include a wondrously complex mechanical and hydraulic mechanism that enables these insects to stick to surfaces.2

"These abnormal limbs are not functional, but their existence demonstrates that genetic mistakes can produce complex structures, which natural selection can then test for possible uses. [SA 82]"

Amazing: natural selection can test for "possible uses" of non-functional (i.e., useless!) limbs in the wrong place. Such deformities would be active hindrances to survival.
Gene switches: means of evolution?
William Bateson (1861-1926), who added the word "genetics" to our vocabulary in 1909, found that embryos sometimes grew body parts in the wrong place. From this he theorized that there are underlying controls of certain body parts, and other controls governing where they go.

Ed Lewis investigated and won a Nobel Prize in 1995 for discovering a small set of genes that affect different body parts (Hox, or homeobox, genes). They act like architects of the body. Mutations in these can cause dramatic changes. Many experiments have been performed on fruit flies (Drosophila), where poisons and radiation induced mutations. The problem is that they are always harmful. PBS 2 showed an extra pair of wings on a fly, but failed to mention that they were a hindrance to flying because there are no accompanying muscles. Both these flies would be eliminated by natural selection.

Walter Gehring of the University of Basel (Switzerland) replaced a gene needed for eye development in a fruit fly with the corresponding gene from a mouse. The fly still developed normal fly eyes, i.e., compound eyes rather than the lens/camera type. This gene in both insects and mammals is called eyeless, because absence of this gene means no eyes will form. However, there is obviously more to the differences between different animals. Eyeless is a switch; it turns on the genetic information needed for eyes. But evolution requires some way of generating the new information that is to be switched on. The information needed to build a compound eye is vastly different from that needed to build a lens/camera type of eye. By analogy, the same switch on an electric outlet/power socket can turn on a light or a laptop, but this hardly proves that a light evolved into a laptop!

All the same, the program says that eyeless is one of a small number of common genes used in the embryonic development of many animals. The program illustrated this with diagrams. Supposedly, all evolution needed to do was reshuffle packets of information into different combinations. But as shown, known mutations in these genes cause monstrosities, and the switches are very distinct from what is switched on or off. Also, the embryo develops into its basic body plan before these genes start switching; obviously they cannot be the cause of the plan before they are activated! But the common genes make perfect sense given the existence of a single designer.
Increased amounts of DNA don't mean increased function
Biologists have discovered a whole range of mechanisms that can cause radical changes in the amount of DNA possessed by an organism. Gene duplication, polyploidy, insertions, etc., do not help explain evolution, however. They represent an increase in the amount of DNA, but not an increase in the amount of functional genetic information; these mechanisms create nothing new. Macroevolution needs new genes (for making feathers on reptiles, for example), yet Scientific American completely misses this simple distinction:

"Moreover, molecular biology has discovered mechanisms for genetic change that go beyond point mutations, and these expand the ways in which new traits can appear. Functional modules within genes can be spliced together in novel ways. Whole genes can be accidentally duplicated in an organism's DNA, and the duplicates are free to mutate into genes for new, complex features. [SA 82]"

In plants, but not in animals (possibly with rare exceptions), the doubling of all the chromosomes may result in an individual which can no longer interbreed with the parent type; this is called polyploidy. Although this may technically be called a new species, because of the reproductive isolation, no new information has been produced, just repetitious doubling of existing information. If a malfunction in a printing press caused a book to be printed with every page doubled, it would not be more informative than the proper book. (Brave students of evolutionary professors might like to ask whether they would get extra marks for handing in two copies of
the same assignment.)
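The doubled-book point can be made quantitative with a compression test; compressed size is only a crude stand-in for information content, but it shows that a duplicated text specifies almost nothing new (a sketch of mine, not part of the original chapter):

# Duplicating a 'text' adds almost nothing to its compressed size,
# just as a doubled page adds nothing to a book. Compressed size is
# only a rough proxy for information content, but it makes the point.
import zlib

text = ("ATGGCGTACCTTGAAACTGGT" * 150).encode()   # stand-in genetic 'text'
one_copy = len(zlib.compress(text))
two_copies = len(zlib.compress(text + text))      # every 'page' doubled

print(f"compressed size of one copy : {one_copy} bytes")
print(f"compressed size of two copies: {two_copies} bytes")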
Duplication of a single chromosome is normally harmful, as in Down syndrome. Insertions are a very efficient way of completely destroying the functionality of existing genes. Biophysicist Dr Lee Spetner, in his book Not By Chance, analyzes examples of mutational changes that evolutionists have claimed to be increases in information, and shows that they are actually examples of loss of specificity, which means they involved loss of information (which is to be expected from information theory).

The evolutionists' gene duplication idea is that an existing gene may be doubled: one copy does its normal work while the other copy is redundant and non-expressed. The redundant copy is therefore free to mutate, free of the selection pressure that would otherwise get rid of it. However, such "neutral" mutations are powerless to produce new genuine information. Dawkins and others point out that natural selection is the only possible naturalistic explanation for the immense design in nature (not a good one, as Spetner and others have shown). Dawkins and others propose that random changes produce a new function, then this redundant gene somehow becomes expressed and is fine-tuned under the natural selective process.

This idea is just a lot of hand-waving. It relies on a chance copying event, genes somehow being switched off, randomly mutating to something approximating a new function, then being switched on again so natural selection can tune it. Furthermore, mutations do not occur only in the duplicated gene; they occur throughout the genome. Consequently, all the deleterious mutations in the rest of the genome have to be eliminated by the death of the unfit. Selective mutations in the
target duplicate gene are extremely rare; the gene might represent only 1 part in 30,000 of the genome of an animal. The larger the genome, the bigger the problem, because the larger the genome, the lower the mutation rate that the creature can sustain without error catastrophe; as a result, it takes even longer for any mutation to occur, let alone a desirable one, in the duplicated gene. There just has not been enough time for such a naturalistic process to account for the amount of genetic information that we see in living things.

Dawkins and others have recognized that the information space possible within just one gene is so huge that random changes without some guiding force could never come up with a new function. There could never be enough experiments (mutating generations of organisms) to find anything useful by such a process. Note that an average gene of 1,000 base pairs represents 4^1000 possibilities; that is about 10^602 (compare this with the number of atoms in the universe, estimated at only 10^80). If every atom in the universe represented an experiment every millisecond for the supposed 15 billion years of the universe, this could only try a maximum of about 10^100 of the possibilities for the gene. So such a neutral process cannot possibly find any sequence with specificity (usefulness), even allowing for the fact that more than just one sequence may be functional to some extent.
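The arithmetic in the preceding paragraph can be checked directly; the following sketch simply reproduces the quoted orders of magnitude (the atom count and age figures are the round numbers assumed in the text):

# Reproducing the order-of-magnitude figures quoted above.
ATOMS_IN_UNIVERSE = 10**80                       # assumed round figure
MS_IN_15_BILLION_YEARS = int(15e9 * 365.25 * 24 * 3600 * 1000)

sequence_space = 4**1000                         # 4 bases, 1,000 positions
print(f"4^1000 is about 10^{len(str(sequence_space)) - 1}")          # ~10^602

max_trials = ATOMS_IN_UNIVERSE * MS_IN_15_BILLION_YEARS   # one per atom per ms
print(f"maximum trials is about 10^{len(str(max_trials)) - 1}")      # ~10^100

untried = len(str(sequence_space // max_trials)) - 1
print(f"so at most about 1 in 10^{untried} sequences could ever be tried")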
So Dawkins and company have the same problem as the advocates of neutral selection theory. Increasing knowledge of the molecular basis of biological functions has exploded the known information space, so that mutations and natural selection, with or without gene duplication or any other known natural process, cannot account for the irreducibly complex nature of living systems.
Yet Scientific American has the impertinence to claim:
"Comparisons of the DNA from a wide variety of organisms indicate that this [duplication of genes] is how the globin family of blood proteins evolved over millions of years. [SA 82]"

This is about the vital red blood pigment hemoglobin that carries the oxygen. It has four polypeptide chains and iron. Evolutionists believe that this evolved from an oxygen-carrying iron-containing protein called myoglobin, found in muscles, which has only one polypeptide chain. However, there is no demonstration that gene duplication plus natural selection turned the one-chained myoglobin into the four-chained hemoglobin. Nor is there any adequate explanation of how the hypothetical intermediates would have had selective advantages.

In fact, the proposed evolution of hemoglobin is far more complicated than Scientific American implies, though it requires a little advanced biology to understand. The α- and β-globin chains are encoded on genes on different chromosomes, so they are expressed independently. This expression must be controlled precisely, otherwise various types of anemia called thalassemia result. Also, there is an essential protein called AHSP (alpha hemoglobin stabilizing protein) which, as the name implies, stabilizes the α-chain, and also brings it to the β-chain. Otherwise the α-chain would precipitate and damage the red blood cells.

AHSP is one of many examples of a class of proteins called chaperones, which govern the folding of other proteins.3 This is yet another problem for chemical evolutionary theories: how did the first proteins fold correctly without chaperones? And since chaperones themselves are complex proteins, how did they fold?4

Identifying information-increasing mutations may be a small part of the whole evolutionary discussion, but it is a critical weak link in the logical chain. PBS, Scientific American, and every other pro-evolution propaganda machine have failed to identify any evidence that might strengthen this straw link.
Beetle bloopers
Flightless insects on windswept islands
Even a defect can be an advantage sometimes
by Carl Wieland
A big obstacle for evolutionary belief is this: what mechanism could possibly have added all the extra information required to transform a one-celled creature progressively into pelicans, palm trees, and people? Natural selection alone cannot do it; selection involves getting rid of information. A group of creatures might become more adapted to the cold, for example, by the elimination of those which do not carry enough of the genetic information to make thick fur. But that does not explain the origin of the information to make thick fur.

For evolutionists there is only one game in town to explain the new information which their theory requires: mutations. These are accidental mistakes which occur as the genetic information (the coded set of instructions on the DNA which is the recipe, or blueprint, specifying the construction and operation of any creature) is copied from one generation to the next. Naturally, such scrambling of information will tend to be either harmful1 or, at best, neutral.2 However, evolutionists believe that occasionally a "good" mutation will occur which will be favored by selection and will allow that creature to progress along its evolutionary pathway to something completely different.
The wrong type of change
Are there "good" mutations? Evolutionists can point to a small handful of cases in which a mutation has helped a creature to survive better than those without it. Actually, they need to take a closer look. Such "good" mistakes are still the wrong types of changes to turn a fish into a philosopher; they are headed in precisely the wrong direction. Rather than adding information, they destroy information, or corrupt the way it can be expressed (not surprising, since they are random mistakes).

Take, for example, beetles losing their wings. A particular winged beetle type lives on large continental areas; the same beetle type on a small windy island has no wings. What happened is easy to imagine. Every now and then in beetle populations, there might be a mutational defect which prevents wings from forming. That is, the wing-making information is lost or scrambled in some way. The damaged gene (a gene is like a long sentence carrying one part of the total instructions recorded on the DNA) will then be passed to all that beetle's offspring, and to theirs, as it is copied over and over. All these descendant beetles will be wingless.

If a beetle with such a wingless defect is living on the Australian mainland, for example, it will have less chance to fly away from beetle-eaters, so it will be more likely to be eliminated by "survival of the fittest" before it can leave offspring. Such so-called natural selection can help to eliminate (or at least reduce the buildup of) such genetic mistakes.
Blown away
However, on the windy island, the beetles which can fly tend to get blown into the sea, so not having wings is an advantage. In time, the elimination of all the winged ones will ensure that only those of this new wingless variety survive, which have therefore been naturally selected. "There!" says the evolutionist. "A favorable mutation: evolution in action!" However, it fails to make his case, because though beneficial to survival, it is still a defect, a loss or corruption of information. This is the very opposite of what evolutionists need to demonstrate real evolution. To support belief in a process which has allegedly turned molecules into man would require mutations to add information. Showing that information-losing defects can give a survival advantage is irrelevant, as far as evidence for real evolution is concerned.
In short:

Evolutionary theory requires some mutations to go "uphill", to add information.
The mutations which we observe are generally neutral (they do not change the information, or the "meaning", in the code) or else they are informationally "downhill": defects which lose or corrupt information.
The rare "beneficial" mutations to which evolutionists cling all appear to be like this wingless beetle: downhill changes, losses of information which, though they may give a survival advantage, are headed in precisely the wrong direction for evolution.

All of our real-world experience, especially in the information age, would indicate that to rely on accidental copying mistakes to generate real information is the stuff of wishful thinking by true believers, not science.
The evolution train's a-comin'
(Sorry, a-goin' in the wrong direction)
by Carl Wieland
The atmosphere in the crowded lecture theatre foyer was alive with curious anticipation. It was the late 1970s, the heady early days of the creation movement in South Australia. The creation/evolution debate I was about to take part in, before some 40 science teachers and involving a prominent academic evolutionist, was a first for the region.

As the words of an animated conversation drifted across to me, I realized that my opponent-to-be was only a few metres to my left. A senior lecturer (associate professor in US terms) in population biology, he was holding forth to a small group of supporters, clearly unaware that his creationist protagonist was within earshot. "This is really frustrating," I heard him say. "I feel like an astronaut who's come back from the moon, seen the spherical Earth, and now he's supposed to debate with someone who tries to tell people it's flat. In my job we see evolution happening in front of our eyes."

Back then, before creationist arguments had had a good airing, it was understandable for him to think like that. Biology teachers could perhaps be excused for perpetuating such a naïve belief. They simply assumed that the easily observable genetic changes in many types of living populations were an obvious demonstration that evolution from microbes to man was fact. Just give it enough time, and voilà, such "micro" changes would accumulate, continually filtered and guided by natural selection. It seemed obvious and logical to expect these little steps to keep adding up so as to lead to the "macro" changes: the really big jumps, frog-to-prince, fish-to-philosopher, that type of thing. (As we will show later in this article, though, the very opposite is true.)
In that light, this biology lecturer's perplexed frustration can be readily understood, because he thought that he was often seeing a small bit of what would in time become a large chunk of change. We need to understand that most evolutionists, even today, still think this way. Which is, frankly, why the usual answers given by most creationists, when challenged on the subject of biological change, are inadequate.

For instance, a challenger might say, "Mosquitoes have evolved resistance to DDT in just 40 years. If that's not evolution happening before our eyes, what is?" Most people's responses focus on the amount of change. For instance, they will say, "Well, that's just variation within a kind." Or they reply, "But the mosquito's still a mosquito, isn't it? It hasn't turned into anything else." Both of these replies are true. But they are inadequate and seldom impress the challenger, who thinks, "Well, that's just a cop-out. Evolution takes millions of years, and here we have all this change in only 40 years. So, give it a million years and imagine what sort of change we'll have then!"

The analogy I have for many years used in explaining this in public lectures is that of a railway train. Imagine you see a
train pulling out of a station in, say, Miami, Florida, headed north to Chicago.1 The distance you see it travel is only a few hundred metres. But you can reasonably presume that, given enough time, it will end up in Chicago. You have seen sound evidence to indicate that it is in principle capable of making the whole journey; you don't need to see it make the whole trip. This is just how evolutionists see the little changes (often called "microevolution", but see the aside below) happening all around us. If a mosquito has changed a little in 40 years, you don't need to see it turning into an elephant; it has shown that it is in principle capable of making a similarly radical journey.

What we need to be aware of, and focus on in our answers, I tell audiences, is not the amount of change, but the type or direction of change. It is not just that the train has not gone far enough, but that it is headed in the wrong direction. The types of changes observed today, though they can be accommodated within an evolutionary framework, are, we will see, precisely and demonstrably the opposite of the ones which evolutionists really need in order to give some semblance of credibility to their belief system.

So while you may be seeing the train pulling out of the station at Miami, if the reality is that it is not heading north, up to Chicago, but is headed in the opposite direction, downwards to where the line (if there was one) would end in the deep blue ocean, then it will never get to Chicago. Time will not solve the problem, since it is in principle impossible to reach Chicago by train in that downward direction. Just so, once we can point out to people that the evolution train (really the train of biological change) is headed downwards, not upwards, then the more time there is, the less likely the whole evolution scenario becomes.
Before explaining what I mean by biological changes having a direction, I will share what triggered this article. It was a book review2 by well-known evolutionary biologist Dr Jerry Coyne, of the University of Chicago, who could not resist an opportunity to lash out at the creationists.3 Amazingly, Coyne uses the train journey analogy himself, reinforcing my point of how evolutionists see the issue. Though his intention is to mock creationists, he unwittingly provides a great opportunity to show how misplaced this common reasoning is.

The book he was reviewing4 uses familiar examples of rapid human-induced biological changes (antibiotic resistance in bacteria, pesticide resistance in insects, changes in growth rate of fish from overfishing) to get people to consent to the bigger idea of microbes-to-man evolution. Coyne deplores the fact that the book's examples will probably not change the minds of creationist advocates, who have already accepted such changes as "adaptation within a species" ("variation within a kind" would have been more precise). He says that creationists argue that such small changes cannot explain the evolution of new groups of plants and animals, and goes on to say: "This argument defies common sense. When, after a Christmas visit, we [presumably his family in Chicago, CW] watch grandma leave on the train to Miami, we assume that the rest of her journey will be an extrapolation of that quarter-mile." Thus, says Coyne, a creationist unwilling to extrapolate from micro- to macroevolution is being irrational.
Reason vs rhetoric
Why can one say with confidence, concerning the biological changes observable today (man-induced or otherwise), that the train is headed in the wrong direction? Why is it that when evolutionists use this "grandma's train" extrapolation argument, it can be turned around to make the opposite point? Because the real issue in biological change is all about what happens at the DNA level, which concerns information.5 The information carried on the DNA, the molecule of heredity, is like a recipe, a set of instructions for the manufacture of certain items.

Evolutionists teach that one-celled organisms6 (e.g. protozoa) have given rise to pelicans, pomegranates, people and ponies. In each case, the DNA "recipe" has had to undergo a massive net increase of information during the alleged millions of years. A one-celled organism does not have the instructions for how to manufacture eyes, ears, blood, skin, hooves, brains, etc. which ponies need. So for protozoa to have given rise to ponies, there would have to be some mechanism that gives rise to new information.

Evolutionists hail natural selection as if it were a creative goddess, but the reality (which they invariably concede when pressed) is that selection on its own always gets rid of information, never the opposite.7 To have a way to add information, the only game in town for evolution's true believers is genetic copying mistakes or accidents, i.e. random mutations (which can then be filtered by selection).8 However, the problem is that if mutations were capable of adding the information required, we should see hundreds of examples all around us, considering that there are many thousands of mutations happening continually. But whenever we study mutations, they invariably turn out to have lost or degraded the information. This is so even in those rare instances when the mutational defect gives a survival advantage, e.g. the loss of wings on beetles on windy islands.9
What's in a word? Micro vs macro
Many creationists will say, "We accept microevolution, but not macroevolution." As our main article points out, the "micro" changes (i.e. observed genetic variation) are not capable of accumulating into "macro" ones anyway. We suggest, however, that it would be wiser to avoid the use of the term "microevolution". To most people, it sounds as if you are conceding that there is a little bit of evolution going on, i.e. a little bit of the same process that, given enough time, will turn microbes into millipedes, magnolias and microbiologists. Thus, you will be seen as churlish or, as in Dr Coyne's inverted train example, as irrational for putting what they see as an arbitrary distinction between the "micro" and the "macro".

If the use of such potentially misleading terminology is unavoidable, always take the opportunity to point out that the changes often labelled "microevolution" cannot be the same process as the hypothetical goo-to-you belief. They are all information-losing processes, which thus depend on there being a store of information to begin with. As creatures diversify, gene pools become increasingly thinned out. The more organisms adapt to their surroundings by selection, i.e. the more specialized they become, the smaller the fraction they carry of the original storehouse of created information for their kind. Thus, there is less information available on which natural selection can act in the future to readapt the population should circumstances change. Less flexible, less adaptable populations are obviously heading closer to extinction, not evolving. We see that, just like with the train pulling out from Miami and headed south, if the sorts of changes we see today are extrapolated over time,
they lead to extinction, not onwards evolution.

Remember, evolutionary belief teaches that once upon a time, there were living things, but no lungs; lungs had not evolved yet, so there was no DNA information coding for lung manufacture. Somehow this program had to be written. New information had to arise that did not previously exist, anywhere. Later, there were lungs, but no feathers anywhere in the world, thus no genetic information for feathers. Real-world observation has overwhelmingly shown mutation to be totally unable to feed the required new information into the system.10 In fact, mutations overall hasten the downward trend by adding genetic load in the form of harmful mutations, of which we have all accumulated hundreds over the generations of our ancestry.11

In other words, populations can change and adapt because they have a lot of information (variety) in their DNA "recipe". But unless mutations can feed in new information, each time there is variation/adaptation, the total information decreases (as selection gets rid of the unadapted portions of the population, some information is lost in that population). Thus, given a fixed amount of information, the more adaptation we see, the less the potential for future adaptation. The train is definitely headed downhill, destined to fall off the jetty of extinction.

The supreme irony is that, of all the examples lauded by Dr Coyne as "evolution", whether antibiotic resistance12 or changes in fish growth rates, not one single one supports his train analogy, but rather the reverse. Not one involves a gain of information; all show the opposite, a net loss. Pondering all this, I feel a sense of the same sort of frustration (only in reverse) that my evolutionist opponent was airing all those years ago, which he could have paraphrased as: "Why can't they see it? It's obvious, isn't it?"
Who knows, perhaps somehow this article will get into Dr Coyne's hands. Maybe it will give him, and some other evolutionist apologists, food for thought the next time they put one of their grandmothers on a train.
Can any genetic information be gained from mutations?
Ancon sheep: just another loss mutation
by Jerry Bergman
Many examples of mutations that produce phenotypic changes are "loss mutations", in which the mutation causes the loss of a structure. Loss mutations that result in a non-functional protein or structure can be beneficial if the loss or malformation of the functional protein somehow benefits the organism (or, far more often, humans, as in the case of the loss of seeds in a fruit, producing a convenient seedless fruit). One of the first and most common examples of the latter was an alleged new breed (Huxley called it a race; others labeled it a species) that resulted when Massachusetts farmer Seth Wright noticed in 1791 that he had a very short-legged sheep in his flock.1,2 The story usually told is that, realizing the advantages of this trait to sheepherders, Wright bred a flock of the short-legged sheep, all of which were unable to jump over ordinary stone walls or fences.3,4 Called the Ancon or Otter breed, it was believed to reduce the need for tall fences, as well as reducing the number of lost sheep.1 In addition, the short legs limited the sheep's ability to run so that, as a result, they were less active, more gentle, and gained weight far more readily than other sheep breeds.5
Charles Darwin and Ancon sheep
Charles Darwin was evidently the first person to use the Ancon breed as evidence for evolution. He discussed them at least three times in his published books. In the Origin of Species, first published in 1859, Darwin speculated that some animal variations "have probably arisen suddenly, or by one step in one generation". One example that Darwin used was the turnspit dog. He then added that such one-step rapid evolution "also is known to have been the case with the Ancon sheep".6 In another work, Darwin claimed that "in a few instances new breeds have suddenly originated; thus, in 1791, a ram-lamb was born in Massachusetts, having short crooked legs and a long back, like a turnspit-dog. From this one lamb the otter or ancon semi-monstrous breed was raised; as these sheep could not leap over the fences, it was thought that they would be valuable … The sheep are remarkable from transmitting their character so truly that Colonel Humphreys never heard of but one questionable case of an ancon ram and ewe not producing ancon offspring. When they are crossed with other breeds the offspring, with rare exceptions, instead of being intermediate in character, perfectly resemble either parent; even one of the twins has resembled one parent and the second the other. Lastly, the ancons have been observed to keep together, separating themselves from the rest of the flock when put into enclosures with other sheep."7
Is the Ancon mutation beneficial?
Other evolutionists, such as Kenneth Miller, have also touted Ancon sheep as an example of evolutionary jumps. But this is deceptive, because the condition actually is pathological, known as achondroplasia (where cartilage fails to develop; from Greek a, not; chondros, cartilage; plassein, to mould or form: a form of dwarfism), or a related pathology,8 and would bring about the extinction of these creatures in a natural environment, rather than an advance through natural selection. The suggestion, by Miller, that the four-winged fly and the Ancon sheep present evolutionary advances was simply a deceptive ploy.9

Actually, the mutation has proved lethal in a protected environment as well. Gish concludes that Ancon sheep are deformed animals, specifically the product of a pathological condition called achondroplasia. In his presentation, Miller pointed out that these sheep have been bred by sheep breeders because they are short-legged and thus cannot jump fences, an advantage for those who raise sheep. What he did not say was that their condition is caused by a mutation which results in the failure of the cartilage between the joints to develop. There is thus little or no cartilage between the joints of their legs, causing them to be short. This abnormal condition would, of course, result in their rapid extinction in a natural environment and could never be considered an evolutionary advance.9

The Ancon mutation, in harmony with our general experience with mutations, was harmful to the sheep for many reasons. Achondroplasia is a type of genetic dwarfism characterized by slow limb growth relative to the rest of the skeleton.10 Many other abnormalities aside from short legs have been discovered as a result of Ancon sheep postmortems. These included looser leg joint articulations, abnormal spines and skulls, flabby subscapular muscles, and crooked forelegs bent inward, which caused the legs to appear like elbows while the sheep were walking.11,12 This prominent trait is the reason for the term Ancon (ancon is a transliteration of the Greek word for elbow, ἀγκών). The Ancon legs resemble the clubfoot condition and, in fact, the adults were clumsy cripples that could neither run nor jump like other sheep.13
Conclusion
A major problem for Darwinists is that the Ancon mutation (a Mendelian recessive), as is true of most other mutations, is a loss mutation. This type of mutation does not result in an information gain, as Darwinism requires, but an information loss (often of a complete structure or protein). A chief difficulty in arguing for macroevolution by mutations is the fact that most expressed mutations are either lethal or semi-lethal. Either they kill the organism outright, or they prove harmful, so that in the ordinary course of life they are eliminated. This includes both mutations in which the fertility rate is reduced and mutations that result in the loss of certain structures. And, as shown, even the rare "beneficial" mutations, as some might consider the Ancon to be, are the result of information loss. Therefore they are going in the opposite direction from what goo-to-you evolution requires.14
Bacteria evolving in the lab?
A poke in the eye for anti-evolutionists?
Some bacteria cultured in a laboratory have gained the ability to use citrate as an energy source. We have had lots of queries about this matter, so this is our weekend feedback to all those who have asked.

A New Scientist article proclaims:

"Lenski's experiment is also yet another poke in the eye for anti-evolutionists," notes Jerry Coyne, an evolutionary biologist at the University of Chicago. "The thing I like most is it says you can get these complex traits evolving by a combination of unlikely events," he says. "That's just what creationists say can't happen."1

The many comments posted on the New Scientist website show just how excited the atheists are about this report. They are positively gloating.
The context
In 1988, Richard Lenski, of Michigan State University, East Lansing, founded 12 cultures of E. coli and grew them in a laboratory, generation after generation, for twenty years (he deserves some marks for persistence!). The culture medium had a little glucose but lots more citrate, so once the microbes consumed the glucose, they would continue to grow only if they could evolve some way of using citrate. Lenski expected to see "evolution in action". This was an appropriate expectation for one who believes in evolution, because bacteria reproduce quickly and can have huge populations, as in this case. They can also sustain higher mutation rates than organisms with much larger genomes, like vertebrates such as us.2 All of this adds up, according to neo-Darwinism, to a near certainty of seeing lots of evolution happen in real time (instead of imagining it all happening in the unobservable past). With the short generation times, in 20 years this has amounted to some 44,000 generations, equivalent to about a million years of generations of a human population (but the evolutionary opportunities for humans would be far, far less, due to the small population numbers limiting the number of mutational possibilities; the much larger genome, which cannot sustain a similar mutation rate without error catastrophe, i.e. extinction; and sexual reproduction, which means that there is a 50% chance of failing to pass on a beneficial mutation).

As noted elsewhere (see Giving up on reality), Lenski seemed to have given up on evolution in the lab and resorted to computer modelling of evolution with a program called Avida (see the evaluation by Dr Royal Truman, Part 1 and Part 2, which are technical papers). Indeed, Lenski had good reason to abandon hope. He had calculated1 that all possible simple mutations must have occurred several times over, but without any addition of even a simple adaptive trait.
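For comparison, the generation-count arithmetic mentioned above works out as follows (the 25-year human generation time is an assumed round figure):

# Back-of-envelope comparison of Lenski's experiment with human timescales.
# The human generation time is an assumed round figure.
E_COLI_GENERATIONS = 44_000        # generations in ~20 years of the experiment
HUMAN_GENERATION_YEARS = 25

equivalent_years = E_COLI_GENERATIONS * HUMAN_GENERATION_YEARS
print(f"{E_COLI_GENERATIONS:,} generations of a human population "
      f"would span about {equivalent_years:,} years")   # ~1.1 million years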
Lenski and co-workers now claim that they have finally observed his hoped-for evolution in the lab.
The science: what did they find?
In a paper published in the Proceedings of the National Academy of Sciences, Lenski and co-workers describe how one of the 12 culture lines of their bacteria has developed the capacity for metabolizing citrate as an energy source under aerobic conditions.3 This happened by the 31,500th generation. Using frozen samples of bacteria from previous generations, they showed that something happened at about the 20,000th generation that paved the way for only this culture line to be able to change to citrate metabolism. They surmised, quite reasonably, that this could have been a mutation that paved the way for a further mutation that enabled citrate utilization.

This is close to what Michael Behe calls "the edge of evolution": the limit of what evolution (non-intelligent natural processes) can do. For example, an adaptive change needing one mutation might occur every so often just by chance. This is why the malaria parasite can adapt to most antimalarial drugs; but chloroquine resistance took much longer to develop, because two specific mutations needed to occur together in the one gene. Even this tiny change is beyond the reach of organisms like humans with much longer generation times.4 With bacteria, there might be a chance for even three coordinated mutations, but it is doubtful that Lenski's E. coli have achieved any more than two
mutations, so they have not even reached Behe's edge, let alone progressed on the path to elephants or crocodiles.
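A toy calculation shows why coordinated mutations mark such a boundary; the per-site rate below is an assumed round figure, used only to contrast one required mutation with two:

# Why two specific coordinated mutations are so much rarer than one.
# The per-replication rate is an assumed round figure for illustration.
POINT_MUTATION_RATE = 1e-9         # chance of one specific point mutation

one_needed = POINT_MUTATION_RATE           # useful on its own
two_needed = POINT_MUTATION_RATE ** 2      # neither helps until both occur

print(f"one specific mutation : about 1 in {1 / one_needed:.0e} replications")
print(f"two together          : about 1 in {1 / two_needed:.0e} replications")
# On these assumed numbers a double mutation needs ~10^18 replications,
# which is the scale of difficulty Behe's 'edge' refers to.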
Now the popularist treatments of this research (e.g. in New Scientist) give the impression that the E. coli developed the ability to metabolize citrate, whereas it supposedly could not do so before. However, this is clearly not the case, because the citric acid, tricarboxylic acid (TCA), or Krebs, cycle (all names for the same thing) generates and utilizes citrate in its normal oxidative metabolism of glucose and other carbohydrates.5 Furthermore, E. coli is normally capable of utilizing citrate as an energy source under anaerobic conditions, with a whole suite of genes involved in its fermentation. This includes a citrate transporter gene that codes for a transporter protein embedded in the cell wall that takes citrate into the cell.6 This suite of genes (operon) is normally only activated under anaerobic conditions.
So what happened? It is not yet clear from the published information, but a likely scenario is that mutations jammed the regulation of this operon so that the bacteria produce the citrate transporter regardless of the oxidative state of the bacterium's environment (that is, it is permanently switched on). This can be likened to having a light that switches on when the sun goes down: a sensor detects the lack of light and turns the light on. A fault in the sensor could result in the light being on all the time. That is the sort of change we are talking about.
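The broken-sensor scenario can be put in code; this is a minimal sketch of the hypothesized loss of regulation, not a model of the actual operon:

# A minimal sketch of the 'faulty sensor' scenario: a regulator that has
# lost its input check leaves the transporter permanently switched on.
def transporter_expressed(oxygen_present: bool, sensor_broken: bool) -> bool:
    if sensor_broken:
        return True                 # regulation lost: on regardless of O2
    return not oxygen_present       # normal: only under anaerobic conditions

for oxygen in (False, True):
    normal = transporter_expressed(oxygen, sensor_broken=False)
    mutant = transporter_expressed(oxygen, sensor_broken=True)
    print(f"oxygen={oxygen}: normal={normal}, mutant={mutant}")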
Another possibility is that an existing transporter gene, such as the one that normally takes up tartrate,3 which does not normally transport citrate, mutated such that it lost specificity and could then transport citrate into the cell. Such a loss of specificity is also an expected outcome of random mutations. A loss of specificity equals a loss of information, but evolution is supposed to account for the creation of new information: information that specifies the enzymes and cofactors in new biochemical pathways, how to make feathers and bone and nerves, or the components and assembly of complex motors such as ATP synthase, for example.

However, mutations are good at destroying things, not creating them. Sometimes destroying things can be helpful (adaptive),7 but that does not account for the creation of the staggering amount of information in the DNA of all living things. Behe (in The Edge of Evolution) likened the role of mutations in antibiotic resistance and pathogen resistance, for example, to trench warfare, whereby mutations destroy some of the functionality of the target or host to overcome susceptibility. It is like putting chewing gum in a mechanical watch; that is not the way the watch could have been created.
Much ado about nothing (again)
Behe is quite right; there is nothing here that is beyond the edge of evolution, which means it has no relevance to the origin
of enzymes and catalytic pathways that evolution is supposed to explain.8
A cat with four ears (not nine lives)
by Don Batten
A kitten born on a farm in Germany has an extra pair of ears!1 Lilly is perfectly healthy. The extra ears do not hear; only her usual ears do, and they hear quite normally. Animals, and even humans, can be born with extra toes, fingers, ears, and so on. Evolutionists sometimes use these extra bits, such as an extra pair of wings on a fruit fly, to claim that genetic information has increased spontaneously; that is, without an intelligent designer.2 As a propaganda tool, this does confuse some people.

However, no new genetic information is involved in making these extras. The cat already has the information for making ears, and the fruit fly already has information for making wings. An error during development has merely caused the information to activate twice instead of once! If you used a photocopier to make a copy of a document and it malfunctioned and printed two copies, you would not conclude that you had created new information by this accident. It is like this with the extra organs that sometimes appear on animals (and plants). There is no new information created, so it has nothing to do with evolution!

Lilly's extra ears most likely came about because of a defect during development, rather than from inheriting a defective gene from a parent. Chemicals in the environment can cause such defects. Everyone knows how thalidomide caused many abnormalities in human babies, usually loss of limbs due to suppression of the information for making limbs. Radiation can also cause defects, or they can be spontaneous. In Lilly, the defect resulted in the expression of the ear-design information twice, resulting in two pairs of ears. However, not all the information activated twice, because apparently only the external parts of the extra ears are present, since the extra ears do not hear.
New plant coloursis this new information?
One skeptic believes that he has found an example of new information arising by mutations and natural selection. Could he
be correct?
Question/statements from skeptic
Since I have some background in genetics and plant breeding, I can tell you that the entire field of plant breeding is based on new information arising from random mutations. New traits do appear, at the molecular and morphological level: new proteins, new pigments, etc. These are novelties.

Two parents with blue eyes will generally produce children with blue eyes, and likewise two plants with white flowers will generally produce new plants with white flowers, but sometimes a seedling with red or purple flowers turns up, not because a recessive allele has been revealed, but because a mutation has altered an existing pigment or biochemical pathway to produce something entirely new, that has never existed before. This is NEW INFORMATION.

As an example, there is nothing like an ear of corn in any other species of grass. It seems to be entirely unique in the plant kingdom. And yet there are three or four species of grass, very similar to corn in their overall growth, but with typical grass-like reproductive organs. The funny thing is, they will breed with corn to produce fully fertile offspring. It is clear that a combination of mutation and selection has produced in corn an unusual and entirely novel structure from a very typical grass; in other words, NEW INFORMATION.
Response by Don Batten, Ph.D.
The question comes from someone who does not understand the concept of information. The appearance of a new trait does not have to involve the addition of information via the DNA coding. In fact, as bioinformatics expert Dr Lee Spetner has demonstrated (in his book Not by Chance, Judaica Press), such an addition is so unlikely that it could never be the basis for the increased information needed for molecules-to-man evolution. Information content is measured not by the number of traits, but by what is called the "specified complexity" of a base sequence or protein amino acid sequence. A mutation, being a random change in the highly specified information contained in the nucleic acid base sequence, could almost never do anything but scramble the information; that is, reduce the information.

Now sometimes such a loss of information results in a new trait, for example, purple or red flowers where there were only blue ones before. This would have to be studied at the DNA base sequence level (or the amino acid sequence of the enzyme producing the pigment, or the pigment itself) to show this. For example, a blue pigment could be changed into a red or purple pigment by loss of a side-chain from the basic pigment molecule. Such a change would involve a loss of specified complexity and therefore a loss of information. Even an informationally neutral change could be responsible; this is not to be confused with Kimura's "neutral" mutation, which has nothing to do with the concept of information, only the effect on survival. Even a change of one amino acid in a protein, not altering information content, can alter energy levels in such a way as to change the visible absorption spectrum, e.g. by reducing the number of consecutive conjugated bonds. And a small change in pH can have a large effect on color (this effect was overlooked by a group of molecular biologists who managed to get the gene for the blue pigment in hydrangeas into a rose; the rose was not blue, although the pigment was manufactured, because the cell pH was not the same as a
hydrangeas!).Of the many hundreds of antibiotic, herbicide and insecticide resistance mechanisms studied at a biochemical
level, none involve addition of specified complexity in the DNA. Although some are new traits due to mutations, all involve
loss of information. An example is the loss of control over the production of an enzyme that happens to break down penicillin
in Staphylococcus aureus, resulting in the production of greatly increased amounts of the enzyme and thus conferring
resistance to penicillin. Another mode of antibiotic resistance due to mutation is decreased effectiveness of a membrane
transport protein so that the antibiotic is no longer taken up by the cell (but the normal function of the transporter is also
impaired and the bacterium is less fit to survive in the wild). However, much antibiotic resistance seems to be acquired by
the transfer of plasmids from other species of bacteria via conjugation, which of course does not explain the ultimate origin
of the information.What about the corn story? The questioner is probably correct about the species of grass and the origin of
corn. I have no problem with that. Creationists would say that the species that interbreed with corn (maize) are of the same
created kind (see Ligers and wholphins? What next?, Q&A: Speciation). However, until the biochemical/genetic basis of the

difference between maize and its wild relatives is determined, it cannot be said that the maize inflorescence is due to new
information. Loss of information in some base sequences responsible for early steps in inflorescence development could
easily account for such seemingly large differences.It must be noted (again) that creationists do not say that mutations are
always harmful, just that they are almost invariably a loss of information (i.e. specified complexity). Sometimes a loss of
information can be beneficial, but it is a loss of information. For example, loss of function of wings in the flightless cormorant
in the Galpagos Islands, which can now dive better than its flying cousins, or flightless beetles on a windswept island that
are better off because they are less likely to be blown into the seasee Beetle bloopers.Evolution needs swags of new
information, if a microbe really did change into a man over several billion years. The additional new information would take
nearly a thousand books of 500 pages each to print the sequence. Random changes cannot account for a page, or even a
sentence, of this, let alone accounting for all of it. The evolutionist has an incredible faith!
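The scale of that claim can be checked with simple arithmetic. The following sketch is a rough order-of-magnitude estimate only; the figures assumed (about 3 billion DNA 'letters' in a human-sized genome, about 3,000 printed characters per page) are generic round numbers, not taken from the article above.

    # Rough order-of-magnitude check on printing a genome as books.
    # Assumed round figures (not from the article): ~3e9 bases in a
    # human-sized genome, ~3,000 characters per page, 500 pages per book.
    genome_bases = 3_000_000_000      # one letter (A, C, G or T) per base
    chars_per_page = 3_000
    pages_per_book = 500

    pages = genome_bases / chars_per_page      # ~1,000,000 pages
    books = pages / pages_per_book             # ~2,000 books
    print(f"about {pages:,.0f} pages, or {books:,.0f} books of {pages_per_book} pages")

On these assumptions the whole sequence runs to a couple of thousand 500-page books, so the 'nearly a thousand books' quoted for the additional information alone is the right order of magnitude.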
Sickle-cell anemia does not prove evolution!
by Felix Konotey-Ahulu
[Author sidebar: Dr Felix Konotey-Ahulu, M.D. (Lond.), FRCP, DTMH, is a world authority on sickle-cell disease, with 25 years' experience as a physician, clinical geneticist and consultant physician in Ghana and subsequently in London. He is a visiting professor at Howard University College of Medicine in Washington, and honorary consultant to its Center for Sickle Cell Disease. He is the author of a major 643-page text, The Sickle Cell Disease Patient (Macmillan, 1991, ISBN 0333-39239-6; Tetteh-A'Domeno Co., Watford, UK, ISBN 0-9515442-2-5, 1996). This article is abstracted with Dr Konotey-Ahulu's permission from pages 106-108 of his book.]

Sixth-graders I have lectured on genetic counselling invariably pop some questions such as: 'Is it true that the sickle-cell phenomenon has established Darwinian evolution as fact?'

Behind the question, of course, lies the assumption that observing selection/adaptation involving a mutation (an inherited random change or defect) somehow implies that the more complicated forms seen today arose from simpler forms traced back ultimately to one-cell organisms. The answer is of course no. In the early 1950s I did a course on evolution and metaphysics at the feet of Professor J.Z. Young, perhaps the greatest evolutionist of recent years. Nothing has happened during the past 30 years in molecular biology in general, and the sickling phenomenon vis-à-vis malaria in particular, to have raised evolution from theory to established fact.

A scientific theory that tackles origins must of necessity find that it is out of its depth. In his 1985 book, The Limits of Science, Nobel laureate Sir Peter Medawar stated clearly that ultimate questions like origins go beyond the explanatory competence of science. For people to state that because sickle-cell trait children are resistant to cerebral malaria, therefore the whole neo-Darwinian scenario is fact, is like saying that my black skin's ability to withstand the tropical sun establishes an evolutionary process, starting perhaps with a big bang, leading to single-celled organisms, multicelled organisms, invertebrates, vertebrates, and on up to man. In any case, although malaria would certainly seem to be a major factor in maintaining a high frequency of such inherited red cell defects, the real situation concerning their distribution also involves a complex interaction of viral infections, dietary habits and social factors. There is not the random mating which Western population geneticists often assume, and African anthropogenetics cannot ignore the effect of marriage customs, and the social and religious habits that lead to in-breeding, for example.
Stuttering facts
The hypotheses of some anthropologists and theoretical geneticists detract from more serious work. While natural selection is a fact, attention to selection hypotheses is frequently overdone and other factors ignored. In one case it was found that out of 10 West African individuals carrying the sickle-cell trait and living in the UK, there were five who had the genetic predisposition to stuttering (which is known to be high in West Africa). The temptation to use natural-selection explanations to link the stuttering gene frequency to childhood malaria and sickling overlooks some important sociological facts.

First, most who have the stuttering tendency in Ghana in particular are left-handed, which is a major social disadvantage. Relatives heckle and bully the left-hander for doing household jobs with the 'wrong' hand. The already nervous left-hander is then forced in the schools to write with the right hand, evoking a constant state of agitation. The desire to excel against all odds is great, so left-handers are generally better at their studies. You will thus find a higher number of left-handers, and thus more stutterers, in any academic institution. If that institution is in northern Nigeria, for instance, 30 per cent of these stutterers will also be sicklers, simply because 30 per cent of the rest of the population are sicklers. This has nothing to do with malaria and selection, nor does the fact that of the five consultants in the haematology department of a famous African teaching hospital, 80 per cent had the sickle-cell trait.

I have indicated more than once that African anthropogenetics would be much better served by thinking along factual lines rather than theoretical evolutionary concepts. The same could certainly be said for science and medicine in general.
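The base-rate point in the paragraph above (an unrelated trait simply mirrors its population frequency in any subgroup) is easy to demonstrate numerically. The toy simulation below uses invented round figures, not survey data, purely to illustrate statistical independence.

    import random

    # Hypothetical illustration, not survey data: if 30% of a population
    # are sicklers and sickling is independent of stuttering, then ~30%
    # of any stuttering subgroup will be sicklers too.
    random.seed(1)
    population = 100_000
    p_sickler, p_stutterer = 0.30, 0.05

    people = [(random.random() < p_sickler, random.random() < p_stutterer)
              for _ in range(population)]
    stutterers = [is_sickler for is_sickler, is_stutterer in people if is_stutterer]
    print(f"sicklers among stutterers: {sum(stutterers) / len(stutterers):.1%}")

The printed proportion comes out close to 30%, matching the base rate, with no selective link between the two traits required.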
Sickle-cell disease
Sickle-cell anaemia is caused by an inherited defect in the instructions which code for the production of haemoglobin, the oxygen-carrying pigment in red blood cells. You will only develop the full-blown, serious disease if both of your parents have the defective gene. If you inherit the defect from only one parent, the healthy gene from the other one will largely enable you to escape the effects of this serious condition.

However, this means you are capable of transmitting the defective gene to your offspring, and it also happens that such carriers are less likely to develop malaria, which is often fatal. Being a carrier of sickle-cell disease without suffering it (heterozygosity is the technical term) is far more common in those areas of the world which are high-risk malaria areas, especially Africa. This is good evidence that natural selection plays a part in maintaining a higher frequency of this carrier state: if you are resistant to malaria, you are more likely to survive to pass on your genes.

Nevertheless, it is a defect, not an increase in complexity or an improvement in function, which is being selected for, and having more carriers in the population means that there will be more people suffering from this terrible disease. Demonstrating natural selection does not demonstrate that upward evolution is a fact, yet many schoolchildren are taught this as a 'proof' of evolution.
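The balance described above (carriers favoured because of malaria, sickle-cell homozygotes penalised) is the textbook case of heterozygote advantage, and its arithmetic can be sketched in a few lines. The fitness values below are illustrative assumptions, not measured figures; the sketch shows only that such selection holds the defective allele at a stable intermediate frequency.

    # Toy one-locus selection model with heterozygote advantage.
    # Standard population-genetics textbook maths; the fitness values
    # are illustrative assumptions, not measured data.
    w_AA, w_AS, w_SS = 0.85, 1.00, 0.20   # malaria losses vs sickle-cell anaemia

    q = 0.01                              # starting frequency of the sickle allele S
    for _ in range(200):                  # iterate selection over generations
        p = 1 - q
        w_bar = p*p*w_AA + 2*p*q*w_AS + q*q*w_SS   # mean fitness
        q = (p*q*w_AS + q*q*w_SS) / w_bar          # S frequency after selection

    s, t = 1 - w_AA, 1 - w_SS
    print(f"simulated equilibrium: {q:.3f}; predicted s/(s+t): {s/(s+t):.3f}")

Both numbers converge on the same stable frequency, so selection keeps carriers common in malarial regions indefinitely, exactly as described, yet nothing in the process adds new genetic information.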
Evolution of a new master race?


by Jonathan Sarfati

News reports are talking about a German baby 'superman'.1 He is only 4, but his thigh already has twice the muscle mass of most kids his age, and half the fat. He is strong enough to hold 3 kg (7 lb) weights outstretched, hard for many adults. His strength is the result of a genetic mutation (an inherited copying mistake in the DNA instructions) that gives him the extra amount of muscle. The child's mother was a muscular 24-year-old former sprinter who had one copy of this mutation, but paired with the normal gene. Her brother and three other close male relatives also seem to have this mutation, because they are very strong; one of them was a construction worker who can unload heavy curbstones by hand. The boy has two copies of the mutated gene, the other one almost certainly from his father.

Evolution proved?

Is this not an example of evolution in action, a way in which organisms can become bigger and better? Not at all. It pays to look closely at the nature of the mutation in this case. Normally, muscle growth is well controlled, and one controller is a protein called myostatin, or growth/differentiation factor 8 (GDF-8). This new mutation actually damages the gene that produces myostatin.2 As a result, the myostatin protein is not properly formed and the muscles grow in an uncontrolled fashion.

Evolution from goo to you via the zoo requires a huge number of mutations to increase information content, to build new structures and enzymes that didn't previously exist. If this were occurring, we would expect to find lots of information-increasing mutations. But instead we have yet to find even one heritable random mutation of this type.3 Rather, observed mutations are either neutral or information-losing. Notice that we don't deny there are beneficial mutations, i.e. mutations that benefit their possessors. But even these are going in the wrong direction to help turn bacteria into babies. The 'superbaby' mutation is just one in a long line of information-losing mutations that might count as beneficial. It obviously can't explain how muscles and myostatin evolved in the first place.

Indeed, readers of Creation magazine and our website might remember that we have written on exactly the same thing in animals.4 The Piedmontese and Belgian Blue cattle are extremely muscular precisely because a mutation results in the production of a defective myostatin protein.5 A similar mutation has produced muscular mice.

It is debatable whether the mutation is really beneficial in the long run. The Belgian Blue mutation has side effects, for example, reduced fertility. And doctors worry that this superboy might later suffer from health problems including heart trouble. It should not be surprising that a protein like myostatin is there for a reason, so destroying its effectiveness would cause problems.

But the main point is still that the mutation is losing information, not gaining it. So it is just like the wingless beetles on windswept islands: they can't fly up, so the wind doesn't sweep them into the sea, which is a good thing for their survival, but they have still lost the power of flight. This doesn't explain how wings or flight could have evolved in the first place.6
The 'werewolf' gene
by Thomas Awtry
The general public is bombarded with evolutionary propaganda through television, radio, magazines, and newspapers. A good example of such was found in a recent newspaper article by Associated Press science writer Malcolm Ritter, 'Scientists move closer to finding "werewolf" gene'.1

The article describes a very rare condition of excessive hairiness which is not known to afflict anybody outside a large family in rural Mexico. Some newborn males are born with hair all over their face, including the eyelids, with only the lips remaining hairless. Men with the condition have inch-long hair all over their face, and on some parts of their upper body. Women with the condition, while not being affected to the same degree as the men, have random patches of hair on the face and on the body.

According to the article, scientists have moved closer to finding the gene, which someday may help treat baldness and excessive hairiness. Readers are then informed that the werewolf condition 'comes from an aberrant gene that runs through their large family, perhaps after reawakening from a long sleep during human evolution'. The last phrase speaks volumes: this hairiness condition is said to be a leftover trait from our alleged ape-type ancestry.

As is true of several evolution-creation topics of discussion, the difference lies not in the data under consideration, but rather in the interpretation of the data. While it is true that an aberrant gene now runs in the one family, it does not therefore mean that it is a 'reawakening' effect of 'human evolution'. Another interpretation which is just as viable is that it is simply a mutated gene. The mutation changed the amount of a physical feature (hairiness) which has existed in humans from their initial creation. This may be from damage to a gene controlling hair growth or its distribution.

And as creationists have been saying for years about all mutations, this random mutation is harmful, if anything. The article reveals how much the individuals with the trait suffer, and how most become outcasts of society, often becoming members of sideshows for a travelling circus. It is possible then that this condition might become even more rare as selection works against these individuals. Of course, this supports the creationists' explanation, but not the evolutionary explanation. Remember, Darwinian evolution requires mutations to be favourable and cumulative, increasing viability.

Another physical phenomenon which some evolutionists have claimed in the past as evidence of evolutionary ancestry is when a child is born with a 'tail'. The 'tail', however, is really not a tail or part of the coccyx, but is rather a fatty tumour.2 The lesson to be learned, though, is that no one now claims it to be due to a 'reawakening' of human evolution.

We can learn several lessons from the article:
Sometimes there is more than one interpretation of data.
Evidence suggests mutations to be random and detrimental. And because
Evolution must have favourable, 'uphill' mutations, and many of them, therefore
Evolution, like werewolves, is a myth.
Evolution in a Petri dish?


Have scientists demonstrated evolution in action?
by Don Batten
So trumpeted Scientific American, December 2007, p. 17. What are they talking about? Some Spanish researchers managed to get some nematodes (tiny roundworms) commonly used in lab experiments to grow in the presence of bacteria that normally kill them. Apparently a mutant form of the nematode was adapted to living with the bacteria. This is supposed to demonstrate 'evolution'.

Have they demonstrated that mutations can create new, complex features, the sorts of changes needed to change worms into fish, for example? The biochemical basis of this nematode adaptation has not yet been elucidated, but the article reveals that:

'The difference in the worms' movements shows that the ability to survive bacteria does not come without cost. The mutated individuals breathe poorly (they consume 30 percent less oxygen) and they are not as fast as their wild cousins in competing for food. From a Darwinian perspective, Martinez says, the phenomenon represents a second-class selection that resembles the utility of the sickle cell mutation against malaria.'

So we are looking at broken, defective worms, not ones that are on an upward path to greater complexity. These worms are not going to become humans! Sometimes it is helpful to be broken, as in the beetles with defective wings that could not fly on a windy island (and thus avoided being blown into the sea); see Beetle bloopers. Note the comparison with sickle cell trait. This is another example of a broken gene that happens to be adaptive under certain circumstances, according to one of the world's leading experts on the disease, Dr Felix Konotey-Ahulu. That's what mutations do: they break existing genes, not create brand new ones.

Mutations do not create the complex, integrated DNA code needed to explain how some worms changed into fish or ultimately how microbes changed into mankind. Nor do natural processes explain the origin of incredibly complex, essential cellular components such as the ATP synthase complex that all living things need to have to live (see Fantastic voyage), or the DNA code that is common to all living things for that matter (see DNA: marvellous messages or mostly mess?).

In a sidebar note, the editors claim that this observation provides evolution with experimental evidence, 'evidence that creationism does not have'. Now creationists accept that mutations occur, that natural selection occurs and that adaptation occurs (see the many articles on this web site under Mutations Q&A and Natural Selection Q&A), so whoever wrote this either has no idea of what he/she is arguing against, or is being deceptive.

They also claim that this nematode is on the verge of speciation. But creationists don't deny that speciation occurs either; yet another straw-man argument, as speciation can happen without natural processes having to invent new complex features. For example, different species of cattle (Bos spp.) all have the same suite of organs; they only vary in size, colour, etc. See: Speciation Q&A.

This article is yet another example of the deceptive bait-and-switch trick, or equivocation, that is so favoured by evolutionary propagandists like the editor of SA, John Rennie. My colleague, Jonathan Sarfati, countered Mr Rennie on a previous occasion when Rennie launched into an incredibly ill-informed scathing attack on creationists. See 15 ways to refute materialistic bigotry.

If by 'evolution' Mr Rennie and Co. mean that organisms can adapt by mutations and natural selection, then we creationist biologists readily accept this, so they have not provided evidence for anything that contradicts creationism. But if they mean that they have demonstrated that worms can change into fish (which is the grand claim of Evolution / Darwinism), then what they have shown has nothing to do with proving this.

Interestingly, the editorial sidebar quotes a Spanish scientist as saying, 'To my knowledge, it is the first time that an evolutionary law has been demonstrated in a complex creature.' That's an interesting admission, considering that evolutionists have been telling us for decades that evolution is a 'fact'. But the article itself refutes this claim, pointing out that the adaptation in the nematode resembles sickle cell in humans. In their endeavour to hype up the findings to try to score a point against creationism, they contradict themselves.

Warfarin resistance in rats is another example of a mutation breaking an existing gene in a complex creature where it is adaptive (see Rats! Another case of sickle cell anemia). The CCR5-delta32 mutation in humans is another example of an adaptive mutation (conferring resistance to HIV), but again, it is a broken gene. There are many more examples of mutations breaking or wrecking existing genes where they simply cause disease, with no benefits whatsoever. Indeed, over a thousand human diseases are known to be caused by mutations. The mutation-natural-selection train is going in the wrong direction.

So, Mr Rennie and Co. of Scientific American, you have not demonstrated Evolution at all, only 'Devolution' that happens to be adaptive, which is no threat to creationists. When you find a mutation, or a series of mutations, that occurred naturally and creates one of the hundreds of enzyme complexes upon which life depends, you might perhaps have a valid point to make. But I won't hold my breath while I wait for an example; I might end up breathless like the nematodes!
Breathtaking new frog surprise
by Carl Wieland
The discovery that this rare frog, living in Borneo, has no lungs, is being used to promote evolution. But as the article shows, this is off the mark.

Making headlines is an amazing discovery about a unique species of frog. The amphibian is described as 'bizarre', a fitting epithet given that it is missing lungs altogether (the wordplay in the title is intentional).1 The 5 cm (2 inch) long Barbourula kalimantanensis lives in the jungles of Borneo, and apparently gets all the oxygen it needs through its skin. It was first known about some 30 years ago, but being very rare, only one other specimen has been discovered to date, and neither was dissected.

Some of the media descriptions associate the frog with the word 'primitive', a term loaded with evolutionary connotations. But is it somehow an intermediate, a form closer to some ancestor that had not evolved lungs yet? Not at all; even most informed evolutionists would agree with the conclusion that it is almost certainly descended from a frog that once had lungs. We can connect the dots to lead to a very likely scenario of how this species arose.

Frogs with lungs already get a significant amount of their oxygen through the skin; this is absorbed directly from the water.

Even though its habitat is close to the equator, the streams that B. kalimantanensis lives in are very fast-flowing and extremely cold. Such waters contain large amounts of dissolved oxygen.

There is a very similar species of frog in the Philippines that does have lungs, and the two may well be descendants of the same created kind.

If one such frog with lungs happened to have a mutation causing lungs to disappear or dramatically reduce (an information-losing change, hence not 'evolutionary' in the microbes-to-man sense), then the descendants of such a frog exhibiting this loss/defect would be well able to survive in their oxygen-rich home waters, using the existing mechanism of skin-breathing. But in addition, they would actually be at a selective advantage compared to their lung-equipped co-members of the population: in such fast-moving streams, those with lungs (which make the creature float) would be more likely to be carried away from the breeding population. Also, without lungs, the body would flatten more readily, increasing the available surface area for skin-breathing still more.

Thus, just as in the case of the blind cave fish that have devolved from fish with eyes (see this article), natural selection would soon ensure that the entire population was lungless. Such a downhill change would only need a few generations, especially in a small isolated population. No vast time spans are required.

Although such fascinating features of the natural world are always trumpeted in evolution-supporting terms, they actually make far more sense in a framework of history: vast amounts of information were created in separate biological populations at the beginning, and the direction of biological change since creation has been overwhelmingly downhill, the very opposite of the grand-scale claims of evolution.
CAN MUTATION BE THE MECHANISM FOR EVOLUTION?

Hox (homeobox) Genes: Evolution's Saviour?


by Don Batten
Some evolutionists hailed homeobox or 'hox' genes as the saviour of evolution soon after they were discovered. They seemed to fit into the Gouldian mode of evolution (punctuated equilibrium) because a small mutation in a hox gene could have profound effects on an organism. However, further research has not borne out the evolutionists' hopes. Dr Christian Schwabe, the non-creationist sceptic of Darwinian evolution from the Medical University of South Carolina (Dept of Biochemistry and Molecular Biology), wrote:

'Control genes like homeotic genes may be the target of mutations that would conceivably change phenotypes, but one must remember that, the more central one makes changes in a complex system, the more severe the peripheral consequences become. Homeotic changes induced in Drosophila genes have led only to monstrosities, and most experimenters do not expect to see a bee arise from their Drosophila constructs.' (Mini Review: Schwabe, C., 1994. Theoretical limitations of molecular phylogenetics and the evolution of relaxins. Comp. Biochem. Physiol. 107B:167-177.)

Research in the six years since Schwabe wrote this has only borne out his statement. Changes to homeotic genes cause monstrosities (two heads, a leg where an eye should be, etc.); they do not change an amphibian into a reptile, for example. And the mutations do not add any information; they just cause existing information to be mis-directed, to produce a fruit-fly leg on the fruit-fly head instead of on the correct body segment, for example.

Evolutionists, of course, use the ubiquity of hox genes in their argument for common ancestry ('Look, all these creatures share these genes, so all creatures must have had a common ancestor'). However, commonality of such features is to be expected with their origin from the same (supremely) intelligent designer. All such homology arguments are only arguments for evolution when one excludes, a priori, origins by design. Indeed, many of the patterns we see do not fit common ancestry: for example, the discontinuity of distribution of hemoglobin-like proteins, which are found in a few bacteria, molluscs, insects, and vertebrates. One could also note features such as vivipary, thermoregulation (some fish and mammals), eye designs, etc. For more detail, see The Biotic Message.

[Sidebar: What is the REAL message of the patterns of life? The Biotic Message, by Walter ReMine. This book scientifically fights evolutionists on their terms, on their issues, using their testimony, and their ground rules. It dismantles many evolutionary illusions, and offers a new creation theory of biology: life was designed to shout that it had only ONE designer, and to resist all other explanations. 538 pages, hardbound.]
Gain-of-function mutations: at a loss to explain molecules-to-man evolution
by Dr Jean Lightner
Evolutionists point to mutations as providing the raw material necessary for the onward, upward change they believe has occurred since life began. Mutations which affect an organism are often categorized into two basic types: loss-of-function mutations and gain-of-function mutations.1 A loss-of-function mutation is a mutation that results in reduced or abolished protein function.2 A gain-of-function mutation has been defined as a mutation that confers new or enhanced activity on a protein.3 A good understanding of these two types of mutations can be gained by examining mutations in the gene coding for a receptor located on thyroid cells, and how these changes affect the control of thyroid hormone levels in the body.

A well designed pathway

The thyroid hormones, triiodothyronine (T3) and thyroxine (T4), affect essentially every tissue in the body. These hormones are necessary for maintaining an appropriate basal metabolic rate and are produced by the thyroid gland located in the front of the neck (figure 1). Thyroid stimulating hormone (TSH), a glycoprotein secreted by the pituitary gland in the brain, binds with the TSH receptor on the surface of the thyroid follicular cells. This initiates a series of biochemical events that result in an increase in circulating thyroid hormone.

The rise in blood thyroid hormone concentration is detected by the pituitary, which responds by decreasing the release of TSH. With this negative feedback loop the body is able to carefully control circulating levels of the thyroid hormones.4 Several diseases exist where the body is unable to properly regulate thyroid hormone levels in the blood. Hypothyroidism occurs when there are insufficient levels of thyroid hormone, and is often associated with signs of intolerance to cold, lethargy, weight gain and cool, dry skin. Hyperthyroidism is caused by excessive thyroid hormone, and signs may include rapid heart rate, intolerance of heat, weight loss and fatigue.5
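The negative feedback loop just described can be caricatured in a few lines of code. The following is a deliberately over-simplified toy model, with invented rate constants rather than physiological values; it also previews the point made in the gain-of-function section below, by letting the receptor be stuck 'on' regardless of TSH.

    # Toy discrete-time model of the TSH -> thyroid hormone feedback loop.
    # All constants are invented for illustration; this is a caricature,
    # not a physiological model.
    def thyroid_level(constitutive=False, steps=100):
        T4 = 0.0
        for _ in range(steps):
            TSH = max(0.0, 1.0 - T4)              # pituitary cuts TSH as T4 rises
            drive = 1.0 if constitutive else TSH  # a mutant receptor ignores TSH
            T4 += 0.2 * drive - 0.1 * T4          # secretion minus clearance
        return T4

    print(f"normal receptor:       T4 settles near {thyroid_level():.2f}")
    print(f"constitutively active: T4 climbs to   {thyroid_level(True):.2f}")

With the feedback intact, the hormone level settles at a modest set-point; with the receptor permanently switched on, the pituitary's falling TSH is simply ignored and the level ends up several times higher, mirroring the disease states discussed below.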
Loss-of-function mutations
One important part of this regulatory pathway is the TSH receptor found on the surface of cells in the thyroid gland. Not surprisingly, mutations within the gene coding for the TSH receptor can create problems for the body in controlling thyroid hormone levels. A number of loss-of-function mutations have been identified which impair the receptor to different degrees, and thus result in varying degrees of hypothyroidism.6

Like most loss-of-function mutations, these mutations are generally recessive. This means that clinical signs are typically observed only when both genes for the receptor (one being inherited from each parent) carry such a mutation. There are around 20 different loss-of-function mutations in this gene that have been described in the literature.7
A gain of what?

[Figure 1. A highly simplified schematic showing the negative feedback loop used by the body to maintain appropriate thyroid hormone (T3 + T4) levels. Thyroid stimulating hormone (TSH) is secreted by the pituitary and binds to its receptor on the follicular cells of the thyroid gland. This initiates a series of biochemical events that result in the release of more T3 + T4. Elevated T3 + T4 levels are detected by the pituitary, resulting in a decrease of TSH secretion. This is an important part of controlling hormone levels and maintaining homeostasis. Many other factors (not shown) also play a role in the regulation of T3 + T4.]
Other mutations within the gene for this receptor result in a gain-of-function.8 In this case the receptor is constitutively active, or switched on even when TSH is not present. Many of these mutations have been identified, yet most of these are not heritable (germ-line) but somatic mutations.9 Most commonly, activating mutations are found in thyroid nodules (over two dozen different mutations, some identified in more than one individual), which develop from long-term iodine deficiency or exposure to goitrogens.10

Activating mutations have also been described in cases of sporadic hyperthyroidism and thyroid cancer (carcinoma; see figure 2).7 At this point it should be apparent that the 'enhanced activity' mentioned in the definition has an unwarranted positive connotation. Activating mutations may result in more product, but they don't result in something more valuable.11 Instead, there is a loss of control of a pre-existing biochemical pathway. Living things depend on being able to maintain homeostasis, or a balance. Although the pituitary responds to the high thyroid hormone levels by decreasing TSH release, it has no effect on the constitutively active receptor. The excess thyroid hormone doesn't enhance anything; it causes disease.

[Figure 2. In addition to controlling the output of thyroid hormone, the TSH receptor is part of a second biochemical pathway that regulates the growth and development of cells in the thyroid gland. Mutations in the genes coding for such proteins can often lead to the development of cancer. After: National Cancer Institute (US).]
A new, but not improved, function
One gain-of-function mutation is different, and results in a protein with a new function. In this case the mutation alters the receptor so it responds to human chorionic gonadotropin (HCG). HCG is a hormone that increases early during pregnancy to help maintain the pregnancy. While HCG naturally stimulates the TSH receptor to some degree in early pregnancy, the mutation causes the receptor to be so sensitive that overt gestational hyperthyroidism develops.12 Again, the pituitary responds to increased thyroid hormone by decreasing TSH, but since the receptor is responding to HCG this doesn't solve the problem. When a protein loses its specificity and becomes involved in reactions it wouldn't normally be involved in, it has a 'new' function. This is often referred to as promiscuous activity. Even though very rarely a protein with promiscuous activity may prove beneficial under special circumstances, the loss of specificity still represents a downward change in the genome.13 It is impossible to build complex pathways with appropriate feedback mechanisms by randomly introducing errors.
Conclusion
It is worth noting that mutations produce new alleles (variant forms of a gene) and certainly add variety. However, molecules-to-man evolution requires the generation of new information to build new, complex, interdependent biochemical pathways. Despite the deceptive wording found in the gain-of-function definition, there is no increase of information or improvement of biochemical pathways. Without a mechanism for developing such pathways, evolution is nothing more than a myth. Instead, what we observe fits exactly with what we would expect if the young age model is true: living things are very well designed, and errors introduced by mutations do not build new, well integrated biochemical pathways; instead they often cause disease.

Are gain of function mutations really downhill and so not


supporting of evolution?
A biologist questions
Mutations are good at getting rid of genetic information; in this case
information for making horns.Daniel H. of the United Kingdom, who gave
permission for the publication of his name, wrote:
Dear CMI,
Im a scientific researcher and I came across your site by accident in a
google search, in the form of the article Gain-of-function mutations: at a
loss to explain molecules-to-man evolution, by Dr Jean Lightner.
I feel the urge to provide you with some feedback on this article. It appears
to me, as written, to demonstrate nothing and to be either misled or
misleading. While the example of TSH receptor gain-of-function mutations
producing disease could certainly be used as part of a wider discussion of gain-of-function mutations in general (which are
also associated with cancer), fundamentally you cannot take one example and prove an argument completely based upon
it. The author completely fails to demonstrate how the example of this particular TSH mutation proves some universal rule
about the effects of gain of function mutations.For instance I work with G protein coupled receptors, a large family of
membrane receptor proteins. The adrenoceptor (for adrenaline) and the nicotinic acid receptor (for ketone bodies produced
in starvation) appear to both result as divergent descendents of an ancestral G protein coupled receptor, through gain/loss
of function mutations over genomic history. Now they both respond to different chemicals; however, both now also play
useful roles, even having grown into vital roles. Would you like to discuss this possibility as well?Evolution sometimes
means that bad variants of a gene dont persisteg the TSH receptor examplebut as any biologist would tell you it also
has the capacity to promote good ones.For the record, I am a Catholic and standing up for Christianity in the field of biology
is a challenging and vitally important thing. I dont feel this kind of bad science helps at all. It has no credibility. Thanks for
the chance to provide you with this feedback.
Yours
sincerely,
Daniel

Here is Daniel's email repeated, with a response from Dr Jean Lightner, author of the questioned article, interspersed in common email fashion:

Dear CMI,

I'm a scientific researcher and I came across your site by accident in a Google search, in the form of the article 'Gain-of-function mutations: at a loss to explain molecules-to-man evolution', by Dr Jean Lightner.

I feel the urge to provide you with some feedback on this article. It appears to me, as written, to demonstrate nothing and to be either misled or misleading. While the example of TSH receptor gain-of-function mutations producing disease could certainly be used as part of a wider discussion of gain-of-function mutations in general (which are also associated with cancer), fundamentally you cannot take one example and prove an argument completely based upon it. The author completely fails to demonstrate how the example of this particular TSH mutation proves some universal rule about the effects of gain-of-function mutations.
Dear Daniel,

Thank you for taking the time to write. I wrote the article you mentioned a number of years ago because the term 'gain of function' implies to the lay person that something like onward, upward evolution is occurring. In the article I was able to give an example of a loss-of-function mutation and two types of gain-of-function mutations. So the intent of the article was to inform readers about what these terms mean (which is best done, in my opinion, by including specific examples) and show that they do not inherently support molecules-to-man evolution.

For instance, I work with G protein coupled receptors, a large family of membrane receptor proteins. The adrenoceptor (for adrenaline) and the nicotinic acid receptor (for ketone bodies produced in starvation) appear to both result as divergent descendants of an ancestral G protein coupled receptor, through gain/loss of function mutations over genomic history. Now they both respond to different chemicals; however, both now also play useful roles, even having grown into vital roles. Would you like to discuss this possibility as well?

I have been doing some research on seven-transmembrane G-protein coupled receptors (MCRs and olfactory receptors). They are fascinating, and I have argued that some were designed to change. In fact, I argue that our worldview gives us reason to look for directed changes in genes. One of these articles appears on the web and the other is scheduled to appear in the next issue of Journal of Creation.

I would love to discuss with you the possibility that the two receptors you mention may have developed from an ancestral G-protein. It would have been nice to have had a reference which explores this suggestion in more detail, but we certainly can begin a discussion on the information you provided. First, I would like to mention a few things that may help you better understand what I am saying.

The word evolution is sometimes defined as 'change in the genetic makeup of a population over time'. I wholeheartedly agree that this occurs (as do the many other creationists I know) and I discuss it in much of my writing. However, evolution also refers to the idea that all life on earth has developed from a single common ancestor by random, chance processes. In college, when I asked for evidence of evolution, I was always given examples of the former and expected to accept that this implied the latter had occurred.

When doing science we will always have some assumptions, and we should be aware of what these are. One I expect we share is that science is a useful tool for learning about the world around us. There are a number of assumptions involved in molecules-to-man evolution that I am unwilling to accept on faith:

1. It assumes that natural processes can account for the complexity of living things. I am constantly astounded at the layer upon layer of complexity we find in living things; there are numerous biological features that 'wow' human engineers (hence the rapidly growing field of biomimetics). While you may believe that somehow 'God did it', by accepting common ancestry you are still looking at things through an atheistic lens which basically assumes that natural processes can account for this. See: Design features Q&A.

2. It assumes that random errors (which are what evolutionists consider all mutations to be) can increase the level of complexity of biological systems. Random errors naturally destroy complexity orders of magnitude faster than they build it. Natural selection is not effective at eliminating these errors, so there is no plausible naturalistic way to increase specified complexity. See, for example, From ape to man via genetic meltdown: a theory in crisis.

3. It uses equivocation, or bait-and-switch, to imply that it is scientifically plausible. Remember the two definitions of evolution mentioned above? One is expected to assume they are essentially synonymous. However, observed genetic changes over time have not been shown to effectively increase biological complexity, so those who operate in the evolutionary paradigm do so by blind faith and not because of the evidence. See: Separating the sheep from the goats.

Back to the issue of an ancestral G protein: I am certain that you could show that a series of changes from a putative ancestral protein could account for the two receptors you mention. This is rather weak circumstantial evidence that something like this may have happened. Would the animal carrying the ancestral protein have been viable? Would the intermediates be viable? The circuitry these receptors are part of is incredibly complex. How does one propose a series of possible changes in one protein and have an explanation of how the two actual proteins are each well integrated into complex circuits where they effectively regulate important functions?

I'd like to point out that one can always make a phylogenetic tree, whether things are actually related or not. I can take the silverware and cutlery in my kitchen, lay them out on the table, and give you a very nice story of how they all evolved from the spoon over a period of time. Someone else may disagree and feel they all descended from a butter knife. Either way, a good story does not necessarily correlate with reality.

Evolution sometimes means that bad variants of a gene don't persist (e.g. the TSH receptor example), but as any biologist would tell you, it also has the capacity to promote good ones.

As a veterinarian, I know that persistence of bad variants is a widespread problem. Recessive disorders are common, and natural selection is not effective at eliminating them. In humans, many diseases strike after the childbearing years, and so defective genes get passed on whether they are recessive or not. As I pointed out in #3 above, naturalistic processes cannot really account for an increase in complexity, even though we are often told that they can.
ARE MUTATIONS EVER BENEFICIAL?
CCR5-delta32: a very beneficial mutation
by Andrew Lamb
Cysteine-cysteine chemokine receptor 5 (CCR5) is found in the cell membranes of many types of mammalian cells, including nerve cells and white blood cells.1,2 The role of CCR5 is to allow entry of chemokines into the cell;3 chemokines are involved in signaling the body's inflammation response to injuries.4

The gene that codes for CCR5 is situated on human chromosome 3. Various mutations of the CCR5 gene are known that result in damage to the expressed receptor. One of the mutant forms of the gene is CCR5-delta32, which results from deletion of a particular sequence of 32 base-pairs. This mutant form of the gene results in a receptor so damaged that it no longer functions. But surprisingly, this does not appear to be harmful:

[Caption: Yersinia pestis seen at 2000x magnification. This bacterium, carried and spread by fleas, is generally thought to have been the cause of millions of deaths.]

'It's highly unusual,' says Dr Stephen J. O'Brien of the National Institutes of Health in Washington D.C. 'Most genes, if you knock them out, cause serious diseases like cystic fibrosis or sickle cell anemia or diabetes. But CCR5-delta32 is rather innocuous to its carriers.' The reason seems to be that the normal function of CCR5 is redundant in our genes; several other genes can perform the same function.4

Moreover, this mutation can be advantageous to those individuals who carry it. The virus HIV normally enters a cell via its CCR5 receptors, especially in the initial stage of a person becoming infected.5 But in people with receptors crippled by the CCR5-delta32 mutation, entry of HIV by this means is blocked, providing immunity to AIDS for homozygous carriers and greatly slowing progress of the disease in heterozygous carriers.6-8

Up to 20%8 of ethnic western Europeans carry this mutation, which is rare or absent in other ethnic groups.9-11 This suggests that the CCR5-delta32 mutation was strongly selected for sometime during European history. Some researchers have proposed that the plague epidemics that repeatedly swept Europe during the Middle Ages were responsible.12 However, recent experiments in mice suggest that Yersinia pestis, the cause of plague, can infect mammalian cells by other means,13-15 and so some scientists have proposed that smallpox, which is caused by the variola virus, was the selection agent that historically caused CCR5-delta32 carriers to proliferate in Europe.15 There has also been research suggesting that CCR5-delta32 hampers development of cerebral malaria from Plasmodium infection,16 and that it may slow progression of Multiple Sclerosis.17,18

With the advantage of providing full or partial immunity to certain diseases, and with no apparent disadvantages [but see Addendum March 2009. Ed.], CCR5-delta32 can be considered a prime example of a 'beneficial' mutation: a mutation that decreases the information content of the genome and degrades the functionality of the organism, yet provides a tangible benefit.19

To date, over 10,000 specific disease-causing mutations of the human genome have been identified.20 In contrast, only a handful of beneficial mutations have been discovered, none of which involve an increase in genetic information as required by evolution. All this is highly consistent with the young age account of a 'very good' creation21 followed by the Fall,22 and a subsequent six millennia23 of cumulative physical degeneration.24 However, it clashes irreconcilably with the evolutionary view that the accumulation of mutations over time brings about upward evolution (increasing functional complexity).

In the original creation, the CCR5 receptor would not have constituted an entryway for pathogens. It may be that infectious agents like HIV only became pathogenic after degeneration from their original 'very good' created state. Or it may be that humans did not live in the same environment as such pathogens and so were just not exposed to them. Perhaps both these scenarios apply (see 'The origin of bubonic plague' on p. 7).
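The relationship between the carrier percentage quoted above and the much rarer fully resistant homozygotes is standard Hardy-Weinberg arithmetic. The sketch below uses an assumed round allele frequency for illustration; it is not derived from the cited studies.

    # Hardy-Weinberg proportions for a two-allele locus.
    # The delta32 allele frequency below is an assumed round figure
    # chosen for illustration, not data from the cited studies.
    q = 0.10                    # assumed CCR5-delta32 allele frequency
    p = 1 - q                   # frequency of the normal allele

    carriers = 2 * p * q        # one mutant copy: partial protection
    resistant = q * q           # two mutant copies: full resistance
    print(f"carriers (one copy):    {carriers:.1%}")   # ~18%
    print(f"resistant (two copies): {resistant:.1%}")  # ~1%

On that assumption roughly one person in a hundred carries two broken copies, which is why full immunity is far rarer than the carrier rate of up to 20% mentioned above.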
Addendum August 2007
A new generation of sophisticated therapies designed to 'HIV-proof' the immune system promises to enter the clinic soon. For example, [Carl] June, working with Sangamo BioSciences in Richmond, California, later this year plans to start trials in 12 HIV-infected people of a gene therapy designed to endow immune cells with a genetic mutation that protects them from HIV.

To infect immune cells, HIV must first bind to chemokine receptors. Researchers discovered in 1996 that people who had a naturally occurring mutation in their genes for one of these, CCR5, were strongly protected from developing AIDS (or even becoming infected in the first place) and suffered no ill effects from lacking the receptor.

Sangamo specializes in developing enzymes called zinc finger nucleases that can bind to genes, clip their DNA, and repair mutations (Science, 23 December 2005, p. 1894). But for the HIV gene therapy, they've created a nuclease to specifically disrupt the CCR5 gene in the same manner as the natural mutation. In the new trial, researchers will put the gene for this zinc finger nuclease into an adenovirus vector, transduce harvested CD4+ T cells of HIV-infected people, and infuse those cells back. June says this is the first gene-therapy experiment that aims to create a phenotype that's known to confer disease resistance.26
Addendum March 2009
A reader alerted us to the fact that at least one drawback associated with this mutation has been found. The CCR5-delta32
mutation is strongly associated with a chronic and potentially life-threatening liver disease:
Eri, R., et al., CCR5-Delta 32 mutation is strongly associated with primary sclerosing cholangitis, Genes and Immunity 5(6):444-450, September 2004, <http://www.nature.com/gene/journal/v5/n6/full/6364113a.html>.

The mutant feather-duster budgie


by Andrew Lamb
When a clutch of four budgerigar chicks hatched out one sunny September day in 1999, it was a happy occasion for their breeder, Damien Harris of New Zealand. It wasn't until they began to grow feathers a few weeks later that Damien noticed something odd about one of the chicks. Her feathers were curly instead of straight, and they just kept growing, and growing, and growing! Because this odd chick (pictured below) needed special care, Damien didn't sell her, but instead gave her to bird-lover Warren Scholes, who named her 'Nora', after a local political character.
Nora
Nora's long, curly feathers seemed to lack some component of the normal barb, barbule and hook structures of standard feathers, and they greatly hampered her mobility. Although able to eat normal budgie fodder and shuffle around, Nora couldn't climb, preen or fly like other budgies, and she could hardly chatter or squawk either. However, with Warren's help she did eventually learn to perch on the low rung in her cage.

Nora's parents were both descendants of English show budgies, the only birds known to produce 'feather duster' mutants, the first such case being reported in England in 1966.1,2 Breeders think a mutation (genetic copying mistake) in a recessive gene causes the problem. Since Nora's three siblings were healthy, they must have each inherited a good copy of the gene from at least one of the two parents. (If the gene from their other parent was defective, the good copy would override it.) If they inherited one good copy and one mutant copy they would be carriers of the disease, while remaining healthy themselves. When the copies from both parents have this mutation, as with Nora, the feather duster syndrome results. To avoid the possibility of more chicks like Nora, Damien separated Nora's parents, and did not breed them together again.
again.These feather duster mutants also carry some strains of Budgerigar Herpesvirus
that arent found in normal budgies, but experts dont know whether the virus plays a role
in this genetic problem.3 As their bodies divert precious nutrients and energy into

continual feather growth, these mutant birds suffer severe muscle wasting. Most die after four to eight months. Warren gave
Nora special care, but even with this attention she managed to live only 12 months before succumbing, whereas a typical
budgie lifespan is anywhere from six to 14 years. 4Mutations, disease, deathhow did our world come to be such a tragic
place? The feather duster mutation probably damages the control gene responsible for turning off feather growth. Though
rarely observed in birds, many similar mutations are known in mammals. For example, just as Nora had feathers that
wouldnt stop growing, so too poodles have hairthat wont stop growing. Most dogs moult each spring, but poodle hair just
keeps on growing non-stop all year round! Some scientists think this is caused by a mutation in the control gene that
regulates moulting. It makes poodles a favourite for fashion-minded dog groomers, but if owners dont regularly trim their
poodles ear hair, it can quickly obstruct their ears, causing debilitating life-threatening infections.5,6
Normal budgie
Besides poodles, a number of other domestic mammal breeds are the result of selecting for abnormal hair growth, e.g. Australian Merino sheep, Scottish Highland cattle, Hungarian Mangalica pigs, and Angora goats.7 And people have observed mutant hairy specimens in numerous other mammal species, including minks, guinea pigs and horses. King William IV of Britain owned a mutant horse with a mane 4 m (13.5 ft) long and a tail 8 m (27 ft) long!7 Even humans have on occasion been afflicted with such mutations,8 which have often been misinterpreted as evolutionary throwbacks to mythical furry ancestors.

Control gene mutations are not restricted to hair and feathers. Some bacteria have a mutation disabling the control gene that regulates manufacture of a chemical that destroys penicillin, resulting in unrestrained production, which thus renders them immune to penicillin.9 Belgian Blue cattle have a mutation that deactivates the myostatin gene, resulting in uncontrolled muscle growth, and thus abnormally big cattle.10 In all these examples information is lost; the control genes are damaged and no longer able to fully perform their original function.

Evolutionists are compelled to believe that mutations lead to new information, and new types of organisms, but this has never been observed. What we do observe is, however, entirely consistent with the young age model of degeneration from a 'very good' original state. Many mutations lead to innocuous neutral changes that do not impede the health of the organism, but this is not upward evolution. In rare cases, a mutation can confer an advantage in a particular environment, but this is always due to a loss of information resulting in a loss of some particular functionality. For example, a mutation resulting in the loss of wings is an advantage to beetles on a windy island.11

More commonly, such loss of functionality results in disease. Researchers have now tracked down the exact nature and chromosomal locations of a staggering 9,500 specific DNA defects that cause, or predispose towards, over 900 different human diseases, with a further 3,400 mutations identified but not yet confirmed, and new mutations being uncovered almost daily.12
Lost World of Mutants discovered
by Carl Wieland
In 1986, construction workers unexpectedly drilled into the Movile Cave, close to the western (Romanian) coast of the Black Sea. A secret kingdom of strange creatures was revealed: a group of living things that have clearly been cut off from the outside world for many generations. They are found in air-pockets which can only be reached by diving. Forty-seven species altogether have been studied by a Romanian scientist, Serban Sarbu, who escaped the communist dictatorship and has only recently been able to resume his work. They include such things as spiders, leeches, millipedes, pill bugs, flatworms, mites, beetles, and water dwellers such as water scorpions and nematode worms.

The unique thing about the ecosystem within which these creatures function is that they do not depend, even indirectly, on the energy of sunlight. The entire community appears to be fueled by energy from the metabolism of hydrogen sulfide (the foul-smelling 'rotten egg' gas, H2S), carried out by dense mats of bacteria which live on the cave walls. These bacteria produce sulphuric acid, which incidentally carves out increasing volumes of space in the limestone. The bacteria are eaten by creatures higher on the food chain, which are then eaten by others, and so on. There is no photosynthetic vegetation at all.

Air can seep in through tiny fissures, but the atmosphere is very different from outside, with 100 times as much CO2, one tenth the level of oxygen, and a lot of H2S, produced by natural sulfur springs. The animals scuttle for cover when they detect a change in oxygen levels.

All of them have the condition known as troglomorphy: a loss of colouring pigment, giving them a pale-yellow appearance. All are born blind, with the exception of one spider that is born with the usual eight eyes; however, these degenerate as it matures, so that it is blind as an adult. Many have large antennae which assist them to find their way around in the dark.
A creationist understanding
The obvious explanation would be similar to the standard evolutionary interpretation, except for the time-scale. Genetic loss through mutation is an integral part of the creation model, whereas molecules-to-man evolution requires huge volumes of new, functionally more complex information to arise.

It is important to understand that the loss of characters such as eyes and pigment does not arise from disuse as such, although most of the public will surely see it in such Lamarckian terms. (Modern biologists, whether creationist or evolutionist, overwhelmingly disown such beliefs. The giraffe's neck cannot get progressively longer from stretching over generations, nor shorter from lack of stretching.) Use and disuse do not cause changes which can be inherited; that is, there is no change in the DNA code as a result of use and disuse of bodily parts. Let's look at the likely course of events.
Loss of pigment
Many cave dwellers have long been known to show this. If a creature living in the normal, outside world were to have a mutation causing a loss of its pigment, it would normally be less able to survive, losing some protection against sunlight. However, in a cave without the sun, such a mutation can readily spread through the population. These recently-discovered Romanian creatures are thus almost certainly the descendants of previously pigmented ancestors, cut off from the outside perhaps thousands of years ago (not 5 million, as evolutionists speculate in this instance).
Loss of eyes
Blind fish, with scars where eyes normally appear, have long been known to exist in certain caves. A mutant gene damaging the genetic information for eye manufacture would have no selection pressure opposing its spread in a lightless environment. In addition, there may well be a selective advantage to such a loss/defect, as follows. Eyes are complex structures, prone to disease and injury. If a fish with eyes bumped into a cave wall in the dark, it could damage the eye surface, introducing infection leading to death. The acidic environment would also be detrimental to eye health. After many hundreds of generations, the eyeless gene would confer a slight but significant selective advantage (a simple numerical sketch of this follows below). In the thousands of years since the Flood, the Romanian cave-dwelling creatures have had time to go through a huge number of their short generational life cycles, allowing them plenty of time for maximum adaptation to their new environment. This adaptation is based, however, on:
the original created information, and
adaptive losses/defects from mutation since creation.
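How quickly could a 'slight but significant' advantage do its work? The toy calculation below uses the standard selection recurrence with invented numbers (a 5% advantage, mutants starting at 1% of the population); it is an illustration of the logic, not a model of any real cave species.

    # Deterministic toy model: spread of an eye-disabling allele with a
    # slight selective advantage in total darkness. The advantage s and
    # the starting frequency are invented, illustrative figures.
    s = 0.05          # slight advantage of the eyeless form
    freq = 0.01       # mutants start as 1% of the population

    generations = 0
    while freq < 0.99:
        freq = freq * (1 + s) / (1 + s * freq)   # standard selection update
        generations += 1

    print(f"eyeless form exceeds 99% after {generations} generations")

Even with only a 5% edge, the loss variant dominates in a couple of hundred generations; for short-lived cave creatures that is a very short time indeed, so no vast time spans are required.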
Large antennae

What about the huge, sensitive feelers some of these creatures apparently have? These are clearly adaptive to their dark environment. In addition, there are probably a host of other specialized features which have not yet been described, as so far only newspaper reports (admittedly quite detailed) of conferences have been available to us.

Creationists would generally claim that it is vanishingly improbable that completely new biological apparatuses involving teleonomy (project-oriented functional complexity) will ever arise de novo from natural law; that is, from chance copying mistakes/mutations in the first instance, to be subject to the filter of natural selection. A population may respond to environmental selection pressure based on information already present in its total gene pool. For instance, if a population of plants in a moist environment is exposed to ever-increasing dryness, only those plants carrying genes for deeper roots and waxier cuticles will survive. The population will have responded and become adapted, but only because the genetic information coding for waxier cuticles and deeper roots was already present.

Therefore, a reasonable postulate regarding any of the adaptive features of these cave creatures is that:
they are indeed the descendants of similar creatures from the outside which became trapped (perhaps progressively) and cut off from the parent population; and
they already had the genetic information for feelers/antennae.
Individuals which (rarely) inherited the combination of genes for oversize feelers would be at a disadvantage outside, but would be selected for in this world of darkness. Perhaps a mutational defect involving a loss of control (or loss of switch-off of growth) of feeler size during development might have been involved in at least some, leading to gigantism in an existing structure, which is inherited. There is clearly no way in which, and no need to postulate that, these feelers themselves, with all their complex associated sensory and control mechanisms, arose by mutation in creatures which previously had no such information in their genes.
The ecosystem
Creationists sometimes point to complex predator-prey relationships, food chains, etc., as examples of creative design.
While this is certainly feasible in many cases, even likely, it appears unlikely that the particular food chain in this case was
the direct result of original creative design; at least, not created for that purpose. Rather, it probably arose as a result of each
member being forced to survive on what it could, an adaptation (of an earlier ecosystem) based on necessity.
Summary
Complex information is never seen to arise from natural law, although that is the very core claim of evolution. These Romanian cave creatures show no evidence of having evolved such information. However, they can indeed be described as 'strange mutants': the offspring of non-cave-dwellers which demonstrate the results of selection and degenerative change, not upward evolution.
New eyes for blind cave fish?
A remarkable experiment leads to much evolutionary misinterpretation
by Carl Wieland, CMI-Australia
Fish living in caves, in permanent darkness, are blind, with apparent scars where their eyes should be. In the quarter of a century in which I have written and spoken on creation issues, I have often raised the matter of eyeless fish to argue against evolution, despite the fact that I believe that these fish arose from ones that originally had eyes. I also agree with evolutionists about the fairly obvious mechanism by which they think this happened. Nevertheless, I hope it will be clear from what follows that I find it exceedingly strange that some evolutionists would gloat about this as a classic 'proof' of evolution.1 Note that such fish often are, in all other respects, identical to the same species of fish living at the surface and having eyes.

Imagine a situation in which a group of such normal fish swim into a stream which enters an underground cave, and become trapped in this pitch-dark environment. Their eyes are completely useless here. But eyes do not disappear just because they are no longer needed. The fish's DNA would have programmed into it the instructions on constructing eyes, and the code on the DNA does not 'know' that the eye is no longer needed, so it will keep on manufacturing eyes, generation after generation. However, DNA is not copied perfectly; copying errors (mutations) occur. In fact, in a moderate-sized population, many of these errors occur in each generation. It is not hard to see how one of these could result in a gene that usually switches on eye development being corrupted, or somehow switched off, via mutation.

In a normal above-ground situation, such eyeless fish would probably never survive much past early infancy, because they would be so handicapped both in locating food and escaping predators. So for all practical purposes, we never see eyeless fish in the wild where there is sunlight. However, in the cave, it is a different matter. The eyeless type no longer suffers this disadvantage compared to its compatriots. Not only that, the eyeless ones even have an advantage over the others. This is because, as fish bumped into rocks and cave walls in the darkness, the eyed ones would be likely to injure their eyes. The delicate tissue of eyes is prone to injury, which would allow harmful bacteria to enter, leading to infection and often death. The eyed fish would thus have a lesser chance of surviving to produce offspring. Those fish carrying the eyeless genetic defect would have a greater chance of passing it on to the next generation, so it would not take many generations under such circumstances for all the fish to be of the eyeless type.
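How quickly could such a defect take over? A rough numerical sketch in Python illustrates the idea (the starting frequency, selective advantage and threshold below are purely illustrative assumptions, not measurements from any cave population):

def generations_until_common(p0=0.01, s=0.05, threshold=0.99):
    """One-locus model: each generation the eyeless type leaves (1+s) offspring
    for every 1 left by the eyed type, so its frequency p is updated as
    p' = p(1+s) / (p(1+s) + (1-p))."""
    p, gens = p0, 0
    while p < threshold:
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        gens += 1
    return gens

# On these assumed numbers the eyeless type goes from 1% to 99% of the
# population in roughly 200 generations, i.e. very quickly for a short-lived fish.
print(generations_until_common())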
But this classic example of mutation/selection causing adaptation to a new environment is also a classic example of a mutation causing a 'downhill' change. It is not showing us how the first stages of a new, complex adaptation could arise; it is merely showing us how complex information coding for great engineering design is being corrupted or lost. The grand-scale theory of evolution (that microbes have become millipedes, magnolias, and microbiologists) demands that huge amounts of new information, of true genetic novelty, have arisen over millions of years. To show such information arising from natural processes is the real challenge for evolution. It is a challenge which the renowned Darwinist Richard Dawkins was unable to answer, as shown in the video From a Frog to a Prince (as well as the refutation of the Skeptics' attack and the refutation of Dawkins' later response). Even if one tiny example could be found where information had arisen by chance mutation, Dr Lee Spetner's classic book Not By Chance (see also this review) shows that neo-Darwinian theory requires literally hundreds to be observable today. So far, all the examples studied (including the handful of 'helpful' defects, like the loss of eyes in cave fish, or wingless beetles on windy islands [see Beetle bloopers]) show a loss of information.
The fascinating experiment (by researchers from the University of Maryland, USA) that has brought blind cave fish back into the news was one in which young eyeless fish had lenses implanted in them from the same species of fish (Astyanax mexicanus) living at the surface. Eight days later, the blind fish seemed to be regrowing eyes. After two months, they had a large restored eye with a distinct pupil, cornea and iris. In addition, the retina of the restored eye showed rod photoreceptor cells.2 The researchers are not saying that the fish developed sight, which would require regrowth of nerve connections to the brain and more. This experiment is of great interest in helping us understand more about the pathways by which genes express the development of certain structures in the embryo. The following may be helpful in understanding what has taken place:

It has long been known that during the development of certain frog embryos, for instance, the lens not only appears first, it acts as an 'inducer' of the development of most of the rest of the eye. Thus, if the lens from one embryo is surgically embedded into another embryo at a spot different from where the eye normally develops, an eye will start to form at that location. Both in the above example, and that of the cave fish, the development of the eye structures can only take place if the organism into which the lens is transplanted has the genetic instructions present in its DNA to manufacture such structures. This indicates that the mutation by which the fish initially became eyeless did not somehow delete all of the eye information, but just interfered with the process leading to the eye's development. An analogy with computers would be deleting files on a computer: the information is not deleted, just the record of its location on the hard disk. If the data as such were not still there, 'undelete' programs would not be possible. In the example here, the mutation most probably just blocked the proper formation of the lens. Without the lens to induce the rest of the eye to form, it won't. This is supported by the fact that in the embryos of eyeless cave fish, eyes start to form, but the lens that has started to form deteriorates, and the other structures remain undeveloped.

This is the first time, to my knowledge, that such optic induction experiments have been successful on any organism in a post-embryonic stage. As such it is important in future embryological research into the immensely ingenious, complex, and still very poorly understood, processes by which an adult organism develops from a tiny fertilized egg.

Sadly, though not surprisingly, this has been described in such a way as to promote the 'evolution is fact' idea, even though it has nothing to do with demonstrating that microbes could turn into man (and as shown, the change is in the opposite direction to that required). It has been described as 'Eye parts lost during millions of years of evolution were restored in just a matter of days.'3 We have already seen that it is misleading to describe the loss of the ability to produce eyes as 'evolution', because it gives rise to the impression that it has something to do with how there came to be such things as fish with eyes in the first place. In addition, there is not the slightest bit of evidence that the process of losing them took millions of years. In fact, it would be surprising if it took more than a few dozen generations, or just a few short years, given the scenario described earlier. Indeed, considering the supposed creative power of evolution, it is remarkable that these fish, allegedly separated for millions of years, are so near-identical to those living at the surface that even evolution's most hardened true believers concede that they should be given the same species name. The notion that they have not been cut off from each other for anywhere near as long directly fits the facts.
Christopher Hitchens: blind to salamander reality
A well-known atheist's 'eureka moment' shows the desperation of evolutionists
In a recent article in the leftist online magazine Slate, prominent atheistic journalist Christopher Hitchens (b. 1949) thinks he has found the knock-down argument against creationists and intelligent design supporters. Fellow misotheist Richard Dawkins (b. 1941) and another anti-theist Sir David Attenborough (b. 1926) agree. Not surprisingly, there have been questions to us about this, so Dr Jonathan Sarfati responds. As will be seen, their whole argument displays breathtaking inanity and ignorance of what creationists really teach, and desperation if this is one of their best 'proofs' of evolution.

Christopher Hitchens is a British-born American journalist and author, recently best known for his antitheistic book God Is Not Great. He is also an avid debater, although he seemed to come off second best against Dinesh D'Souza (b. 1961), author of What's So Great About Christianity?1 In a bizarre recent article, 'Losing Sight of Progress: How blind salamanders make nonsense of creationists' claims',2 Hitchens thinks he has clinched the case for his antitheistic faith. He begins:
It is extremely seldom that one has the opportunity to think a new thought about a familiar subject, let alone an original thought on a contested subject, so when I had a moment of eureka a few nights ago, my very first instinct was to distrust my very first instinct. To phrase it briefly, I was watching the astonishing TV series Planet Earth (which, by the way, contains photography of the natural world of a sort that redefines the art) and had come to the segment that deals with life underground. The subterranean caverns and rivers of our world are one of the last unexplored frontiers, and the sheer extent of the discoveries, in Mexico and Indonesia particularly, is quite enough to stagger the mind. Various creatures were found doing their thing far away from the light, and as they were caught by the camera, I noticed, in particular of the salamanders, that they had typical faces. In other words, they had mouths and muzzles and eyes arranged in the same way as most animals. Except that the eyes were denoted only by little concavities or indentations.
So Hitchens thinks that eyeless salamanders are a 'moment of eureka'. He later explains why:

If you follow the continuing argument between the advocates of Darwin's natural selection theory and the partisans of creationism or intelligent design, you will instantly see what I am driving at. The creationists (to give them their proper name and to deny them their annoying annexation of the word 'intelligent') invariably speak of the eye in hushed tones. How, they demand to know, can such a sophisticated organ have gone through clumsy evolutionary stages in order to reach its current magnificence and versatility?
That is indeed a problem, and Hitchens continues:
The problem was best phrased by Darwin himself, in his essay Organs of Extreme Perfection and Complication:
To suppose that the eye, with all its inimitable contrivances for adjusting the focus to different distances, for admitting
different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural
selection, seems, I freely confess, absurd in the highest possible degree.
As we have advised in our Don't Use page, care must be taken with this quote. Darwin went on to say that although it seems absurd, he nevertheless believed it could have happened via small changes worked on by natural selection. Hitchens goes on to say:
His defenders, such as Michael Shermer in his excellent book Why Darwin Matters, draw upon post-Darwinian scientific advances. They do not rely on what might be loosely called 'blind chance':

Evolution also posits that modern organisms should show a variety of structures from simple to complex, reflecting an evolutionary history rather than an instantaneous creation. The human eye, for example, is the result of a long and complex pathway that goes back hundreds of millions of years. Initially a simple eyespot with a handful of light-sensitive cells that provided information to the organism about an important source of the light …
This is indeed a continuation of Darwinian ideas. Yet there are a number of problems with this, as well as with the next point. For example, the usual simulations start with the nerve behind the light-sensitive spot. The vertebrate eye has the nerves in front of the photoreceptors, while the evolutionary just-so story provides no transitions from behind to in front, with all the other complex coordinated changes that would have to occur as well. See also Fibre optics in eye demolish atheistic 'bad design' argument.

'Hold it right there,' says Ann Coulter in her ridiculous book Godless: The Church of Liberalism. 'The interesting question is not: How did a primitive eye become a complex eye? The interesting question is: How did the light-sensitive cells come to exist in the first place?'

Coulter's book actually nails the 'Darwiniacs', as she calls them, on this and in many other places. Indeed, the photochemistry involved in even the simplest light-detecting cells is enormously complex. So although evolutionists claim to be climbing a gentle slope up Mt Improbable, they are really starting from a sheer ledge near the top. See At the bottom of Mount Improbable? Eye evolution, a case study.
The salamanders of Planet Earth appear to this layman to furnish a possibly devastating answer to that question. Humans
are almost programmed to think in terms of progress and of gradual yet upward curves, even when confronted with
evidence that the past includes as many great dyings out of species as it does examples of the burgeoning of them. Thus
even Shermer subconsciously talks of a pathway that implicitly stretches ahead. But what of the creatures who turned
around and headed back in the opposite direction, from complex to primitive in point of eyesight, and ended up losing even
the eyes they did have?
Well, what about them? This is the crux of Hitchens' argument. Yet this is his own blind spot. Proving that someone can fall down the mountain (Improbable or otherwise) is hardly proof that he could have climbed up there in the first place. That's the general problem with many alleged proofs of evolution: it's not that the changes are too small, but that they are going in the wrong direction; see The evolution train's a-comin' (Sorry, a-goin' in the wrong direction).

This is easily explainable: there are many ways to break something, but not many ways to make something in the first place. So it's not surprising that it would be relatively easy for a mutation, or copying mistake in the genes, to ruin the eyes. In the light, natural selection would eliminate such mutations, since blind creatures could see neither prey nor predators. But in a pitch-black cave, there would be no natural selection against blind creatures, so they could proliferate. They might even have an advantage, because a shrivelled eye is less likely to be damaged. Creationists have explained this long ago; see New eyes for blind cave fish? A remarkable experiment leads to much evolutionary misinterpretation.
In one of the best known blind cave fish, Astyanax mexicanus, there is another reason why the blind fish can have an advantage in caves. This is pleiotropy, where a single gene has more than one effect on an organism. It turns out that a control gene, hedgehog, which affects a number of processes including development of the jaws and tastebuds, also inhibits another control gene, pax6, which controls development of the eyes. A fish with bigger jaws and more sensitive tastebuds would have an advantage in finding food, but this must be traded off with the loss of eye development. In the light, loss of eyes is a big disadvantage, so natural selection would eliminate a fish that over-expresses hedgehog, despite its better jaws and taste. But in the dark caves, a fish with highly expressed hedgehog would have a big advantage, since the loss of eyes would be irrelevant.3

Also, to underscore the point that there are many ways to break things, there are actually a number of ways to produce blindness, even in Astyanax. This is shown by breeding different populations of blind fish, which results in a number of sighted progeny. This is explained because the sight loss in the different populations is caused by different mutations, so when you cross them, the genetic deficiencies in one lineage are compensated for by strengths in the other, and vice versa.4 We have also pointed this out in Let the blind see: Breeding blind fish with blind fish restores sight, so Hitchens has even less excuse; exactly the same principles apply to blind salamanders and other blind troglobionts (cave-dwelling living organisms).
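The logic of that cross can be sketched in a few lines of Python (a toy two-gene model; the loci 'A' and 'B' are placeholders for whichever genes are actually broken in each population, which this sketch makes no claim to identify):

# Toy complementation model: two cave populations blind for different reasons.
# Sight requires at least one working copy of each of two (hypothetical) genes.
def sighted(genotype):
    gene_a, gene_b = genotype
    return 'A' in gene_a and 'B' in gene_b  # an uppercase letter = functional allele

parent1 = ('aa', 'BB')    # blind: gene A broken in this population
parent2 = ('AA', 'bb')    # blind: gene B broken in the other population
offspring = ('Aa', 'Bb')  # every child inherits one working copy of each gene
print(sighted(parent1), sighted(parent2), sighted(offspring))  # False False True

Each parent population lacks a different gene, so every hybrid offspring inherits one working copy of both, which is why sight can reappear in a single generation.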
When it comes to breaking something, it need not take very long either. Breaking is often quicker than making, just as it's often much quicker to fall off a mountain than to climb it. We can see this in humans, when sighted parents have blind children due to a genetic defect; this can happen in only one generation. Yet Hitchens gushed:
Even as I was grasping the implications of this, the fine voice of Sir David Attenborough was telling me how many millions of
years it had taken for these denizens of the underworld to lose the eyes they had once possessed.
He of course provides no proof. Indeed, the fact that sight can be regained in one generation shows that there has been little time for mutations to further degenerate the genes; note that natural selection would not preserve genes connected to eyes and the visual parts of the brain if there were no selection for eyesight. Indeed, Dr John Sanford, inventor of the gene gun, in his book Genetic Entropy and the Mystery of the Genome, shows that the known rate of harmful mutation accumulation would have resulted in error catastrophe if we had really been around for millions of years.
Whereas the likelihood that the post-ocular blindness of underground salamanders is another aspect of evolution by natural
selection seems, when you think about it at all, so overwhelmingly probable as to constitute a near certainty.
Which is why creationists agree that mutations and selection are the right explanation. But we have also pointed out that natural selection is the opposite of evolution, since it removes information. Indeed, creationists proposed natural selection, well before Darwin, as a conservative force which hinders the downward slide of a population by eliminating the less fit. The blind fish are one of the best proofs of this: natural selection conserves sight in most populations by eliminating sightless mutants; when this selective pressure is removed, blind mutants proliferate, so the population goes informationally downhill.
I wrote to professor Richard Dawkins to ask if I had stumbled on the outlines of a point, and he replied as follows:
Vestigial eyes, for example, are clear evidence that these cave salamanders must have had ancestors who were different from them: had eyes, in this case. That is evolution. Why on earth would the designer create a salamander with vestiges of eyes? If he wanted to create blind salamanders, why not just create blind salamanders? Why give them dummy eyes that don't work and that look as though they were inherited from sighted ancestors? Maybe your point is a little different from this, in which case I don't think I have seen it written down before.
Of course, creationists deny the designer created blind salamanders, and agree that this is one genuine example of a vestigial organ. But even genuine vestigial organs prove merely devolution, not evolution. What would be impressive would be a nascent organ, one growing where none existed before in the creature's ancestry. Dawkins must be willingly ignorant of what creationists teach, or is deceitfully knocking down a straw man. After all, why should his ethics be trusted under his own belief system, when Dawkins has agreed that ultimately evolution leads to a moral vacuum in which [people's] best impulses have no basis in nature, and scoffed at the idea of righteous indignation and retribution against child murderers and other vile criminals?
I recommend for further reading the chapter on eyes and the many different ways in which they are formed that is contained in Dawkins' Climbing Mount Improbable;
I've already read that; see my review.
also The Blind Cave Fish's Tale in his Chaucerian collection The Ancestor's Tale.
Also reviewed in the Journal of Creation.5
I am not myself able to add anything about the formation of light cells, eyespots, and lenses, but I do think that there is a
dialectical usefulness to considering the conventional arguments in reverse, as it were.
As shown, there is a big difference in the forward and reverse directions; as one of Australia's leading molecular biologists, Dr Ian Macreadie, pointed out: 'Evolution would argue for things improving, whereas I see everything falling to pieces. Genes being corrupted, mutations [mistakes as DNA is copied each generation] causing an increasing community burden of inherited diseases. All things were well designed initially.'
For example, to the old theistic question, 'Why is there something rather than nothing?'

Which we note Hitchens has not actually answered. Atheists must believe by faith that nothing exploded and became everything.
Can't drink milk? You're normal!
How mutations cause lactose tolerance in adults
by David Catchpoole

Lactose intolerance was really only recognized in the 1960s, and ranges from about 5% in northern Europe, around 70% for southern Europe, to more than 90% in some African and Asian countries. The aversion to milk among Asians was once mostly regarded in the West as cultural in basis (rather than biological/genetic), with powdered milk often being sent as part of food aid.18

For many, the mere mention of milk will be enough to invoke memories of nausea, bloating, cramps, diarrhea, and perhaps in some cases, jibes and taunts about wind and bad breath. Some will have undergone medical tests that diagnosed the cause as 'lactose intolerance'. Lacking the enzyme lactase, which breaks down the milk sugar lactose (see box), they are unable to digest milk, whereas lactose-tolerant people can. Others, though, might still be unaware that they are deficient in lactase, not realizing that drinking milk causes their feelings of nausea, etc.1

For many years, lactose intolerance was regarded as abnormal, and was used by many as evidence of human evolution. As a measure of evolutionary advancement, milk-drinking seemed to fit the stereotype perfectly. Pale-skinned northern Europeans usually retained full intestinal lactase activity into adulthood, in stark contrast to the world's darker-skinned peoples who are only able to digest milk as infants or young children. Well, that's the way the story went. However, lactase deficiency in adults is not in fact abnormal, but the norm! Research has shown that the gene for lactase normally switches off as children are weaned. And a genetic mutation that results in lactase production not being switched off accounts for the ability of certain people to drink milk into adulthood. So there has now been a dramatic change in terminology, with those who cannot digest milk no longer being called 'lactase deficient'. Instead, they are now regarded as normal, while those adults who retain the enzymes allowing them to digest milk are called 'lactase persistent'.2,3

Furthermore, different mutations can stop lactase production from being switched off after weaning. The mutation that confers lactase persistence in northern Europeans4 is different from the mutation in East Africans who are lactase persistent.5 Researchers have identified three different mutations (in the same stretch of DNA as the European variant) in various African populations in Tanzania, Kenya and the Sudan.6
Evolutionary notions overturned
The findings have overturned previously-held evolutionary notions in dramatic manner. Anyone enamoured with the black-people-are-less-evolved-than-white-people idea must confront the fact that dark-skinned Africans have been shown to have genetic mutations conferring lactase persistence; some of them even had all three of the mutations so far discovered in that region.7 In that light it's interesting to make comparative reference to a notable pale-skinned person, namely, Charles Darwin. Some of the symptoms of his mystery illness,8 viz. continual diarrhea, bloating and gas,9 match those resulting from lactose intolerance!

Another shake-up for evolutionists was the researchers' assessment that the most common variant arose as recently as 3,000 to 7,000 years ago. University of California anthropologist Diane Gifford-Gonzalez says the finding of recent multiple mutations arising independently is changing the way they think about human history: 'Until the geneticists contributed to the data, the rest of us always thought about evolution happening very slowly and gradually.'10 This is not the first time, however, that evolutionists have been surprised by the speed of genetic changes.11

However, they still claim it as evolution. 'This is the best example of convergent evolution in humans that I've ever seen,' said geneticist Joel Hirschhorn, of the Children's Hospital Boston, Massachusetts.10 But note that these genetic changes are not evolution in the uphill molecules-to-milkman sense, as the changes are downhill, i.e., information has been lost (viz., the normal switching-off mechanism of lactase production following weaning).12 Rather, at best this is an example of selection, as Hirschhorn himself went on to acknowledge: 'Lactase persistence has always been a textbook example of selection, and now it'll be a textbook example in a totally different way.'10

Hirschhorn's comments also highlight two key factors underscoring the creation-evolution issue. First, the extreme flexibility of evolutionary theory. It seemingly doesn't matter what the evidence is; evolutionary theory can be made to do an about-face when desired. Second, the classic bait-and-switch tactic of interchanging 'evolution' with 'selection'. But natural selection is not evolution.13

Another unexpected result of the milk-drinking mutation survey in east Africa was the finding that the Hadza people of Tanzania show a surprisingly high level of lactase persistence despite having very little to do with cattle. That led to this evolutionarily-radical suggestion: 'One possibility is that, though they are now mainly hunter-gatherers, their ancestors might have been pastoralists.' While that idea goes against the traditional evolutionary order, it is right in line with a young age perspective. Furthermore, this is not the first time that evolutionists have had to face up to evidence that today's hunter-gatherer peoples previously practised farming or animal husbandry, contrary to their cherished ideas.14
Being a mutant can be advantageous
Although the loss of the ability to turn off lactase production following weaning is a loss of information (i.e. a downhill change), the mutation confers some obvious advantages in areas where milk is available. The cost of the mutation, i.e., the extra energy needed to continue to produce lactase beyond infancy, would be more than compensated for by being able to safely extract the energy and nutrients in milk.15 So in what time frame could the milk-drinking mutation have arisen? Even the evolutionists acknowledge that man's original status was indeed lactose intolerance. University of California geneticist Leena Peltonen quipped, 'I find it ironic that a so-called disease [i.e. lactase deficiency] actually represents the original condition.'18 So, those of you who are unable to drink milk as adults today without feeling nauseous (or worse) can take heart from being closer in that respect to the originally physically-perfect first man and woman than are those of us who are milk-drinking mutants!
Lactase and lactose
Most human infants produce ample quantities of lactase for milk digestion. The cells that line the small intestine produce the enzyme lactase, which breaks down lactose, the characteristic disaccharide sugar of milk, into the monosaccharides glucose and galactose. These sugars are easily digested (absorbed) by humans. However, when lactase is lacking, as it is in most adult humans19 and animals, the lactose cannot be broken down and absorbed in the small intestine. The lactose therefore passes to the large intestine where the resident bacteria ferment it, generating gas, hence the discomfort of nausea/bloating/flatulence experienced by lactase-deficient people after drinking milk.

Many lactose intolerant people are able to consume some dairy products, such as cheese, without experiencing the debilitating symptoms they get following consumption of milk. That is because there is little lactose in such fermented products, as the bacteria (e.g. lactobacilli) have already fermented most of the lactose in the original milk into lactic acid, with the by-product gas released harmlessly into the atmosphere.

Don't confuse lactose intolerance with milk protein allergy

Some children who are said to be lactose intolerant are later found to be able to tolerate milk. However, almost certainly those children were not lactase deficient but rather were allergic to a milk protein (lactoglobulin). This is fairly common in young children; they usually grow out of milk allergies as their gastro-intestinal tract matures. Such children can usually tolerate goat's milk, which also has lactose, of course, but lacks the cow's milk proteins that cause the problems. There is even a difference between breeds of cows, it turns out. A person might not tolerate Friesian milk (most commercial pasteurized milk is largely Friesian) but tolerate Guernsey or Jersey milk, for example. Lactose-intolerant persons can often tolerate natural yoghurt well, which has about 2/3 the lactose of milk. Researchers think that this is due to the presence of lots of live fermenting bacteria, which quickly digest the lactose at body temperature, and also contribute lactase.

At last, a good mutation?

by Carl Wieland

There are children who are so prone to infection that, if they survive at all, they have to spend their lives in an artificial 'bubble'. This is the usual fate of those who have inherited two defective copies (one from each parent) of a gene which produces an enzyme called ADA (adenosine deaminase). Because they are unable to make ADA, toxic substances accumulate in their blood which slowly damage the body's immune cells. However, in an unprecedented finding, a U.S. boy called Jordan Houghton has spontaneously recovered from his condition.1 All the evidence indicates that in one line of his immune cells, one of the faulty genes has apparently repaired itself. Geneticist Hagop Youssoufian at Brigham and Women's Hospital, Boston, says about this 'fascinating' occurrence:

'We finally have a clear example of a mutation doing something good.'

Back mutations, replacing a 'letter' in the DNA sequence which was faulty back to what it originally should have been, are not unknown. They certainly do not show us how significant information can arise de novo, as they merely (accidentally) restore what should have been there. An occurrence like this (encouraging, but exquisitely rare) may actually not be mutational as such, as there are abundant error-checking, proof-reading and repair mechanisms in our genetic machinery. Youssoufian's 'at last' statement highlights the fact that mutations, random accidental changes in copying hereditary information, are overwhelmingly a downhill process. Geneticists in hospitals are all too familiar with the harm they cause in people who inherit their effects.
A-I Milano mutation: evidence for evolution?
21 February 2003

This week's selected feedback from J.R. is about another claimed beneficial mutation, since many people have an idea that this would disprove creation. Despite the rule against URLs in feedbacks, in this case it was unavoidable, and we thought that Dr Don Batten's scientific perspective would be helpful. Once again, the key is that evolution requires information-increasing mutations, while even the rare beneficial ones do not help evolution because they are losses of information. J.R. responded to the original answer with appreciation, showing that Dr Batten's explanation was helpful in clearing up this common misunderstanding.

I really hope [your ministry] does an article or report on this. So far, I haven't found a creationist group that will report anything on this complex. Seeing as how [CMI] reported on the fruit fly issue, with the evolutionists claiming this A-I Milano complex as vastly more important evidence than the fruit fly stuff, I hope to see a [CMI] report soon.

Here are two URLs:
Site #1) The Milano Mutation: A Rare Protein Mutation Offers New Hope for Heart Disease Patients1
Site #2) Defective but beneficial gene may bring about novel ways to clear arterial plaque buildup2
It would appear that the questioner is under the mistaken impression that beneficial mutations are a problem for creationists. Some creationists make this unfortunate error. The mutations Q&A section of our Web site clearly teaches that the issue is not whether the mutation is beneficial but whether it adds new genetic information (specified complexity). So it would have been clear that the A-I Milano mutation is not evidence for microbe-to-man evolution.

What has happened? One amino acid has been replaced with a cysteine residue in a protein that normally assembles high density lipoproteins (HDLs), which are involved in removing 'bad' cholesterol from arteries. The mutant form of the protein is less effective at what it is supposed to do, but it does act as an antioxidant, which seems to prevent atherosclerosis (hardening of arteries). In fact, because of the added -SH on the cysteine, 70% of the proteins manufactured bind together in pairs (called dimers), restricting their usefulness. The 30% remaining do the job as an antioxidant. Because the protein is cleverly designed to target 'hot spots' in arteries, and this targeting is preserved in the mutant form, the antioxidant activity is delivered to the same sites as the cholesterol-transporting HDLs. In other words, specificity of the antioxidant activity (for lipids) does not lie with the mutation itself, but with the protein structure, which already existed, in which the mutation occurred. The specificity already existed in the wild-type A-I protein before the mutation occurred.

Now in gaining an antioxidant activity, the protein has lost a lot of activity for making HDLs. So the mutant protein has sacrificed specificity. Since antioxidant activity is not a very specific activity (a great variety of simple chemicals will act as antioxidants), it would seem that the result of this mutation has been a net loss of specificity, or, in other words, information. This is exactly as we would expect with a random change.
Note that quantifying the amount of information is not as easy as just counting the number of functions or even the number of base pairs ('letters') in a gene. This is simplistic reasoning. It is firstly, but not only, a question of specificity. For example, if I said, 'Fix the Porsche', this conveys more information than 'Fix the automobile', although the latter has more letters. If I said, 'Fix the car and the truck', we now have two functions in this sentence, but does it contain more information than 'Fix the Porsche'? We are now comparing a command with two functions, but both of low specificity, with a command with one function and high specificity. In this case deciding which has the most information is not simple. This illustrates the importance of context and purpose (teleology). For example, if there were only one car to fix, a Porsche, 'Fix the automobile' would carry as much information as 'Fix the Porsche'. But if there were dozens of possible cars or trucks to fix, 'Fix the Porsche' would contain much more useful information than 'Fix the car and truck'. Dr Werner Gitt explores these issues in detail in his incisive In the Beginning Was Information.
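One crude way to put numbers on this intuition is Shannon's measure, on which specifying one option out of N equally likely options carries log2(N) bits. The fleet sizes below are invented purely for illustration, and such a measure captures only the 'narrowing-down' aspect of a command, not its meaning or purpose:

import math

# Shannon's measure: narrowing N equally likely options down to k carries log2(N/k) bits.
def bits(n_options, n_selected):
    return math.log2(n_options / n_selected)

fleet = 32   # hypothetical workshop with 32 vehicles, exactly one of them a Porsche
cars = 20    # of which 20 are cars (also a made-up number)

print(bits(fleet, 1))     # 'Fix the Porsche': 5.0 bits -- fully specific
print(bits(fleet, cars))  # 'Fix the car': ~0.68 bits -- many vehicles qualify
print(bits(1, 1))         # with only one vehicle present, either command selects 0 bits

Note how the same command carries different amounts of information depending on how many options the context offers, which is exactly the point made above about context and purpose.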
For more information on defining information mathematically, see How is information content measured? (somewhat technical). However, mathematical definitions of information only work in certain contexts (e.g. substrate specificity of enzymes). It would also be useful to study the article Is antibiotic resistance really due to increase in information? and the explanations about information content accompanying Dr Lee Spetner's graphs of the activity of the enzyme ribitol dehydrogenase. The Milano mutation seems to parallel the mutant enzyme, with a lower peak and broader spectrum, i.e. towards lower specificity, hence lower information.

Of course it remains to be seen if this mutation is completely beneficial. The fact that the persons with it are unable to produce normal levels of HDLs, which are known to perform a valuable role in moving 'bad' cholesterol, suggests that there could be a health downside to this mutation (as there is with sickle-cell anemia).
Apparently this mutation has only been seen in heterozygotes. That is, all those who have the mutation have a normal gene paired with the mutant gene. The homozygous state (both genes the same) could be lethal. This would then parallel sickle-cell anemia, which evolutionists often put up as an example of 'evolution in action'. Here the heterozygote has an advantage, but the homozygote is lethal. This cannot be an example of upward evolutionary progression since the mutant form can never take over the population; it will always be limited to a small percentage of individuals in the population. However, with the A-I Milano mutation, there are not yet many people with the mutation, so the chances of two people with the mutation marrying and having children so that a homozygote could be produced (1 in 4 of the children) would be very low; it probably has not happened yet. The jury remains out on whether a homozygote would be viable.3
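The '1 in 4' figure is ordinary Mendelian arithmetic, which can be checked by enumerating the equally likely allele combinations (a minimal sketch; M stands for the Milano allele and N for the normal allele):

from itertools import product

# Each heterozygous (MN) parent passes on M or N with equal probability.
children = [''.join(sorted(pair)) for pair in product('MN', repeat=2)]
print(children)                              # ['MM', 'MN', 'MN', 'NN']
print(children.count('MM') / len(children))  # 0.25, i.e. 1 in 4 homozygous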
Needless to say, if someone follows a healthy lifestyle, eats the right things (something like the food pyramid as recently revised by Harvard Medical School, although this could be improved further), exercises, maintains a healthy weight and does not abuse their body by smoking, the A-I Milano mutation will likely be of no use. Epidemiological studies show that heart disease can probably be avoided.
Dr Don Batten

For the original paper, see: Bielicki, J.K. and Oda, M.N., Apolipoprotein A-I(Milano) and apolipoprotein A-I(Paris) exhibit an antioxidant activity distinct from that of wild-type apolipoprotein A-I, Biochemistry 41(6):2089-96, 2002.

Note: the original posting of this response was modified following interaction with a Dr Steven Pirie-Shepherd, an evolutionist. We started out publishing Dr Pirie-Shepherd's objections with Dr Batten's responses, but he was clearly not happy with our publishing his letters after we had demonstrated the flaws in them; he kept coming back for more, continually changing his point of contention. If we had persisted with publishing this interaction as the back-and-forth continued, it would have become quite tedious to follow. Also, it became clear that Dr Pirie-Shepherd was willing to concede nothing and was using the opportunity merely to develop a propaganda piece to be published on a web site given to opposing the young age model. Consequently, our answer was modified in response to Pirie-Shepherd's claims, but his words were not included.

Dr Pirie-Shepherd has contributed his name to 'Project Steve' of the National Center for Science Education, an organization founded and still run by secular humanists (atheists), NOT to promote real science such as physics, chemistry and experimental biology, but solely to oppose the young age model and promote evolution from goo-to-you-via-the-zoo (see How Religiously Neutral are the Anti-Creationist Organizations?).
Special tools of life
by Dr Jean Lightner

Have you ever tried to do a job, but not had the right tools? Many tools (like screwdrivers, saws and wrenches) come in a variety of sizes and shapes so that different jobs can be done quickly and easily. Specialty tools are designed with unique shapes and bends so a person can do a special job very well. Trying to use a tool poorly suited for the job (like a slotted screwdriver on a Phillips screw) often leads to poor results and frustration.

Did you know that living things make their own tools? For example, they use enzymes to break down large molecules, some for parts (building blocks) and some for energy. Other enzymes construct things, such as cell walls. These enzymes are like awesomely designed specialty tools with a very specific size and shape. Many do only one job and do it very well. There are several thousand known enzymes that are used to do all the different jobs necessary to keep living things alive and well.1

How do living things make these tools? They use the information from their DNA (the molecule in the nucleus that contains all the programs a cell needs to function) and materials from the world around them. Since it would be wasteful for a creature to constantly be producing an enzyme even when it wasn't needed, the DNA also contains an ON/OFF switch (repressor gene). This is normally switched OFF until the enzyme is needed.2
There are two competing worldviews as to how this information got on the DNA. Creationists believe that the designer put the information there when he created life. Evolutionists believe that it got there by mistake; actually by a lot of mistakes. That's right! Before a cell divides, it copies the DNA so each new cell has a copy. Sometimes there is a copying error, called a mutation, that takes place. These mutations, according to the theory of evolution, are the source of new information. Natural selection then supposedly selects the useful mutations and eliminates the useless ones. This is how evolutionists account for new enzymes, hormones, organs, etc. as living things have (supposedly) developed upward from a single-celled beginning.

Most mutations that make a noticeable difference are harmful. A few might be considered beneficial, at least sometimes.3 But do these mutations add information to the DNA? A number of scientists thought so when they saw mutations in a bacterium that normally grows in the soil.4 This bacterium can grow well when it has one of several unusual sugars as an energy source: ribitol or D-arabitol.5 The scientists growing it in the lab tried giving it a very similar sugar, xylitol, as its only energy source. Xylitol is not normally found in the bacteria's environment. The bacteria have no enzymes specifically designed for the first step in breaking it down.

[Picture caption: The unsuspecting soil-dwelling bacteria were suddenly whisked off to the lab and fed nothing but xylitol.]

The wild-type of the bacteria couldn't grow. However, a mutant strain (X1) arose that could grow on xylitol, although very slowly. Later, a second mutant strain (X2) arose from X1 that grew faster on xylitol. From this, a third mutant strain (X3) developed that grew faster still. Evidence for evolution? A new enzyme evolving? Hardly.

Ribitol is broken down for energy by a series of steps, and each step is done by a different enzyme. The enzyme for the first step has a really cool name: ribitol dehydrogenase (RDH). If you're ever stuck trying to come up with a name for a pet, this might be just the name. RDH, the enzyme, is like a specialty tool that is designed to take apart ribitol. Although xylitol is very similar to ribitol, RDH doesn't take it apart very easily. When it does finally get it apart, the bacteria has all the right enzymes to break it down the rest of the way. The X1 was able to grow because the mutation destroyed the ON/OFF switch for RDH. Therefore, RDH was constantly produced, and the large amounts of this enzyme broke down the xylitol that was able to get into the cell.
The second mutation affected the RDH enzyme and changed its shape slightly. This new shape caused RDH to be less effective on ribitol, but more effective on xylitol and L-arabitol (the mirror image of D-arabitol), as shown in the accompanying graph. This means RDH in X2 was less specific in its action. Is it beneficial for a mechanic to have bolts randomly loosened throughout the car when he wants to remove the starter? There are plenty of acids and bases that can take lots of molecules apart very well. However, living things use tools that are more specific so that important parts of the cell aren't dismantled. A loss of specificity is a loss of information and usually not beneficial. Nevertheless, since the scientists insisted on growing the bacteria on only xylitol, the mutation was beneficial in this case. It allowed X2 to grow about 2 times faster than X1, which was still slow, but it did grow faster.

[Graph caption: A new enzyme, or a defective one?6 Living things use enzymes that are usually very specific. Ribitol dehydrogenase (RDH) is very specific in the wild-type bacteria (red), as seen by the tall spike in the graph. The mutant (X2, shown in green) has an enzyme that has lost some of its specificity, as seen by a less pronounced peak and higher base. If the theory of evolution were true, there should be many examples of new enzymes developing with increasing specificity. So far we know of none.]

The third mutation allowed more xylitol into the bacteria cell. Normally cells are very picky about what they let in. Some things that are important to the cells are pulled in. This bacteria has an enzyme that brings D-arabitol into the cell. When xylitol is present, it is brought in as well because it is so similar. This enzyme also has an ON/OFF switch in the DNA which is kept switched OFF unless D-arabitol is available. The third mutation destroyed this ON/OFF switch and xylitol ended up with a free ride into the cell. This enabled the X3 mutant to grow about twice as fast as X2.

If the theory of evolution were true, each step, on average, would add new information. This would explain the tremendous amount of information in bass, bison and birds compared to bacteria. In the length of time mankind has been studying genetics and mutations, many examples of information increase should have been documented. Yet, none have been found. In the example discussed here, the X3 mutant still only grew about half as well on xylitol as the wild-type did on ribitol. If the mutants were put back into a soil environment, they would quickly die out because they are less fit. Many otherwise intelligent people, in their quest for knowledge, have missed the obvious. Anyone who looks at a well designed tool should know that someone made it.
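The shape of that graph (a tall, narrow peak for the wild-type versus a lower, broader one for X2) can be mimicked with a toy calculation; the activity numbers below are invented for illustration and are not the measured values behind the published graph:

# Toy activity profiles across three substrates (arbitrary units, invented for illustration).
wild_type = {'ribitol': 100, 'xylitol': 5, 'L-arabitol': 2}
mutant_x2 = {'ribitol': 60, 'xylitol': 20, 'L-arabitol': 15}

def specificity(profile):
    """Crude score: fraction of total activity falling on the best substrate."""
    return max(profile.values()) / sum(profile.values())

print(round(specificity(wild_type), 2))  # 0.93 -- a tall, narrow peak
print(round(specificity(mutant_x2), 2))  # 0.63 -- lower peak, broader base

On any such score, the mutant's gain on xylitol comes at the cost of a flatter, less specific profile overall, which is the article's point about specificity being lost rather than gained.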
Mutations: A losing change

Although it is extremely rare for a mutation to be beneficial, the following mutations were, as long as the bacteria were restricted to xylitol.

Wild-type. Loss: none. Benefit: grows well in soil.
Mutant X1. Loss: ON/OFF switch for RDH destroyed; RDH constantly produced even when ribitol is absent (waste of resources; inefficient). Benefit: RDH available in large amounts; xylitol broken down slowly; X1 grows slowly on xylitol.
Mutant X2. Loss: RDH shape changed; loses specificity for ribitol (grows slower than wild-type on ribitol). Benefit: RDH works better on xylitol; X2 grows faster on xylitol than X1.
Mutant X3. Loss: ON/OFF switch for enzyme that brings in D-arabitol destroyed; enzyme constantly produced, even when D-arabitol absent (waste of resources). Benefit: xylitol gets a free ride into the cell; more xylitol available in cell; X3 grows faster on xylitol than X2.
Mutations: evolution's engine becomes evolution's end!
by Alex Williams

In neo-Darwinian theory, mutations are uniquely biological events that provide the engine of natural variation for all the diversity of life. However, recent discoveries show that mutation is the purely physical result of the universal mechanical damage that interferes with all molecular machinery. Life's error correction, avoidance and repair mechanisms themselves suffer the same damage and decay. The consequence is that all multicellular life on earth is undergoing inexorable genome decay. Mutation rates are so high that they are clearly evident within a single human lifetime, and all individuals suffer, so natural selection is powerless to weed them out. The effects are mostly so small that natural selection cannot 'see' them anyway, even if it could remove their carriers. Our reproductive cells are not immune, as previously thought, but are just as prone to damage as our body cells. Irrespective of whether creationists or evolutionists do the calculations, somewhere between a few thousand and a few million mutations are enough to drive a human lineage to extinction, and this is likely to occur over a time scale of only tens to hundreds of thousands of years. This is far short of the supposed evolutionary time scales.
Mutations destroy

Ever since Hugo de Vries discovered mutations in the 1890s they have been given a central role in evolutionary theory. De Vries was so enamoured with mutations that he developed an anti-Darwinian saltationist theory of evolution via mutation alone.1 But as more became known, mutations of large effect were found to be universally lethal, so only mutations of small effect could be credibly considered as of value to evolution, and de Vries' saltationist theory waned. When the Neo-Darwinian Synthesis emerged in the 1930s and 1940s, mutations were said to provide the natural variations that natural selection worked on to produce all new forms of life.

However, directly contradicting mutations' central role in life's diversity, we have seen growing experimental evidence that mutations destroy life. In medical circles, mutations are universally regarded as deleterious. They are a fundamental cause of ageing,2,3 cancer4,5 and infectious diseases.6 Even among evolutionary apologists who search for examples of mutations that are beneficial, the best they can do is to cite damaging mutations that have beneficial side effects (e.g. sickle-cell trait,7 a 32-base-pair deletion in a human chromosome that confers HIV resistance to homozygotes and delays AIDS onset in heterozygotes,8 the CCR5-delta32 mutation,9 animal melanism,10 and stickleback pelvic spine suppression11). Such results are not at all surprising in the light of the discovery that DNA undergoes up to a million damage and repair events per cell per day.12
Mutation physics
Neo-Darwinian theory represents mutations as uniquely biological events that constitute the engine of biological variation. However, now that we can see life working in molecular detail, it becomes obvious that mutations are not uniquely biological events; they are purely physical events. Life works via the constant (often lightning-fast) movement of molecular machinery in cells. Cells are totally filled with solids and liquids; there are no free spaces. The molecular machines and the cell architecture and internal structures are made up of long-chain organic polymers (e.g. proteins, DNA, RNA, carbohydrates, lipids) while the liquid is mostly water. All forms of movement are subject to the laws of motion, yet the consequences of this simple physical fact have been almost universally ignored in biology.

Newton's first law of motion says that a physical body will remain at rest, or continue to move at a constant velocity, unless an external force acts upon it. Think of a message molecule that is sent from one part of a cell to another. Since the cell is full of other molecules, with no empty spaces, the message molecule will soon hit other molecules and either slow down or stop altogether. This is the universal problem known as friction. Friction events can result from many causes, but can be crudely divided into two types: one is referred to as 'ploughing' and the other is 'shearing'. Ploughing involves the physical displacement of materials to facilitate the motion of an object, while shearing arises from the disruption of adhesive interactions between adjacent surfaces.13

Molecular machines in cells owe a great deal of their structure to hydrogen bonds, but these are rather weak and fairly easily broken. For example, most proteins are long, strongly-bonded chains of amino acids, but these long chains are coiled up into 3-dimensional machine components, and the 3-dimensional structures are held together by hydrogen bonds.14 When such structures suffer mechanical impacts, the transfer of momentum can distort or break the hydrogen bonds and critically damage the molecule's function.
The inside of a cell has a density and viscosity somewhat similar to yogurt (figure 1). The stewed fruit (dark colour) added to the yogurt during manufacture can be seen swirling out into the white yogurt. The fruit has not continued to disperse throughout the yogurt; it was completely stopped by the initial friction. This is like what happens in a cell: any movement is quickly dampened by friction forces of all kinds coming from all directions.

Figure 1. A transparent carton of fruit yogurt illustrates how friction in the viscous fluid stopped the motion initiated by mixing the fruit (dark colour) with the yogurt (white colour).
How do cells cope with this friction? In at least five different ways. First, there are motor proteins available all over the cell that attach to mobile molecules and carry them along the filaments and tubules that make up the cytoskeleton of the cell. Second, these motor proteins are continually re-energized after friction collisions by energy inputs packaged in the form of ATP molecules. Third, there are 'address labels' attached to mobile molecules to ensure they are delivered to the correct destination (friction effects continually divert mobile molecules from their course). Fourth, thin films of water cover all the molecular components of cells and provide both a protective layer and a lubricant that reduces the frequency and severity of friction collisions. Fifth, there is a wide range of maintenance and repair mechanisms available to repair the damage that friction causes.

The friction problem, and the damage that results from it, is orders of magnitude greater in cells than it is in larger mechanical systems. Biomolecules are very spiky objects with extremely rough and highly adhesive surfaces. They cannot be manufactured and honed to the smoothness that we achieve in our vehicle engine components such as pistons and flywheel pivots, nor can ball-bearings be inserted to reduce the surface contact area, such as we do in wheel axles. As a biological example, consider the rotary motor that drives the bacterial flagellum. The major wear surfaces are on the rotor (attached to the flagellum) and the stator (the housing for the rotor, attached to the cell wall). The stator consists of 22 molecules, set in 11 pairs. The wear rate is so great that the average residence time for a stator molecule in the stator is only about 30 seconds.15 The cell's maintenance system keeps a pool of about 200 stator molecules in reserve to cope with this huge turnover rate.

Finding suitable lubricants to overcome friction is a major focus in the nanotechnology industry. A special technique called friction force microscopy has been developed to quantitatively evaluate potential lubricants.16
both predict and explain the high rate of molecular damage that we observe in DNA. Between 50% and 80% of the DNA in a
cell is continually consulted for the information necessary for everyday metabolism. This consultation requires numerous
steps that each involve physical deformation of the DNAmoving around within the nucleus, winding and unwinding of the
chromatin structures, unzipping the double-helix, binding and unbinding of the transcription machinery, re-zipping the

double-helix, rewinding the chromatin structures and shuffling around within the nucleus. Each step of motion is powered by
ATP discharges and inevitably causes mechanical damage among the components. While most of this damage is repaired,
the repair mechanisms are not 100% perfect because they suffer mechanical damage themselves.17
Mutations rapidly destroy
Within neo-Darwinian theory, natural selection is supposed to be the guardian of our genomes because it weeds out unwanted deleterious mutations and favours beneficial ones. Not so, according to genetics expert Professor John Sanford.18 Natural selection can only weed out mutations that have a significant negative effect upon fitness (number of offspring produced). But such fitness is affected by a huge variety of factors, and the vast majority of mutations have too small an effect for natural selection to be able to detect and remove them.

Furthermore, if the average mutation rate per person per generation is around 1 or more, then everyone is a mutant and no amount of selection can stop degeneration of the whole population. As it turns out, the mutation rate in the human population is very much greater than 1. Sanford estimates at least 100, probably about 300, and possibly more.
All multicellular life suffers
Two recent reviews of the mutation literature not only confirm Sanford's claims, but extend them to all multicellular life.

In a review of the distribution of fitness effects (DFE) of mutations,19 the authors are unable to give any examples of beneficial mutations for humans. In their calculations regarding the rate of deleterious mutations (MD) and neutral mutations (MN), they use the equalities MD = 1 − MN and MN = 1 − MD, which both imply that the rate of beneficial mutations is zero. They do give a few non-zero values for beneficial mutation rates in some experimental organisms, but qualify these results by noting the interference of other variables.

In a review of mutation rate variations in eukaryotes,20 the authors admit that all multicellular organisms are undergoing inexorable genome decay from mutations because natural selection cannot remove the damage.21 Their Box 2 and Table 1 list deleterious mutation rates for a wide range of multicellular organisms, noting they are all underestimates, with the possible exception of that for the fruit fly Drosophila melanogaster, with a value of 1.2. The value given for humans is ~3.

Thus, all multicellular life on earth is undergoing inexorable genome decay because the deleterious mutation rates are so high, the effects of most individual mutations are so small, there are no compensatory beneficial mutations, and natural selection is ineffective in removing the damage.
The wheels have come off the neo-Darwinian juggernaut!
How long to extinction?
How long could multicellular life survive in the face of universal genetic degradation? This is a very important question, and I
will attempt to answer it by using several different lines of evidence.
Human ageing and cancer
We have recently discovered that there is a common biology in cancer and ageing: both are the result of accumulating molecular damage in cells.22 This confirms the arguments outlined above, that for purely physical reasons molecular machinery suffers extremely high damage rates, clearly evident within the lifespan of a single human. Every cell has a built-in time clock to limit this damage and minimize the chance of it becoming cancerous. At every cell division, each telomere (the caps on both ends of a chromosome that stop the double-helix from unravelling) is shortened by a small amount, until they reach the Hayflick Limit, discovered in 1965 to be a little over 50 cell divisions. The cells then stop dividing, and they are dismantled and their parts recycled.

By adding the enzyme telomerase, the telomere shortening problem can be circumvented, but that then exposes the cell to a greater risk of becoming cancerous because of accumulating damage elsewhere in the cell. The overall balance between protection from damage and the need for longevity determines fitness (reproductive success) and life span.23 The body's normal reaction to increasing genome damage is to kill off the damaged cells via programmed senescence (of which the telomere clock with its Hayflick limit is but one part). But cells become malignant (cancerous) when mutation disables the senescence mechanism itself, which then enables the damaged cells to proliferate without limit.22 The Hayflick limit of around 50 cell divisions for humans seems to provide the optimum balance.

Fifty human generations of 20 years each gives us only 1,000 years as a timescale over which a human lineage would begin to experience a significant mutation load in its genome. This is alarmingly rapid compared with the supposed evolutionary time scale of millions and billions of years.
Reproductive cells
Figure 2. Schematic representation of human life expectancy, male fertility, and risk of fetal abnormality with mother's age.

Despite the protective Hayflick limit on cell divisions and life expectancy, very significant molecular damage accumulates in humans even during the most productive years of life. Mutations do even more damage than the Hayflick limit and associated cancer rates suggest.

Ever since August Weismann published The Germ-Plasm: A Theory of Heredity24 in 1893, a discrete separation has been shown to exist between body cells (the soma) and germ-line cells (germplasm). Germ-line cells were thought to be more protected from mutation than other body cells. However, another recently discovered cause of ageing is that our stem cells grow old as a result of heritable DNA damage and degeneration of their supporting niches (the special nest areas in most organs and tissues of the body where stem cells grow and are nurtured and protected). The telomere shortening mechanism, intended to reduce cancer incidence, appears also to induce the unwanted side-effect of a decline in the replicative capacity of certain stem-cell types with advancing age. This decreased regenerative capacity has led to a stem-cell hypothesis for human age-associated degenerative conditions.25

Human fertility problems suggest that the decline in niche protection of stem cells also applies to our gametes (eggs and sperm). For males, fertility (as measured by sperm count, sperm vigor and chance of conception) begins to decline significantly by age 40, and the rate of certain paternal-associated birth defects increases rapidly during the 30s (figure 2).26 For females, the chance of birth defects increases rapidly from around the mid-30s, particularly because of chromosome abnormalities (figure 2). In the middle of the most productive part of our lives, our bodies are therefore showing clear evidence of decline through accumulation of molecular damage in our genomes.
Do germ-line cells really suffer less damage?
When DNA was discovered to be the carrier of inheritance, Weismann's germ-plasm theory gave rise to the 'immortal strand' hypothesis. When the DNA of an embryonic stem cell replicates itself, it was thought that the old strand would remain with the self-renewing mother stem cell, while the newly constructed daughter strand proceeds down the path of differentiation into a body cell. In this way, the old strand would remain error free, because it has not suffered any copying errors, and thus becomes effectively immortal. However, a research team at the Howard Hughes Medical Institute recently tested this theory using the stem cells that produce blood, and found that they segregate their chromosomes randomly.27 That is, the immortal strand hypothesis is wrong. If stem cells are not given this kind of preferential treatment, then it is reasonable to conclude that germ-line cells are also subject to the same molecular damage as somatic cells. This is confirmed by the observation that human fertility exhibits damage long before age-related diseases take over.
A single human lifetime is enough to show very significant mutation damage, even in our reproductive cells.
Haldane's dilemma
The severe contradictions that these findings pose for neo-Darwinian theory corroborate what has become known as Haldane's dilemma. J.B.S. Haldane was one of the architects of neo-Darwinism who pioneered its application to population biology. He realized that it would take a long time for natural selection to fix an advantageous mutation in a population (fixation is when every member has two copies of an allele, having inherited it from both mother and father). He estimated that for vertebrates, about 300 generations would be required, on average, where the selective advantage is 10%. In humans, with a 20-year generation time and about 6 million years since our last common ancestor with the chimpanzee, only about 1,000 such advantageous mutations could have been fixed. Haldane believed that substitution of about 1,000 alleles would be enough to create a new species, but it is not nearly enough to explain the observed differences between us and our closest supposed relatives.

The measured difference between the human and chimpanzee genomes amounts to about 125 million nucleotides, which are thought to have arisen from about 40 million mutation events.28 If only 1,000 of these mutations could have been naturally selected to produce the new (human) species, it means the other 39,999,000 mutations were deleterious, which is completely consistent with the reviews showing that the vast majority of mutations are deleterious. Consequently, we must have degenerated from the apes, which is an absurd conclusion.

According to Kirschner and Gerhart's facilitated variation theory,29 life consists of two main components: conserved core processes (the structure and machinery in cells) and modular regulatory processes (the signalling circuits and switches that operate the machinery and provide a built-in source of natural variation). The 40 million mutation differences between humans and chimps are therefore much more reasonably explained as 40 million modular differences between the design of chimps and the design of humans.
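To make the arithmetic explicit, here is a minimal Python sketch of the numbers just quoted; the generation time, fixation cost and mutation count are taken directly from the text above.

# Arithmetic behind Haldane's dilemma as summarized above.
years, gen_time, gens_per_fixation = 6_000_000, 20, 300

generations = years // gen_time                 # 300,000 generations available
fixations = generations // gens_per_fixation    # ~1,000 substitutions fixable
leftover = 40_000_000 - fixations               # differences selection cannot supply
print(generations, fixations, leftover)         # 300000 1000 39999000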
Quantitative estimates of time to extinction
There are a number of different ways to estimate the time it would take for relentlessly accumulating mutations to send our
species to extinction.
Binomial estimates
Some very rough estimates can be derived from the Binomial distribution, which can predict the likelihood of multiple
mutations accumulating in an essential genetic functional module. A binomial model of a mutating genome could consist of
the cells DNA being divided into N functional modules, of which Ne are essential; that is, the lineage fails to reproduce if any
of the essential modules are disabled. For any given mutational event, p = 1/N is the probability of being hit, q is the
probability of being missed, and p q = 1.
What is the likely value of N? We can derive two estimates from the knowledge that there are about 25,000 genes, plus the discovery from the pilot study report of the ENCODE project that virtually the whole human genome is functional.30

For the first estimate, the average protein contains a few hundred amino acids and each amino acid requires three nucleotides of code, so the average gene would take up about 1,000 nucleotides of exon space (an exon is the protein-coding part of a gene). There are about 3 billion nucleotides in the whole human genome, so if we assume that the average protein represents an average functional unit then N = 3 million.

The second estimate comes from the ENCODE report that gene regions produce on average 5 RNA transcripts per nucleotide, and the untranslated regions produce on average 7 RNA transcripts per nucleotide. There are about 33 times as many nucleotides in the untranslated regions as in the genic regions. Assuming that transcript size is approximately equal in each region, there are 25,000 × 5 = 125,000 gene transcripts and 25,000 × 33 × 7 = 5,775,000 untranslated transcripts, making N = 5,900,000 in total. Our two estimates of N are therefore 3 to 6 million in round figures.

What is the likely value of Ne? Experiments with mice indicate that 85% of genes can be knocked out one at a time without lethal effects.31 This is due to the robustness and failure-tolerance, through fallback processes, built into the genomic designs. That means disabling any one of the remaining 15% of genes will be fatal. Multiple mutations occur, however, so the likely value of Ne when exposed to multiple mutations will be much higher than 15%. The maximum possible value is 100%. In a study of 2,823 human metabolic pathways, 96% produced disease conditions when disrupted by mutation,32 so if we take an average between this value and the minimum 15% then we get about 60% of functional units being essential.

How many random mutations are required on average to disable an essential functional module? In rare cases, a single mutation is enough to disable a person's ability to reproduce. A two-hit model is common in cancer. In a study of cell signalling networks, these two hits usually knocked out: (i) the programmed death system for dealing with damaged (cancerous) cells, and (ii) the normal controls on cell proliferation, so the damaged cancer cells can proliferate without limit. The proportion of cancer-associated genes was also found to increase with the number of linkages between genes. When a healthy gene is linked to more than 6 mutated genes, ~80% of all genes in the network are cancerous. Extrapolating from this, we find that by the time a normal gene is linked to about 10 mutated genes, the whole network has become cancerous.33

Almost 70% of known human genes can be causal agents of cancer when mutated.34 Cancers can result from as little as a single mutation in a stem cell, or multiple mutations in somatic cells.35 The minimum possible value of 1 is known to be rare, so the more common occurrence of the 2-hit model makes it a reasonable best-estimate minimum. But it may require 10 modules to receive two hits each for the whole network to become dysfunctional. The maximum number of hits required to disable a single module may be 100 or more, but if the average functional module only contains 1,000 nucleotides then this figure, at 10% of the whole, seems rather large. An order-of-magnitude average is perhaps more likely to be 10 random mutations per functional module.

To provide some context for these estimates, recent work shows that the cell-cycle checkpoint damage repair system is activated when 10 to 20 double-strand breaks accumulate in a cell undergoing division.36 That is, life will tolerate only 10 to 20 DNA breaks per cell before it starts repair work, whereas we are examining scenarios in which there are thousands and millions of damage events per cell. Our numbers are clearly up in a region where the cell's repair mechanisms are working at their hardest.

What then is the likelihood of accumulating either 2 hits in 10 modules, or 10 hits in one module, in any one of either 15% or 60% of the 3 to 6 million functional modules? The binomial distribution in Microsoft Excel was used to make the following calculations, making the further assumption that the likelihood of the unit being a critical one must exceed 50% for extinction to be more likely than not in the next generation.

Assuming 60% essentiality, only one functional module needs to be disabled for the probability of its essential status to exceed 50%. For the 2-hit model, about 6,000 to 12,000 mutations are required to disable ten of the 3 to 6 million functional modules. For the 10-hit model, 3 to 6 million mutations are required to disable one functional module. Assuming 15% essentiality, four modules need to be disabled before the probability of at least one of them being essential exceeds 50%. For the 2-hit model, 250,000 to 500,000 mutations are required to disable ten modules with four mutations each among the 3 to 6 million functional modules. For the 10-hit model, 3.7 to 7.5 million mutations are required to disable four functional modules.

If every individual produces 100 new mutations every generation (assuming a generation time of 20 years) and these mutations are spread among 3 to 6 million functional modules across the whole genome, then the average time to extinction is:
1,200 to 2,400 years for the 2-hits in 10 modules model and 60% essentiality
50,000 to 100,000 years for the 2-hits in 10 modules model and 15% essentiality
600,000 to 1,200,000 years for the 10-hit model and 60% essentiality
740,000 to 1,500,000 years for the 10-hit model and 15% essentiality.
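As a rough cross-check, the following Python sketch re-creates this style of calculation (the article used Excel's binomial functions; the search procedure and starting point below are my own assumptions, and the results agree with the figures above only in order of magnitude). It finds the mutation count at which the expected number of disabled modules reaches the stated target, then converts to years at 100 new mutations per 20-year generation.

# Sketch of the binomial 'time to extinction' estimate described above.
from math import comb

def expected_disabled(M, N, k):
    """Expected modules with >= k hits after M mutations spread uniformly over N modules."""
    p = 1.0 / N
    p_fewer = sum(comb(M, i) * p**i * (1 - p)**(M - i) for i in range(k))
    return N * (1 - p_fewer)

def mutations_needed(N, k, target):
    """Coarse search for the smallest M with expected disabled count >= target."""
    M = 1_000                                  # assumed starting point for the search
    while expected_disabled(M, N, k) < target:
        M = int(M * 1.1)
    return M

for N in (3_000_000, 6_000_000):               # the two module-count estimates above
    M2 = mutations_needed(N, k=2, target=10)   # 2-hit model, ten modules disabled
    M10 = mutations_needed(N, k=10, target=1)  # 10-hit model, one module disabled
    print(N, M2, M2 * 20 // 100, "years;", M10, M10 * 20 // 100, "years")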
Truncation selection
Evolutionary geneticist Dr James Crow argued that humans are probably protected by truncation selection.26 Truncation occurs when natural selection preferentially deletes individuals with the highest mutation loads. Plant geneticist John Sanford put Crow's claims to the test by developing a computer simulation of truncation. His assumptions were: 100 individuals in the population, 100 mutations per person per generation, 4 offspring per female, 25% non-genetic random deaths per generation, and 50% selection against the most mutant offspring per generation. He assumed an average fitness loss per mutation of 1 in 10,000. His species became extinct in only 300 generations. With a generation time of 20 years this corresponds to 6,000 years.37

Sanford's assumptions are somewhat unrealistic, but there are other ways to approach the problem. Mutations are pure chance events that follow a Poisson distribution, and this behaves like the normal curve when the average expected value is greater than about 30.38 In a Poisson distribution, the variance is equal to the average expected value, and the standard deviation is the square root of the variance. When the expected average value is 100, the standard deviation will be 10. The normal curve now tells us the following:
Half the people will suffer about 100 mutations or more, and half the people will suffer about 100 mutations or less.
About 84% of people will suffer 110 mutations or less, and so the remaining 16% of people will suffer 110 or more mutations.
Alternatively, about 16% of people will suffer 90 or less.
About 97.7% of the population will experience 120 mutations or less, and the remaining 2.3% will suffer 120 mutations or
more. Alternatively, 2.3% will suffer 80 or less.
About 99.9% of the population will suffer 130 mutations or less, and the remaining 0.1% will suffer 130 or more mutations.
Alternatively, 0.1% will suffer 70 or less.
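These percentages can be checked with a few lines of Python, using the normal approximation just described (mean 100, standard deviation 10):

# Fractions of the population at or below each mutation-count cut-off,
# under the Normal(100, 10) approximation to Poisson(100).
from statistics import NormalDist

counts = NormalDist(mu=100, sigma=10)
for cutoff in (110, 120, 130):
    below = counts.cdf(cutoff)
    print(f"{cutoff}: {below:.1%} at or below, {1 - below:.1%} above")
# 110: ~84.1% / 15.9%;  120: ~97.7% / 2.3%;  130: ~99.9% / 0.1%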
If we remove the most mutant (those above 130 mutations per person per generation), then we will only remove 0.1% of the population and it will make virtually no difference. If we removed the most mutant 50% of the population, that would not solve the problem either, for two reasons. First, the great majority of the remaining people still suffer between 70 and 100 mutations per person per generation, far above the value of 1 that ensures inexorable decline. Second, removing half the population each generation would drive it to extinction in a few dozen generations.
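As an illustration of why even severe truncation fails, here is a toy Python re-implementation under assumptions like Sanford's above. This is not his actual program: the exact culling scheme (keeping the least-mutated survivors down to the starting population size) is my assumption, and it collapses within a couple of hundred generations, the same order as the result quoted above.

# Toy truncation-selection run: 100 new mutations per offspring, 25% random
# deaths, the least-mutated survivors kept, fitness loss 1e-4 per mutation.
import random

POP, NEW_MUTS, LOSS = 100, 100, 1e-4

def generations_to_collapse():
    load = [0.0] * POP                          # accumulated mutations per individual
    for gen in range(1, 100_000):
        kids = [m + random.gauss(NEW_MUTS, NEW_MUTS**0.5)
                for m in load for _ in range(2)]        # ~4 offspring per female
        random.shuffle(kids)
        kids = kids[: int(len(kids) * 0.75)]            # 25% non-genetic deaths
        kids.sort()
        load = kids[:POP]                               # truncate the most mutated
        if 1 - LOSS * (sum(load) / len(load)) <= 0:     # mean fitness hits zero
            return gen

print(generations_to_collapse())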
Table 1. Estimated number of generations and years to extinction for populations of various sizes, when fitness declines by 1.5% in each generation.

None of the above models include the effect of synergistic epistasis (if one gene is mutated, its impact is ameliorated by the coordinated activity of other genes) or of population size. We can include these by using Crow's estimate that the fitness of the human race is currently degenerating at a rate of about 1 to 2% per generation. If we use an average value of 1.5% then only 98.5% of the next generation will produce reproductively viable offspring. The next generation after that will only have 98.5% of those survivors able to produce reproductively viable offspring, and so on. For any given stable population size N, the size of the next generation that can produce reproductively viable offspring will be 98.5% of N, and for any given number of generations G, the number of survivors able to produce reproductively viable offspring will be (98.5%)^G of N. Table 1 shows the approximate numbers of generations after which the population degenerates to extinction (only one individual is left, so breeding cannot continue). No population can sustain a continual loss of viability of 1.5%.

The above model assumes that right from the beginning there will be a 1.5% loss of fitness each generation. However, the binomial simulations earlier showed that individuals can tolerate somewhere between a few thousand and a few million mutations before the damage critically interferes with their ability to reproduce. This means that synergistic epistasis is a real phenomenon: life is robust in the face of mutational assault. Instead of the immediate loss of 1.5% every generation, the general population would remain apparently healthy for a much longer time before the damage became apparent. However, the rate at which mutations accumulate will remain the same because the cause remains the same: mechanical damage. This means that most people will be apparently healthy, but then approach the threshold of dysfunction over a much shorter period, creating a population crash rather than a slow decline. Either way, however, the time scales will be approximately the same because the rate of damage accumulation remains approximately the same.
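The rule behind Table 1 is easy to re-create; a minimal Python sketch (extinction when (98.5%)^G of the starting population falls below one individual; the population sizes chosen here are merely illustrative assumptions):

# Generations (and years, at 20 years per generation) until N * 0.985**G < 1.
from math import ceil, log

for N in (1_000, 1_000_000, 7_000_000_000):    # illustrative population sizes
    G = ceil(log(N) / -log(0.985))             # solve N * 0.985**G = 1 for G
    print(f"N = {N:,}: ~{G} generations (~{G * 20:,} years)")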
Summary
Mutations are not uniquely biological events that provide an engine of natural variation for natural selection to work upon
and produce all the variety of life. Mutation is the purely physical result of the all-pervading mechanical damage that
accompanies all molecular machinery. As a consequence, all multicellular life on earth is undergoing inexorable genome
decay because the deleterious mutation rates are so high, the effects of the individual mutations are so small, there are no
compensatory beneficial mutations, and natural selection is ineffective in removing the damage. So much damage occurs that it is clearly evident within a single human lifetime. Our reproductive cells are not immune, as previously thought, but are just as prone to mechanical damage as our body cells. Somewhere between a few thousand and a few million mutations are
enough to drive a human lineage to extinction, and this is likely to occur over a time scale of only tens to hundreds of
thousands of years. This is far short of the supposed evolutionary time scales. Like rust eating away the steel in a bridge,
mutations are eating away our genomes and there is nothing we can do to stop them.
Evolution's engine, when properly understood, becomes evolution's end.
Meiotic recombination: designed for inducing genomic change
by Jean K. Lightner
Creationary biologists have recognized that the diversity seen within created kinds today cannot be adequately explained by the shuffling of pre-existing gene versions (alleles) and accidental errors that accumulate within the genome.1 Within the context of creation, the development of genetic diversity has been a means by which the designer has enabled his creatures to adapt to the many different environmental niches they occupy today. Further, it has played an important role in adding variety, beauty, and productivity in various domesticated plants and animals.2

There is certainly no logical reason to believe that unguided chance processes can bring about a functional genome.3 Neither is there sound reason to believe that accidental changes to the genome are a productive source of useful genetic diversity. Logically, therefore, the genome must contain biological information that allows it to induce variation from within.4 One mechanism involved in this is meiotic recombination.5 Continued scientific research is elucidating some amazing details of this process.

Meiosis is a special type of cell division necessary for the formation of gametes (eggs or sperm) so sexual reproduction can take place. In most plants and animals, chromosomes come in pairs (homologs, one derived from each parent), but gametes only carry one of each homolog. Early in meiosis, each chromosome must be drawn to its homolog and stably pair. Then each homolog will be pulled in the opposite direction so that the two cells that form during the division will have exactly one of each homolog.
Meiotic recombination is no accident
Meiosis was designed in a way that naturally tends to increase diversity. In order for the chromosomes to stably pair, recombination occurs between the homologs. The process is initiated by an enzyme which cuts the DNA on one homolog, forming a double-stranded break. Then each side of the break is resected in one direction. This leaves two tails, which are important in repairing the break (figure 1).
There are several pathways by which the break can be repaired. The best known resolution of the break is called crossing over. For this to occur, both of the tails must invade the homolog to form a double Holliday junction (dHJ). DNA synthesis occurs, extending these tails. Then, depending on which enzymes are used to cut this structure apart, the distal ends of the chromosomes are swapped. This swapping between homologs is important in helping to shuffle alleles, which allows for new combinations that may be advantageous. The method of DNA repair described above is known as double-stranded break repair (DSBR). It does not always result in crossing over. A different enzyme can be used to cut the dHJ at a different location, and gene conversion will result instead. In gene conversion, a segment from one homolog is copied onto the other. A second pathway for resolving double-stranded breaks is called synthesis-dependent strand annealing (SDSA). In this circumstance only one tail invades the intact homolog, and gene conversion is the result.6
Meiotic recombination is mutagenic
Technically, swapping portions of a chromosome and gene conversion are mutations when they alter the nucleotide sequence. Other mutations can also occur during the repair of double-stranded breaks, and these appear to be more common with gene conversion. One study in yeast revealed a mutation rate 1,000 times higher during gene conversion than the normal spontaneous mutation rate for that locus. Most mutations were base pair substitutions. About 40% of the mutations were attributable to some form of template switching. In yeast strains with a proofreading defect in a DNA polymerase, template switch mutations were absent.7 This suggests that template switching is a complex, enzyme-driven process.

There is a bias to where meiotic recombination occurs. In a study of Drosophila, crossing over tended to occur in specific hot spots, but these were not influenced by whether or not the region was genic. Gene conversion had a more uniform distribution, was more common among genic sequences, and was seen where crossing over was rare or absent. The authors emphasized the importance of having information on rates of recombination to include in population genetics models.8 Studies in plants indicate that a variety of genetic and epigenetic factors influence the frequency of crossing over.9

There are several other pathways by which double-stranded breaks can be repaired. One of the most interesting and mutagenic is break-induced replication (BIR). It has been shown to produce complex rearrangements including copy number variation (CNV) and non-reciprocal translocations. These often involve multiple rounds of template switching. Specific endonucleases are necessary for proper BIR; an absence of these endonucleases has been shown to significantly reduce template switching.10
Significance of mutations
At times mutations are explained as the result of accidents which introduce errors into the DNA sequence. The concept of non-directed change is foundational in the standard evolutionary model. Logically, accidental changes in a complex system should be consistently harmful to some degree. Creationists have pointed this out in emphasizing the implausibility of accidents in accounting for the complexity of life.
Figure 1. In meiotic recombination a double-stranded break is enzymatically induced and the ends are resected, forming tails. Repair of the break begins when one tail invades the corresponding region on the homolog and DNA synthesis takes place. From there several different pathways are possible. Crossing over can occur if the second tail also invades and a double Holliday junction forms. This pathway is called double-stranded break repair (DSBR). However, this pathway can have an alternative resolution, gene conversion (non-crossover). A second pathway is synthesis-dependent strand annealing (SDSA), which can also result in gene conversion.

However, when diversity is examined within a creation model, it is evident that significant diversity has arisen since the time of the Flood. In contrast to the notion that all mutations are harmful, the observed diversity does not appear to be typically harmful, and much is considered to be healthy. It has been pointed out that this useful diversity is not logically the result of accidents, but some designed mechanism(s) must be producing it.1

Several specific examples are worth noting. In a gene influencing coat colour, a pattern of in-frame indel
(insertion or deletion) mutations was noted across several unrelated kinds. These generally result in a black coat colour.
Statistically, only one in three indels should be in-frame. It does not appear that natural selection can explain this bias
toward in-frame indels, and so a designed mechanism was suggested as its source.11

Resistance to organophosphorus insecticides has been studied in sheep blowflies. There is a particular gene where specific mutations can confer resistance to one organophosphate or another. Resistance to one of these insecticides (malathion) was identified in pinned specimens that pre-dated the first use of that insecticide; therefore, selection would be a reasonable explanation for how it spread in the fly population. Resistance to a second insecticide (diazinon) appears to have arisen by mutation since the insecticide was introduced. This rapid appearance of resistance is quite impressive (though disheartening for those trying to get rid of this pest). In addition to this, flies have emerged that are resistant to both insecticides as a result of gene duplication (a form of CNV). It appears that such gene duplications have arisen at least three separate times in these flies, and always involve the resistant alleles.12

The point here is that the mutagenic nature of meiosis appears to provide a plausible mechanism for
inducing this type of variation within a creationary timeframe. The requirement of specific enzymes and the non-random pattern of change in meiotic recombination suggest that it could play a significant role in producing the observed useful genetic
diversity.
Gene conversion, a designed mechanism which can result in fixation of alleles
Gene conversion can lead to transmission distortion, a deviation from the expected ratio of alleles in the gametes. Studies in mice revealed an example of this due to a preferential induction of double-stranded breaks on one homolog, which yielded an over-transmission of the allele from the other. Given the distortion, population simulations predicted that the favoured allele would be fixed in the population in less than 1,200 generations.6

Transmission distortion is extremely significant. Most models attempting to explain the changes in allele frequency of a population assume that a heterozygous parent would have an equal chance of passing either of the alleles on to the offspring. The fixation of alleles within a population is generally attributed to natural selection, although genetic drift is also recognized as a possibility. These
are naturalistic explanations that fit well within the anti-designer presuppositions of the evolutionary model. Despite the appeal of scenarios crediting natural selection, they may bear little resemblance to reality if designed mechanisms are involved in changing allele frequencies. One example in animals would be migration. Perhaps animals move to where they are most comfortable. This comfort factor may be related to having a genotype compatible with (adapted to) that environment. So essentially, animals with adaptive alleles stay, and the others leave. This is rather the reverse of natural selection (where the environment selects the animals), as it is the animal making a conscious choice.

Transmission distortion due to gene conversion, as described above in mice, may also prove to be an important mechanism for fixation of
adaptive alleles in populations. If this turns out to be the case, it is a serious problem for evolutionists. It would be another
major blow to the view that naturalistic processes adequately explain the origin of new species. Instead, designed
mechanisms would be important for both the generation of diversity and fixing adaptive alleles within a population. If
designed processes are necessary for adaptive changes even within created kinds, it points again to an awesome designer!
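To see how quickly even mild distortion can fix an allele, here is a hedged Python sketch (a deterministic toy model, not the published mouse simulation; the distortion value t = 0.55 and the starting frequency are assumed for illustration):

# Allele frequency under transmission distortion: heterozygotes pass the
# favoured allele on with probability t instead of 0.5, so each generation
# p' = p + 2p(1-p)(t - 0.5).
def generations_to_fixation(t, p0=0.01, threshold=0.99):
    p, gens = p0, 0
    while p < threshold:
        p += 2 * p * (1 - p) * (t - 0.5)   # gain from distorted heterozygotes
        gens += 1
    return gens

print(generations_to_fixation(t=0.55))     # fixes in well under 1,200 generations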
Summary
One thing is clear: the evolution-based inference that mutations (any change in the DNA sequence) are always accidents or copying errors is false. Changes in DNA sequence can arise for a number of reasons. One reason is that meiotic recombination, an essential step in reproduction for many plants and animals, is designed to induce genetic changes. This is highlighted by the fact that enzymes are necessary for this complex process, including enzymes which induce the double-stranded breaks and facilitate template switching. Since this is the case, I fully expect that a better understanding of meiotic recombination will be one piece in the puzzle of understanding how diversity has arisen so quickly within created kinds since the time of the Flood.
Teenage mutant ninja people
by Gordon Howard
Chernobyl, Three Mile Island, Fukushima: why are these names associated with fear and foreboding? Because we know the potential dangers, albeit often overstated, of radioactive materials leaking from damaged nuclear power plants. We have read about the disastrous effects they can have on people, crops and stock.

But isn't there an upside to this? Surely the believers in evolution should be jumping with glee, hoping for some new mutation that will propel the human race to a new level of evolutionary progress? After all, we know about Spiderman, the Hulk, Teenage Mutant Ninja Turtles and the X-men, all of whom fictionally benefited hugely from contact with radioactive materials. Comic strips, movies and other popular publications have entrenched this positive idea of mutants being superior to the normal in the public mind. The reality, however, is vastly different.

Darwin's theory of evolution relies on the selection of the fittest from a continually varying population; but at the time he was writing, the Austrian monk and scientist Gregor Mendel was demonstrating that there are definite limits to the variation possible. Darwin provided no mechanism for broadening these limits, but modern neo-Darwinism suggests that extra variation can come through mutations. These are alterations to the normal genetic material that often produce alterations in the offspring, and these alterations can be selected for. They can and do occur naturally, as copying mistakes during cell replication, or under the influence of chemicals or radiation. They are the hope of evolutionists.
However, if these mutations are so desirable, and are responsible for
the marvellous supposed success of accidental evolutionary invention
and progress, why do we want to keep people from living near the
Fukushima power plant amongst the leaked radioactive materials?
Won't there be lots of mutations? Can't we expect some beneficial
ones to appear and elevate the human race to a superior level?
Unfortunately not. The reality is that mutations caused by such influences as ionizing radiation are much more damaging than helpful. The DNA in a cell is like the instruction book for the cell's workings, and any random changes will be like similar changes in an instruction book. Imagine a manual for the construction of something you know about. Now imagine making random changes to the letters and symbols in those instructions. Changes to 'an' or 'the' may not be significant, but a change from mm to m (millimetres to metres), or a 9 to a 2, or a change in the order of the pages, could be very damaging. Your construction would have significant faults (such as building the roof before the foundations), and might fail to work at all.

Radiation does this to genes: it scrambles the genetic instructions, and functionality is reduced, unless the built-in (created) repair mechanisms1 are able to cope. Scrambled instructions in the reproductive cells are passed on to offspring, and the genetic errors accumulate over the generations faster than they can be eliminated by natural selection.2

When the
above fictional characters were being invented and popularized, scientists were actively working to produce mutated
marvels. Many experiments with this kind of aim were conducted through the twentieth century.3 One research project
involved the irradiation of millions of pine tree seeds. The researchers hoped that the mutations would produce super trees,
growing faster, thicker and higher, with denser timber and fewer branches. Most of the irradiated seeds failed to germinate.
Of those that grew, many died because of such problems as lack of chlorophyll, or proper vascular systems, while many that
managed to live grew along the ground, or sprouted many trunks, or fuzzy leaves, or spongy trunks or showed other signs of
disorganization. The only ones to survive any length of time were those that escaped significant mutational damage, and
grew normally.4

Obviously, mutations were occurring in the DNA, but, just as obviously, the mutations were not creating the super-trees the researchers were hoping for. Today, similar research continues, but usually with the expectation of a similarly damaged product, although the results may be useful to mankind.5 Examples of successful mutation breeding are dwarfed crop plants (short wheat plants don't fall over in stormy weather), seedlessness in fruit, and changes of colour in flowers due to eliminating specific pigments. But, again, even when mutations do something useful, in each case it involves damaging existing genetic instructions, not gain of new ones. Furthermore, a recent paper showed that even the rare beneficial mutations tend to work against each other, a phenomenon called antagonistic epistasis.6

The same happens to any living
thing, such as humans. If we receive a dose of radiation, we can expect at least some damage to the DNA in some of our
cells. If it happens in only a couple of cells, we will hardly notice, so low doses (such as occasional X-rays) are OK. As the
dose rises, the risk of damage also rises, so that one will begin to feel ill from radiation poisoning as more cells get
damaged. Higher doses may cause deformed offspring (if the damage occurs in a reproductive cell), or the shut-down of a
specific organ, or cancer (which is, after all, a cell multiplying wrongly because its control genes have been damaged). Yet
higher doses will cause irreparable damage in a large number of cells, leading to death7 (a risk the workers who volunteered to repair the Fukushima plant accepted). Commercial irradiation of foodstuffs is deliberately used to kill all undesirable organisms within the packaging, and it does a thorough job by completely disrupting cell processes.

It is fear of such damage that caused the Japanese government to provide an exclusion zone around Fukushima, and has led countries importing Japanese goods to check for any residual radiation. No informed person believes that anyone will benefit from random changes (mutations) in their genes. No one wants to be irradiated in the hope of beneficial mutations. No one wants to be an evolutionary experiment.

Our increasing knowledge of the content and activity of the genome assures us that the DNA of any creature is designed to produce exactly that creature, and any random change will damage the required set of instructions. It is even more obvious that the types of changes necessary for a Spiderman to exist are way beyond the bounds of possibility. Why then do some people believe that such changes have occurred in the past to make a bird out of a dinosaur, or a mammal out of a lizard, or a frog out of a fish? It is simply not possible. Mutations damage DNA. They don't invent new, more complex traits.

While geneticists may make improvements by deliberately transferring genes from one creature to another, this is using already-created information from the biosphere. Random changes due to radiation are not going to produce a super creature, and especially not a new kind of creature. The genetic information originally given for each kind of creature represented the full measure of genetic information for that organism, and chance mutations will never add coordinated instructions for the kinds of improvements required to make microbes-to-mankind evolution possible.
Critic ignores reality of Genetic Entropy
The author of a landmark book on genomic decay responds to unsustainable criticisms.
by Dr John Sanford
Published: 7 March 2013 (GMT+10)
I do not normally spend my time responding to bloggers, but several people have asked me to respond to Scott Buchanan's polemic1 against my book Genetic Entropy. This article is a one-time clarification, as I cannot afford the time to be drawn into the blog-o-sphere and its associated death by a thousand emails.

Scott's lengthy essay is certainly not an objective review of my book; it is an ideological attack based upon a commitment to the standard Darwinian theory. He does not acknowledge any of the legitimate concerns I raise regarding the Darwinian process, not even the many points widely acknowledged by my fellow geneticists. Shouldn't even ardent Darwinists honestly acknowledge the known problems with current Darwinian theory? I will only briefly touch on each of his arguments.
Scott gives his arguments in this order:
1. He claims that my book is all about deliberate deception, and that I am fundamentally a liar (but perhaps otherwise a nice man).
2. He spends three pages expressing how annoyed he is with the exact way in which I cite a Kimura reference, as I try to make clear the actual distribution of mutational effects.
3. He argues that, while beneficial mutations are rare, they are not as rare as I suggest. Since beneficial mutations clearly happen, and since adaptation clearly happens, he imagines the Primary Axiom (that man is merely the product of random mutations plus natural selection) must then obviously be true.
The Primary Axiom: Man is merely the product of random mutations plus natural selection.
4. Scott cites a series of flawed mutation accumulation experiments, which he thinks demonstrate extremely high rates of beneficial mutation.
5. He points out that duplications can have real biological consequences.
6. He also points out that we cannot generally see measurable degeneration in extended lab experiments. He argues that, if I were correct, then in just the last few thousand years all forms of life having short life cycles (bacteria, mice) should have gone extinct. Since we do not see obvious degeneration happening, the Primary Axiom must be true, according to him.
7. Scott suggests that, since human life spans have increased in the last several centuries, this proves that man is not degenerating.
8. He argues that Crow's conclusion that the human race is presently degenerating at 1 to 2% per generation2 does not mean Crow stopped being faithful to the Primary Axiom.
9. He also argues that synergistic epistasis3 really happens (at least to some extent), and cites those who feel this might aid in mutation elimination.
10. Finally, he casually dismisses all the papers I cite in Appendix 1 of my book, where the leaders of the population genetics field acknowledge all of the basic problems with the Primary Axiom.
It seems to me that Scott makes his arguments in the wrong order, starting with the trivial, and at the end simply waving off
the most crucial issues, so I wish to address his points in reverse order:
10. What other geneticists say:
Let me begin by going to the very end of my book (Appendix 1), where I quote key papers written by the leading experts within the field of population genetics. Scott refers to this as my 'final shotgun-blast of misrepresentation to the gullible reader'. This seems grossly unfair, since I am simply quoting the leaders in the field where they acknowledge major aspects of my thesis. In my introduction to that section, I am careful NOT to imply that those
scientists would agree with my personal viewpoint, but I point out that they all very
clearly acknowledge the major problems which I outline in my book regarding the
Primary Axiom.
These experts in the field provide strong support for all the main points of my book
Is man presently degenerating genetically? It would seem so, according to the papers by Muller, Neal, Kondrashov, Nachman/Crowell, Walker/Keightley, Crow, Lynch et al., Howell, Loewe and also myself (in press). Scott suggests this is foolishness and dismisses the Crow paper (1 to 2% fitness decline per generation). But Kondrashov, an evolutionist who is an expert on this subject, has advised me that virtually all the human geneticists he knows agree that man is degenerating genetically. The most definitive findings were published in 2010 in the Proceedings of the National Academy of Sciences by Lynch.4 That paper indicates human fitness is declining at 3 to 5% per generation. I personally feel the average mutational effect on fitness is much more subtle than Lynch does, so I think the rate of human degeneration is much slower than he suggests, but we at least agree that fitness is going down, not up. Can Scott find any qualified geneticist who asserts man is NOT now degenerating genetically? There is really no debate on current human genetic degeneration; Scott is 100% wrong here, and is simply not well informed.
Is there a theoretical problem associated with continuously growing genetic load due to subtle un-selectable
deleterious mutations? Yes, according to Muller, Kondrashov, Loewe, and many others. Population geneticists all seem to
acknowledge the fact that a large fraction of deleterious mutations are too subtle to be effectively selected away. The
question is, what is that fraction? At what point does the fitness effect of a deleterious mutation become too small to be
selected away? I have been studying this for about 7 years. Our numerical simulations indicate that for higher organisms, up
to 90% of all deleterious mutations should be un-selectable (in press). This manuscript was previously sent to Scott, but it
seems he did not read it. Can Scott explain away this theoretical problem?
What is Dr Ohta's view on genetic degeneration? Dr Tomoko Ohta was a key student of Kimura, and published extensively with Kimura. Dr Ohta came to be known as 'the Queen of Population Genetics', and is now an honorary member of the American Academy of Arts and Sciences, and an associate of the National Academy of Sciences, USA. She is the world's authority on the topic of near-neutral mutations. One of my co-authors went to Japan to spend several days discussing with her a manuscript in which we used numerical simulation to clearly demonstrate that near-neutral deleterious mutations generally escape selective removal and lead to continuous and linear accumulation of genetic damage. She acknowledged that our numerical simulations appeared to be valid, and that our conclusions appeared to be valid. This clearly reflects a profound evolutionary paradox (it is the same paradox Kondrashov addressed in his paper 'Why have we not died 100 times over?'5). When asked about synergistic epistasis, she immediately acknowledged that synergistic epistasis should make the problem worse, not better, just as I argue in my book. Using numerical simulations, we have confirmed that synergistic epistasis fails to slow mutation accumulation and accelerates genetic decline (in press). I think Dr Ohta would like me to clarify that she is a faithful Darwinist and remains committed to the Primary Axiom, and that she is in fact hostile to the thesis of my book.
The other quotes: I encourage Scott to read all the other quotes in the appendix. It is clear that the leading population
geneticists have recognized major theoretical problems with the Primary Axiom for a long time. Why try to deny this?
9. Synergistic Epistasis:
Scott spends a lot of time selling synergistic epistasis. What he may not realize is that population geneticists almost universally understand that most mutations interact either additively or multiplicatively. Suppose two mutations each reduce fitness 10%. When both mutations are present, fitness might be reduced by 20% (additive interaction), or less than 20% (multiplicative interaction), or more than 20% (synergistic epistasis). To the extent synergistic epistasis happens, it obviously will accelerate degeneration! The only reason synergistic epistasis is even invoked in a generalized sense is when trying to argue that more genetic damage might somehow induce more effective selection (we can now clearly show that even if synergistic epistasis were normally true, it makes the degeneration problem worse). Population genetic theory was built on the understanding that genetic interactions are primarily additive or multiplicative. If synergistic epistasis were generally true (it is just a rare exception to the norm), most of the papers ever published in the field of population genetics would be invalid! Yet synergistic epistasis would have to be the norm before it would even be conceivable that it might enhance selection.

Naturally, all possible types of interactions occur in a complex genome, including instances of synergistic epistasis. Synergistic epistasis would be of little concern to population geneticists except for one thing: it is their last straw in the struggle to solve the degeneration problem. Careful consideration makes it clear that even if synergistic epistasis were the primary mode of interaction (rather than being a rare exception), it only makes the degeneration problem worse. I point this out in my book, and Ohta reaches the identical conclusion (see above). We have done extensive numerical simulations that show that this is true (in press).
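The three interaction modes just described are easy to state in code; a minimal Python sketch, where the size of the extra synergy penalty (epsilon) is an assumed illustrative value:

# Two mutations, each cutting fitness by s = 10%, combined three ways.
s = 0.10
additive       = 1 - (s + s)                  # 0.80: a 20% total reduction
multiplicative = (1 - s) * (1 - s)            # 0.81: a 19% total reduction
epsilon = 0.05                                # assumed extra synergy penalty
synergistic    = (1 - s) * (1 - s) - epsilon  # 0.76: more than a 20% reduction
print(additive, multiplicative, synergistic)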
8. Crow still believed in the Primary Axiom:
Scott spends a lot of time showing that even though Crow believed mankind is degenerating at 1 to 2% per generation, he remained faithful to the Primary Axiom. That is certainly true, and I made that clear in my book.
7. Human life span has recently been increasing:
It is obviously true that human longevity has increased in recent centuries, but that is not due to evolutionary advance. It is
clearly due to improved diet, sanitation, and modern medicine. We have figured out how to keep people from dying in
infancy and extended the life expectancy for those who catch many diseases associated with middle-age. Thus,
the average has gone up. The maximum possible lifespan has not gone up. This is a simple concept.6
6. Genetic entropy is not obvious in lab experiments or in nature:
It is true that most lab experiments do not show clear degeneration. But Scott
should realize that anything alive today must have been degenerating slowly
enough to still be here, even in a young earth scenario. All three of the downward
decay curves I show in my book indicate that degeneration slows dramatically as it
becomes more advanced. If a species is alive today and has been around for
thousands of years, the rate of degeneration must be very slow (too subtle to
measure in most cases). Obviously, genetic degeneration is not going to be clearly visible in most lab experiments.

Regarding Scott's argument about viruses and bacteria, such microbes should degenerate very slowly because mutation rate per genome is low, and selection is intense and continuous. Despite this, we have just published a paper showing that RNA viruses are clearly subject to genetic entropy.7 Another reason viruses (and bacteria) can persist in spite of genetic entropy is that they can be preserved in a dormant state for thousands of years. Therefore, even if most active strains continuously died out (say after a thousand years), new strains could be continuously reseeded into the environment from natural dormant reservoirs.

Regarding mice, our numerical simulations suggest
organisms like mice should last longer than longer-lived mammals because they
have lower mutation rates per generation and much more frequent cycles of
selection. The first species to go extinct due to mutation accumulation should be
large, long-lived organisms.8 Even if species are not actually degenerating, the
question of what sustains them would remain. Careful analysis shows
mutations/selection could not be that sustaining force.
5. Duplications have biological effects:
This is obviously true, but how is it relevant? Like the accidental duplications that happen in emails and student essays,
duplications are almost universally deleterious. Very rarely, some are beneficial. A few rare beneficial duplications cannot
offset the many accumulating deleterious duplications, let alone all the other accumulating mutations.9
4. Mutation accumulation experiments suggest extremely high rates of beneficial mutations:
Mutation accumulation experiments are a very poor way to understand deleterious mutation accumulation. Such experiments do not study actual mutations; they only study the performance of strains (the supposed mutations are only inferred). In the papers of this type I have examined, zero mutations are actually documented. All that is observed is
differential performance of strains. Non-genetic causes, including epigenetic effects or gain/loss of viruses in some bacterial
culture, etc., cannot be precluded. More to the point, since the overwhelming majority of mutations are very subtle and do
not express a clear phenotype, almost all mutations will be invisible in these experiments, which only monitor gross
differences in performance. Only high-impact mutations can be observed in such experiments, and these represent a biased
sampling of the actual mutational spectrum. Furthermore, high-impact deleterious mutations will still always be selected
away in such experiments, no matter how hard the experimenter tries to preclude natural selection. Therefore there will be a
strong tendency to preferentially observe only high-impact beneficials. Since the crux of the genetic entropy argument
involves the low-impact deleterious mutations (which will always be invisible in such experiments), these types of
experiments have no relevance to this discussion. A final point: in these experiments, fitness is always narrowly defined (i.e.,
ability to grow on a given medium). For simple, one-dimensional traits like this, any genetic change affecting that trait has a
reasonable chance of being beneficial (in a one-dimensional system, any change can only be either up or down, as opposed
to improving a real-world complex network of traits where fitness is enormously multi-dimensional).
3. Just how rare are beneficial mutations?
Scott speaks as if I do not acknowledge there are beneficial mutations. I acknowledge them very openly in the book, but I
also insist that beneficials must be very rare compared to deleterious mutations (as do nearly all geneticists). The critical
question is how rare?
Genomes are the genetic specifications that allow life to exist.
Specifications are obviously inherently SPECIFIC. This means
that random changes in specifications will disrupt information with
a very high degree of certainty. This has become especially clear
ever since the publication of the ENCODE results, which show
that very little of our genome is actually junk DNA.10 The
ENCODE project also shows that most nucleotides play a role in
multiple overlapping codes, making any beneficial mutations
which are not deleterious at some level vanishingly rare (in
preparation). Our own numerical simulations (in press) show that unless beneficial mutations are extremely common, they are not sufficient to compensate for accumulating deleterious mutations. The bottom line is that selection removes only the worst deleterious mutations and amplifies only the best beneficial mutations. This means that the accumulating damage is largely invisible (like rust on a car), while adaptations tend to be highly visible (e.g., antibiotic resistance). This means that even if Scott presents us with 1000 examples of adaptation via beneficial point mutation, he has still failed to address the key issue: net gain versus net loss. Adaptation explains fine-tuning to an environment; it does not explain the astounding internal workings of life. It does not begin to explain the mystery of the genome.
Where are the beneficial mutations in man?
It is very well documented that there are thousands of deleterious Mendelian mutations accumulating in the human gene
pool, even though there is strong selection against such mutations. Yet such easily recognized deleterious mutations are just
the tip of the iceberg. The vast majority of deleterious mutations will not display any clear phenotype at all. There is a very
high rate of visible birth defects, all of which appear deleterious. Again, this is just the tip of the iceberg. Why are no
beneficial birth anomalies being seen? This is not just a matter of identifying positive changes. If there are so many
beneficial mutations happening in the human population, selection should very effectively amplify them. They should be
popping up virtually everywhere. They should be much more common than genetic pathologies. Where are they? European
adult lactose tolerance appears to be due to a broken lactase promoter [see Can't drink milk? You're normal! Ed.]. African resistance to malaria is due to a broken hemoglobin protein [see Sickle-cell disease. Also, immunity of an estimated 20% of western Europeans to HIV infection is due to a broken chemokine receptor; see CCR5-delta32: a very beneficial mutation. Ed.]. Beneficials happen, but generally they are loss-of-function mutations, and even then they are very rare!
Scott makes a big deal about Lenski's long-term bacterial experiments, but these actually support my thesis. Although a very trivial adaptation happened (optimal growth on a given medium), his bacteria shrank in genome size (the functional genome decreased). Evidently the more rapid growth was largely accomplished through genetic degeneration. Many useful genes, not essential in that artificial environment, were apparently lost. When transferred to a natural environment, those highly degenerated bacteria would essentially be dead-on-arrival.
Scott seems to think that as long as beneficials happen (regardless of how rare they are) the Primary Axiom must be true. Likewise he thinks that, as long as he can show selective adaptations happen (no matter how trivial), this proves the Primary Axiom. I do not think he grasps just how difficult it is to build a genome apart from design. Nor does he seem to understand that a population can be undergoing genetic decline due to vast numbers of slightly deleterious mutations even while selection may be amplifying a handful of beneficial mutations. He seems to fail to realize that a species can undergo minor adaptive fine-tuning to its environment even while degenerating in many other ways. So let me try to make it even simpler. Picture a ten-year-old car. It is degenerating in all possible ways. Install new windshield wipers. Has the car stopped degenerating? There has certainly been an improvement, but not the type of improvement that can reverse the ubiquitous and systematic degeneration.
In collaboration with other scientists, we have advanced the field of population genetics by developing the state of the art in terms of numerical simulation of the mutation/selection process.11 Using biologically realistic parameters, the program Mendel's Accountant consistently shows genetic decline even given very generous rates of beneficial mutation. This strongly validates my book. Mendel's Accountant cannot tell us the true history of life, but what it can do is tell us what selection can and cannot realistically do in the present.
(from Genetic Entropy, pages 31 and 32)
2. Kimura's Figure:
Scott makes a huge deal about my reference to a figure in Kimura's work. He misrepresents me by arguing I misrepresented Kimura (I did not claim Kimura agrees with me). But this is a rabbit trail; the argument is not about Kimura. The crucial issue is about defining the correct distribution of mutation effects. For deleterious mutations, Kimura and most other population geneticists agree the distribution is essentially exponential. Figure 3c in my book (based upon Kimura) shows an exponential-type distribution of deleterious mutations, with most deleterious mutations being nearly neutral and hence un-selectable (effectively neutral). But, as I point out, Kimura's picture is not complete, because degeneration is all about the ratio of good to bad mutations. Kimura does not show the beneficial distribution, which is essential to the question of net gain versus net loss! When I show the beneficial distribution (while Kimura did not do this, I suspect he would have drawn it much as I did), anyone can see the problem: the vast majority of beneficial mutations will be un-selectable (Figure 3d). Scott does not appear to contest my representation of the mutational effect distribution, which is the main issue here. Scott should easily be able to see that most mutations fall within the no-selection zone and that almost all of them are deleterious. So even with strong selection, this entire zone can only undergo degeneration. Outside this zone, the substantially bad mutations will be selected away, and an occasional rare high-impact beneficial will be amplified (which can explain isolated events such as antibiotic resistance).
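To make the "no-selection zone" idea concrete, here is a minimal numerical sketch. This is not Kimura's model or the book's figures; the distribution shape, the beneficial fraction, and the selection threshold are all illustrative assumptions. It simply shows how an exponential-type effect distribution concentrates most mutations below any fixed selection threshold:

```python
import random

# Toy sketch of the distribution argument above (not Kimura's model or the
# book's figures). Every parameter value is an illustrative assumption.
random.seed(1)

N_MUT = 100_000               # mutations to sample
FRAC_BENEFICIAL = 0.001       # assumed fraction of mutations that are beneficial
MEAN_EFFECT = 0.001           # mean |fitness effect| (exponential-type distribution)
SELECTION_THRESHOLD = 0.0005  # |s| below this is treated as invisible to selection

counts = {"invisible bad": 0, "invisible good": 0,
          "selectable bad": 0, "selectable good": 0}
for _ in range(N_MUT):
    s = random.expovariate(1 / MEAN_EFFECT)                 # effect magnitude
    kind = "good" if random.random() < FRAC_BENEFICIAL else "bad"
    zone = "invisible" if s < SELECTION_THRESHOLD else "selectable"
    counts[f"{zone} {kind}"] += 1

for label, n in counts.items():
    print(f"{label:15s}: {n}")
```

With these made-up parameters, roughly 40% of the sampled mutations fall inside the no-selection zone, and almost all of them are deleterious; changing the assumed threshold or mean effect changes the numbers but not the qualitative pattern being argued for.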
1. Sanford is a liar:
Scott repeatedly asserts that my book is all about deliberate deception, and I am fundamentally a liar. He bases this upon two things: a) there were a few references he thinks highly relevant, which I failed to cite and which he says proves I have withheld and suppressed evidence; b) he argues I must surely know that beneficial mutations happen, that natural selection happens, and also that long-term lab experiments do not show rapid degeneration. Therefore, I must be dishonestly pretending to be ignorant of these things in order to deceive the ignorant. He has not considered these possibilities: a) given the mountain of relevant literature, I might legitimately miss a few papers; b) I do not share his view on which papers are significant. He cites a great many papers which only speak of the obvious: beneficials do happen, selection does happen, adaptation does happen. Any high school student knows these things. My argument only begins AFTER acknowledging these obvious things.
Scott and I corresponded briefly before his posting, and I tried to explain to him why his criticisms were not correct. I did not find him to be a very good listener as I tried to explain how he was misrepresenting me. I then sent him a series of preprints (in press), which extensively and conclusively addressed all his objections. Upon reading his essay now, I can see he did not bother reading those preprints, which are very rigorously written scientific research papers. I also see from his current arguments that he really did not give my book a fair read. If Scott has misrepresented both the book and myself, then which of us is lacking in integrity?
This book cost me a great deal. I basically laid down my reputation and my career in order to say what I believe to be the truth. I believe the real deception is clearly the Primary Axiom. I am still convinced I can persuade any impartial person that the Primary Axiom is indefensible (if they will listen). So why would I lie? I am a sincere orthodox Christian, I believe God will judge me in a very literal sense, and I consider lying a very serious sin. I am distinguished in my field and I greatly value my integrity as an honest scientist. That is why I have been willing to defend what I believe to be true, even knowing that attacking this sacred cow (the Primary Axiom) would bring slander and scorn. Why would I write a book that would ruin a very good scientific reputation knowing it would make me a liar before God?
In our personal correspondence, Scott closed our conversation saying he intended to present me as being intentionally deceitful. My last word to him was that while I might be technically in error on certain points, my book reflects what I really believe to be true. Any technical errors in my book show that I am human, but there certainly was no deliberate deception in my book. In terms of the scientific issues, I would ask Scott to append this response to his blog attack. I still welcome any fair-minded and balanced analysis of the scientific merits of my book and my subsequent studies.
Genetic entropy and simple organisms
If genetic entropy is true, why do bacteria still exist?
by Robert Carter
Published: 25 October 2012 (GMT+10)
Summary
Genetic entropy (GE) is eroding the genomes of all living organisms because mutations
are inherited from one generation to the next. Many people wonder why, if GE is real, bacteria are still alive today. There are multiple reasons for this, including the fact that their
genomes are simpler, they have high population sizes and short generation times, and
they have lower overall mutation rates. This combination makes them the most resistant to
extinction. Of all the forms of life on Earth, bacteria are the best candidates for surviving
the effects of GE over the long term. This does not mean they can do so forever, but it
explains why they are still around today.
What is genetic entropy?
After the landmark publication of Genetic Entropy and the Mystery of the Genome by
Cornell University Professor Dr John Sanford, we have often been asked to supply further
details of this major challenge to evolutionary theory. The central part of Sanford's argument is that mutations (spelling mistakes in DNA) are accumulating so quickly in some creatures (particularly people) that natural selection cannot stop the functional degradation of the genome, let alone drive an evolutionary process that can turn apes into people.
A
simple analogy would be rust slowly spreading throughout a car over time. Each little bit of
rust (akin to a single mutation in an organism) is almost inconsequential on its own, but if
the rusting process cannot be stopped it will eventually destroy the car. A more accurate analogy would be to imagine a copy
of Encyclopedia Britannica on a computer that has a virus that
randomly swaps, switches, deletes, and inverts letters over time.
For a while there would be almost no noticeable effect, but over
time the text would contain more and more errors, until it became
meaningless gibberish. In biological terms, mutational meltdown
would have occurred.
When living things reproduce, they make a copy of their DNA and
pass this to their progeny. From time to time, mistakes occur, and
the next generation does not have a perfect copy of the original
DNA. These copying errors are known as mutations. Most people
think that natural selection can dispose of harmful mutations by
eliminating individuals that carry them. But natural selection
properly defined simply means differential reproduction,
meaning some organisms leave more progeny than others based
on the mutations they carry and the environment in which they
live. Moreover, reproductive success is only affected by
mutations that have a significant effect. Unless mutations cause a noticeable reduction in reproductive rates, the organisms
that carry them will be just as successful in leaving offspring as all the others. In other words, if the mutations aren't bad enough, selection can't see them, cannot eliminate them, and the mutations will accumulate. The result is genetic entropy.
Each new generation carries all the mutations of previous generations plus their own. Over time, all these very slightly
harmful mutations build up to a point that, in combination, they start to have serious effects on reproductive fitness. The
downward spiral becomes unstoppable, because every member of the population has the same problem: natural selection can't choose between fit and less fit individuals if every member of the population is, more or less, equally mutated. The population descends into sickness and finally becomes extinct. There's simply no way to stop it.
Dr Sanford argues that
humans could not possibly have been around for tens of thousands of years (let alone millions, or billions if one considers
our supposed evolutionary animal ancestors) because, at the current rate of mutation and the number of generations that
would have occurred, we should have already become extinct.
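The accumulation argument can be sketched in a few lines of code. The following toy simulation is our own illustration, not Mendel's Accountant; the population size, mutation count, and per-mutation effect are all made-up, human-like parameters. It shows how mutation counts keep climbing even when parents are chosen by fitness, because each individual mutation is too small for selection to see:

```python
import random

# A deliberately simple sketch of the accumulation argument above
# (an illustration only, not Mendel's Accountant). All parameters are
# made-up, human-like values for demonstration purposes.
random.seed(0)

POP = 200          # population size
NEW_MUTS = 100     # new mutations per individual per generation
EFFECT = 0.0001    # assumed average fitness loss per mutation
GENERATIONS = 50

mut_counts = [0] * POP
for gen in range(1, GENERATIONS + 1):
    # Differential reproduction: parents are drawn in proportion to fitness,
    # then each offspring adds its own new mutations on top.
    fitness = [max(1.0 - EFFECT * m, 0.0) for m in mut_counts]
    mut_counts = [random.choices(mut_counts, weights=fitness)[0] + NEW_MUTS
                  for _ in range(POP)]
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean mutations per individual = "
              f"{sum(mut_counts) / POP:.0f}")
```

Because each mutation reduces fitness by only 0.0001 under these assumptions, differential reproduction barely distinguishes individuals, and the mean count grows by roughly 100 per generation regardless of selection.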
Genetic entropy in bacteria
From time to time, we are asked by honest people seeking a better understanding, as well as hostile people trying to challenge us, to explain why, if genetic entropy (GE) is true, bacteria still exist. After all, bacteria have extremely short generation times. Some bacteria can reproduce every 20 minutes, so they would be gaining far more mutations in a day than humans would in a hundred years. And bacteria are much simpler organisms, so it should take less time to break down their genetic instruction set compared to humans. Why, then, did they not go extinct long ago?
There are several ways to answer this. First, the idea of GE was developed by population geneticists working on higher genomes (i.e. genomes of the more complex organisms with longer generation times). The big puzzle is why species like humans have not gone extinct if we have been around for tens of thousands of years as evolutionists maintain.1 In a complex organism, a high mutation rate
combined with a low reproduction rate makes it very difficult for natural selection to remove deleterious mutations from the
population. Thus, higher mammals like people and elephants are not good candidates for long-term survival because
mutations accumulate from one generation to the next. For eukaryotic organisms (everything more complex than bacteria),
the complexity of the genome makes the mutation target quite largein these more-complicated systems, there are more
things that can go wrong, i.e. more machinery that can be broken.2
On the other hand, changes to simpler genomes will often have a more profound effect. Changing one letter out of the three billion letters in the human genome is not likely to create a radical difference. But the genome of the bacterium E. coli, for example, is about 1,000 times smaller than that of humans; bacteria are more specialized and perform fewer functions. Any letter change is more likely to do something that natural selection can see. That is, it is more likely that a small change will produce a large enough effect that it will make a difference in the number of individuals carrying that trait generations later.
It's important to note that there are multiple things going on at once. We have to consider a combination of factors in order to understand why bacteria are still with us today. Let's use an illustration. Bacteria are like bicycles. People are like sports cars. One can make a number of modifications to both without breaking them, but there are fewer parts in a bicycle, so any given modification is more likely to produce a non-working bicycle. They need two wheels, a handlebar, a frame, a chain, and at least two gear sprockets. There is very little you can remove from them or break before they can't be used. Cars, on the other hand, don't need a roof, windshield, or headlights. There are a lot more modifications you can make to a car and still drive it around. You may not get to work on time, because it does not operate at full potential, but the car can still be driven.
But why, if mutation is more likely to kill or harm a bacterial cell, do bacteria still exist?
First, bacteria do suffer from GE. In fact, and perhaps counterintuitively, this is what allows them to specialize quickly.3 Many have become resistant to antibiotics4 and at least one has managed to pick up the ability to digest non-natural, man-made nylon.5 This is only possible with much genetic experimentation, mostly through mutation, but sometimes through the wholesale swapping of working genes from one species to another. Many mutations plus many generations give lots of time for lots of genetic experiments. In fact, we have many examples, including those just mentioned, where breaking a perfectly good working system allows a new trait to develop.6 Recently, it was discovered that oceanic bacteria tend to lose genes for vital functions as long as other species of bacteria are living in the area. Here we have an example of multiple species losing working genes but surviving because they are supported by the metabolic excretions of other species.7 Since the changes are one-way and downhill, this is another form of GE.
Lower mutation rates
Another reason why bacteria still exist is that they have a lower overall mutation rate. The mutation rate in E. coli has been
estimated to be about 1 in 10^10, or one mutation for every 10 billion letters copied.8 Compare this to the size of the E. coli genome (about 4.2 million letters) and you can see that mutation is rare per cell. Now compare this statistic to the estimated rate of mutation per newborn human baby (about 100 new mutations per child2) and one can begin to see the
problem. Thus, there are nearly always non-mutated bacteria around, enabling the species to survive. However, there are
also always mutated bacteria present, so the species are able to explore new ecological niches (although most known
examples have arisen at the expense of long-term survival).
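The arithmetic behind this comparison is easy to check. Here is a quick sketch using the approximate figures quoted above (the rates and genome size are the rough values given in the text; the calculation itself is simple multiplication):

```python
# Back-of-envelope arithmetic for the figures quoted above
# (approximate values taken from the text).
ecoli_rate_per_letter = 1e-10   # ~1 mutation per 10^10 letters copied
ecoli_genome_letters = 4.2e6    # ~4.2 million letters

muts_per_division = ecoli_rate_per_letter * ecoli_genome_letters
print(f"E. coli: ~{muts_per_division:.5f} new mutations per cell division")
print(f"  (about 1 mutated cell per {1 / muts_per_division:,.0f} divisions)")

human_muts_per_generation = 100  # ~100 new mutations per newborn child
ratio = human_muts_per_generation / muts_per_division
print(f"Human: ~{human_muts_per_generation} new mutations per generation")
print(f"  (roughly {ratio:,.0f} times more per generation than E. coli)")
```

On these figures only about one E. coli cell division in roughly 2,400 produces any mutation at all, which is why non-mutated cells are always present, whereas every human baby carries on the order of 100 new mutations.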
Incredible growth potential
Bacteria have an amazing growth rate. The entire world population of a species like E. coli turns over very fast (perhaps
once per hour). Trillions upon trillions of these cells die for many different reasons each and every hour. Thus, this may be a
system where natural selection can actually halt the inevitable decay. Why? Because any mutation that confers even a small
disadvantage (and most do) can be removed through differential reproduction, given enough time. (Time in this case is measured in generations.)
Bacteria can replace themselves after a population crash in a very short period of time. This is a
key reason they do not suffer extinction. Thus, when exposed to antibiotics, for example, the few resistant cells within the
population can grow into a large replacement population in short order, even though 99.99% of the original bacteria may
have died. If the antibiotic is removed, the population can turn over again, with the non-resistant ones replacing the resistant
ones (because antibiotic resistance is usually associated with impaired growth, so the originals grow faster and would
dominate the population in a few generations). Humans cannot do this. It would take thousands of years to replace the
current population of 7 billion people, and the inbreeding that would occur when the few survivors were forced to marry close
relations might drive us to extinction anyway.9
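How short is "short order"? A minimal sketch of the recovery arithmetic follows; the 99.99% kill rate comes from the text, while the doubling time is an illustrative assumption, since real division rates vary enormously with conditions:

```python
import math

# Sketch of the recovery arithmetic described above. The 99.99% kill rate
# comes from the text; the doubling time is an illustrative assumption.
survivor_fraction = 0.0001            # 99.99% of the population killed
doublings = math.log2(1 / survivor_fraction)
doubling_time_min = 30                # assumed effective doubling time

print(f"doublings needed to restore the population: {doublings:.1f}")
print(f"time required: about {doublings * doubling_time_min / 60:.1f} hours")
```

Under these assumptions a population that lost 99.99% of its cells is back to full size after about 13 doublings, i.e. well under a day; a human population, by contrast, doubles over decades.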
Bacteria vastly outnumber people
Population size is another consideration. There are many more bacteria than people. But since bacterial population sizes
are relatively constant, there isn't room for more, and competition is extreme. Most lineages die out in the long run. In large
populations, with lots of competition, mutations can be purged more efficiently through differential reproduction. Any cell with
a slight advantage over another is more likely, over generations, to persist.
Environmental sources
It is quite feasible that many bacterial species undergo significant periods of dormancy. Bacteria coming out of dormancy
would serve as a continual source of older, less mutated versions and would help to prevent GE over the long term.
Mutations can't hide in prokaryotic genomes
Eukaryotes, such as humans, inherit two copies of each chromosome, one from each parent.10 Thus, any mutation on one
human chromosome is often masked by the good copy on the other chromosome. This interferes with differential
reproduction based on mutational differences (e.g. natural selection) and increases the mutation burden of our species.
This is not true for bacteria, which reproduce asexually and inherit their DNA from only one parent.
What about other fast-reproducing organisms?
One might reply, "But mice have genomes about the size of the human genome and have much shorter generation times. Why do we not see evidence of GE in them?" Actually, we do. The common house mouse, Mus musculus, has much more
genetic diversity than people do, including a huge range of chromosomal differences from one sub-population to the next.
They are certainly experiencing GE. On the other hand, they seem to have a lower per-generation mutation rate. Couple
that with a much shorter generation time and a much greater population size, and, like bacteria, there is ample opportunity
to remove bad mutations from the population. Long-lived species with low population growth rates (e.g. humans) are the
most threatened, but the others are not immune.
Conclusions

There are attempted evolutionary counter-arguments to the basic GE hypothesis. They are weak, but it is not the purpose of this article to give a comprehensive defense of the theory. It is sufficient to say, however, that bacteria, of all the life
forms on Earth, are the best candidates for surviving the effects of GE over the long term. Their simpler genomes, high
population sizes, short generation times, and lower overall mutation rates combine to make them the most resistant to
extinction.
The diminishing returns of beneficial mutations
[Image: low-temperature electron micrograph of a cluster of E. coli bacteria; each individual bacterium is oblong shaped.]
by Shaun Doyle
Published: 7 July 2011(GMT+10)
Subsequently published in Journal of Creation 25(3)
Beneficial mutations are often seen as the engine of evolution (Mutations: evolution's engine becomes evolution's end!). However, beneficial mutations by themselves don't solve the problem (see Beetle Bloopers). Mutations not only have to be beneficial, but they have to add biological information, i.e. specified complexity. However, practically all beneficial mutations observed have been losses of specified complexity (The evolution train's a-comin'), with only a few disputable examples of mutations increasing information ever found (e.g. bacteria that digest nylon, citrate or xylitol).
Epistasis: how do mutations interact?
However, mutations need to be more than beneficial and information-increasing to produce new coordinated structures and
systems, as microbes-to-man evolution requires. Mutations don't act alone; the effect of a mutation on an organism's phenotype depends on other genes, and mutations in those genes, in the genome. This is called epistasis; it is an important consideration for evolution because how mutations interact will determine if they could possibly build new structures in a stepwise manner.
For microbes-to-man evolution to occur, mutations need to be not just (specified) information-increasing and beneficial; they also need to work together. This also has to be the dominant trend in adaptive evolution so that the mutations can together produce new biological structures and systems. This phenomenon is called synergistic epistasis (SE), where the combined effect of mutations is greater together than the sum of their individual effects. This is obviously a good situation for beneficial mutations, but very bad for harmful mutations. In harmful mutations, SE can result in synthetic lethality,1 where the combined effects of several harmful mutations are compounded by each other's presence, resulting in such a bad effect that it kills the organism.2 So evolution needs SE to be common only in beneficial mutations; it works against evolution when it occurs in harmful mutations.
Antagonistic epistasis (AE) is the opposite of SE. It occurs when mutations have a negative influence on each other, such that their combined effect is less than the sum of the effects of the individual mutations. For harmful mutations, this is a good thing because it mutes the effect of individual mutations and stalls error catastrophe.3 This is obviously no help for evolution in the long run, since they are still harmful mutations. However, AE presents problems for evolution if it occurs in beneficial mutations. The benefits of individual mutations are muted by other beneficial mutations, resulting in a decreasing rate of fitness increase with every beneficial mutation added.
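These definitions are easy to express numerically. In this toy sketch, the effect sizes and interaction terms are invented for illustration (they are not taken from either study discussed below); two beneficial mutations are combined with no epistasis, with SE, and with AE:

```python
# Toy numbers illustrating the epistasis definitions above. The effect
# sizes and interaction terms are invented, not taken from either study.
s1, s2 = 0.05, 0.04    # fitness benefits of two mutations measured alone

no_epistasis = s1 + s2            # effects simply add
synergistic  = s1 + s2 + 0.02     # SE: combined effect exceeds the sum
antagonistic = s1 + s2 - 0.03     # AE: combined effect falls short of the sum

print(f"no epistasis: combined benefit = {no_epistasis:.3f}")
print(f"synergistic : combined benefit = {synergistic:.3f} (> {s1 + s2:.3f})")
print(f"antagonistic: combined benefit = {antagonistic:.3f} (< {s1 + s2:.3f})")
```

The studies discussed below found that real beneficial mutations in bacteria combined mostly in the third, antagonistic pattern.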
How not to work together
Two recent studies investigated the effects that beneficial mutations have on each other and came up with basically the
same results. One study looked at the combined effect on fitness from some of the earliest beneficial mutations to occur in Richard Lenski's Long Term Evolution Experiment on 12 Escherichia coli populations.4 This is the same experiment that produced an E. coli population with the ability to utilize citrate under aerobic conditions, whereas it couldn't before. This was widely hailed as an example of evolution, but see our response, Bacteria evolving in the lab?. Another study, published in the same issue of Science, looked at the effect beneficial mutations have on each other in an engineered strain of Methylobacterium extorquens.5 Both studies found that beneficial mutations interacted under an overall trend of antagonistic epistasis. Khan et al., in comparing their study with Chou et al., pointed out the results of both studies were virtually identical: "Note that similar trends were seen by Chou et al. ... That study, like ours, found that four mutations interacted to yield diminishing fitness returns, whereas one mutation had the opposite effect."6 Therefore, the cumulative effect of the beneficial mutations together was smaller than it would be if the mutations were considered independently, i.e. they display an overall trend of antagonistic epistasis. Some individual mutations displayed synergistic epistasis, but they were a minority, and were not enough to reverse the overall trend.
Khan et al. explain this as a result of environmental adaptation:
"Mechanisms that may explain this deceleration include reductions in the number and effect-size of beneficial mutations as a population becomes better adapted to its environment ... In other words, epistasis acts as a drag that reduces the contribution of later beneficial mutations."
But is this it? No doubt this is a fair assessment of these results as far as they go. These experiments were done in strictly controlled environmental conditions, so the range of questions that can be answered is limited. However, these results didn't take into account environmental flexibility and change. Khan et al. observed examples of previous mutations that stymied the adaptive capabilities of some lines relative to others in the population.7 But the obvious question is this: what effect does antagonistic epistasis have when the environment changes? Are the organisms as robust to mutation and as adaptable as their ancestors?
Humanity's own long-term experiments in artificial selection would probably provide the clearest answer, and it would be no. For example, dogs have been artificially selected for all sorts of traits for centuries, and the typical experience is that purebred dogs are weaker, have more congenital problems, and live shorter lives than mongrel dogs (A Parade of Mutants: Pedigree Dogs and Artificial Selection).
What is a beneficial mutation?
Both studies stated they were studying beneficial mutations. But what do they mean by 'beneficial'? Are these mutations universally beneficial, or only within a certain environmental context? These may seem like trite questions, but they become immensely important when we consider the context of these studies. As I stated above, these are laboratory studies conducted in strictly controlled environments, so the mutations observed are only known to be beneficial within a strict environmental context.
Moreover, Chou et al. conducted their experiments on an engineered bacterial strain that, even without mutations, grew three times slower than the wild type in the same environment.8 In the engineered strain Chou et al. eliminated an essential metabolic pathway and replaced it with another from a different species. All the beneficial mutations in the engineered strain were merely compensating for the loss of the native metabolic pathway. The same mutations in the wild type would most likely be harmful.
Finally, a beneficial mutation is not necessarily a mutation that increases specified complexity. Something is beneficial if it confers an advantage, not necessarily if it adds information. This points to an important issue: mutations not only have to add information to support evolution, but they also have to be selectable. Since mutations (apart from a few trivial examples) are universally losses of specified complexity, and natural selection is incredibly slow and weak, beneficial mutations are ultimately no help to evolution.
Genetic entropy and the mystery of epistasis
These studies reflect a universally consistent trend in lab experiments on adaptation:
"The most consistent finding across studies of laboratory-evolved populations has been a rapid deceleration of the rate of fitness increase."
The two scientific reports discussed above are in line with those consistent results, and serve as further confirmation of Dr John Sanford's landmark book, Genetic Entropy and the Mystery of the Genome.9 Dr Sanford pointed out
that the genome is in a state of inexorable decay because of mutations. If beneficial mutations generally get in the way of
each other, their combined effects cannot stop this process of decay in the genome. Evolution thus has three strikes against
it: most mutations are not beneficial, practically all mutations destroy specified complexity, and, now, even beneficial
mutations work against each other. While mutations may be of limited benefit to a single organism in a limited context (e.g.,
sickle cell anemia can protect against malaria even though the sickle cell trait is harmful), mutations seem to be no benefit
whatsoever for microbes-to-man evolution, whether individually or together.
Pesticide resistance is not evidence of evolution
by David Catchpoole
Published: 20 August 2009(GMT+10)
Since aerial application of pesticides (crop dusting) first began in
the 1920s, there have been tremendous improvements in
knowledge, technology and safety. However, irrespective of the
application method used, farmers must face the phenomenon of
pesticide resistance.
A favourite icon of evolutionists, i.e. oft-cited by them as evidence of evolution, is the phenomenon of pesticide resistance. On the evolution-proclaiming PBS1 website, for example, the diminished efficacy of rodent poisons and insecticides is because we have simply caused pest populations to evolve.2 And no doubt wanting to prove that evolutionary theory has practical relevance, the PBS Evolution Library paints a grim picture of how this evolution is making life harder for us:
"It has the menacing sound of an Alfred Hitchcock movie: Millions of rats aren't even getting sick from pesticide doses that once killed them. In one county in England, these 'super rats' have built up such resistance to certain toxins that they can consume five times as much poison as rats in other counties before dying. From insect larvae that keep munching on pesticide-laden cotton in the US to head lice that won't wash out of children's hair, pests are slowly developing genetic shields that enable them to survive whatever poisons humans give them."2 (Emphasis added.)
Having now got the reader's attention, and warning that the problem is getting worse, the PBS article comfortingly (?) says, "but the pests are only following the rules of evolution."
However, looking past the evolutionary assertions, the PBS article makes it clear that pesticide resistance is not evidence of evolution at all:
"Every time chemicals are sprayed on a lawn to kill weeds or ants for example, a few naturally resistant members of the targeted population survive and create a new generation of pests that are poison-resistant."2 (Emphasis added.)
And again:
Individuals with a higher tolerance for our poisons survive and breed, and soon resistant individuals outnumber the ones we
can control.2
So the mechanisms that allow pests to tolerate pesticides are already present in "a few naturally resistant members of the targeted population", which survive to reproduce themselves, thus passing the genes conferring pesticide resistance to the next generation.
Thus pests are not "slowly developing genetic shields", because the genetic shield already exists, i.e. it has not evolved out of thin air. What is happening is that the genetic shield becomes more widespread in the population, as an astute reader will discern from the PBS article's subsequent explanation of what happens when farmers, noting lowered kill rates, increase the dosage:
"Farmers spray higher doses of pesticide if the traditional dose doesn't kill, so genetic mechanisms that enable the pests to survive the stronger doses rapidly become widespread as the offspring of resistant individuals come to dominate the population."2 (Added emphasis.)
And the spread of the genetic mechanisms conferring resistance can be very rapid indeed, coming to dominate the population in just a few generations, in fact.
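The speed of that spread follows directly from textbook selection arithmetic. Here is a minimal one-locus sketch (a deliberately simplified haploid model; the starting frequency and fitness values are illustrative assumptions, not measurements from any study cited here):

```python
# A minimal one-locus, haploid selection sketch of the spread described
# above. Starting frequency and fitness values are illustrative assumptions.
def spread(p, w_resistant, w_susceptible, generations):
    for gen in range(1, generations + 1):
        mean_w = p * w_resistant + (1 - p) * w_susceptible
        p = p * w_resistant / mean_w   # differential reproduction
        print(f"generation {gen}: resistant frequency = {p:.3f}")

# Pesticide present: resistant individuals strongly favoured.
spread(p=0.01, w_resistant=1.0, w_susceptible=0.3, generations=8)
```

Under this strong assumed selection, a trait carried by 1% of the population comes to dominate it within about eight generations. Swapping the two fitness values, to represent the pesticide being withdrawn while the resistance carries a fitness cost, makes the frequency fall again, matching the decline described later in this article.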
Rapid resistance in nematodes
For example, when researchers exposed the nematode Caenorhabditis elegans to the widely used nematicide levamisole,
they reported that resistance to that pesticide accumulated within very few generations.3
The researchers explained that this rapid adaptation was likely due to the standing genetic variation of the nematode
population, i.e. that the genes conferring resistance were already present in the population, but at low frequency. Exposing
the nematodes to levamisole selected for the resistant individuals, providing a direct demonstration of the speed of this
process. (Emphasis added.) There are numerous other examples of rapid adaptation in the scientific literature.4
Evolutionists are often needlessly surprised at the speed with which a population can adapt to a change in environment, because they are so used to thinking of such changes as being evolution, with evolution being inextricably associated with slow-and-gradual-over-millions-of-years processes (see Speedy Species Surprise). The changes are rapid all right, but they are not evolutionary; that is, relevant to the core claim of evolution that primordial microbes changed into mankind and all other living things.
But even the nematode researchers were victims of their evolutionary mindset. Despite not having observed any evolution whatsoever (i.e. the sorts of changes that supposedly resulted in pond scum becoming pesticide scientists), they nevertheless peppered their scientific paper with claims it was rapid evolution they had witnessed. "Our results demonstrate that pesticide resistance can evolve at an extremely rapid pace," they wrote. Their results demonstrated no such thing. Rapid rise in resistance to pesticides? Yes. But 'evolve'? No, as individuals with the genetic shield conferring nematicide resistance were apparently already in the population.
The price of resistance

Mechanisms of pesticide resistance can come at a cost, research has shown. Referred to as a 'fitness cost', resistance genes are said to alter some components of the basic physiology and interfere with fitness-related life history traits.5 A famous example is that of warfarin resistance in rats, first detected in the late 1950s.6 Rats resistant to that poison have a higher requirement for vitamin K than normal rats (more than 10 times!). When vitamin K is inadequate, warfarin-resistant rats suffer from blood clotting disorders; in fact, many will die from internal bleeding. Consequently, resistant individuals have a
lower fitness under most field conditions, hence the proportion of rats having warfarin resistance in Britain was seen to
decline when rat populations were no longer exposed to the rodenticide.
Warfarin is an anti-coagulant (stops blood clotting) drug, used
both in the treatment of human thromboses (unwanted blood
clots) and as a poison for rats and mice. Obviously the amounts
administered to people are carefully controlled, whereas the aim
in giving it to rats is to kill them. It works by interfering with the
normal blood-clotting mechanism, such that the normal rapid
repair of small blood vessel leakages does not occur. The rat
then dies from internal bleeding. Warfarin was first used in Britain
in 1953, and was at first extremely effective at killing rodents. But
colonies of resistant rats were first noticed in 1959 in Welshpool,
then in the United States and continental Europe.
So, the genetic makeup conferring warfarin resistance in rats is associated with increased survival when the pesticide is present, but decreased survival when the pesticide is absent.
That 'fitness cost' phenomenon occurs in insect pests too. Researchers monitoring Culex pipiens mosquitoes overwintering in a cave in southern France (in an area where organophosphate insecticides are widely used) noted a decline in the overall frequency of insecticide-resistant mosquitoes relative to susceptible ones as the winter progressed, indicating a large fitness cost.5 This is understandable in the light of the genetic mechanism
conferring resistance in these mosquitoes. Organophosphate insecticides affect the ability of certain enzymes (proteins)
called esterases to function properly, thus killing the insect. But the resistance genes induce an overproduction of esterase,
due to either gene amplification or gene regulation.5 Note that having additional copies of existing genes or having genes
that fail to switch off (regulate) production is not evidence for evolution because to change microbes into microbiologists,
evolution needs a mechanism for adding new complex functions, not copying existing ones or breaking them (photocopying
a chapter of a book or breaking an electric switch does not create new complex functionality).
Similar overproduction of proteins occurred in DDT-resistant strains of Anopheles mosquitoes, too.7 The proteins metabolize DDT (an organochlorine-based insecticide). In the researchers' words, the transcripts and their proteins "are over-expressed in the resistant strains and, as a consequence, are allowing them to exhibit this resistance."8 Similarly, in Drosophila fruit flies, insecticide
resistance is associated with overtranscription of a particular gene, resulting in 10 to 100 times as much mRNA in resistant
strains as in susceptible strains.9 Given the extra energy and resources
needed for such overproduction, it's hardly surprising then that pesticide resistance carries a fitness cost.10,11
In all of the above examples, we're not seeing the genes, the information, for complex new functions appearing out of nowhere, i.e. by evolution. Instead we're seeing either possible amplification of genes (i.e. additional copies of existing genes) or, more usually, a loss of control over regulation of genes. In other words, the mechanisms for pesticide resistance are not from new genes but from existing genes, and especially from damaged versions of existing genes. There has been no increase in meaningful genetic information but rather a loss of information.
The old joke about "What's worse than finding a worm in your apple?" [Answer: half a worm!] is no joke for apple producers. The FAO has estimated that pests cost horticultural and agricultural producers thousands of millions of dollars annually in lost production. At least 520 insects and mites, 150 plant diseases and 113 weeds have become resistant to pesticides meant to control them.
Thus the pesticide resistance 'icon of evolution' actually gives no support to molecules-to-man evolution whatsoever. We're not seeing improvement in the genes; we see brokenness, for that is what mutations do: they break genes, not create brand new ones. In today's world sometimes it's beneficial to have broken genes (e.g. if you're a rat and there's warfarin around), but the genes are nevertheless broken: undeniably degraded genetic information. No evolution is in evidence.12
Not an arms race
Evolutionists love to portray the development of pesticide resistance as a grim arms race, no doubt leaving many people with the perception that pests are evolving new features all the time. But now that we've seen that pesticide resistance is due to breaking things, not creating new complex features, we can see that 'arms race' is a misnomer. Rather, the struggle is better likened to trench warfare,13 where the defending forces will destroy their own bridge, or blow up their own road, to impede the enemy's advance. An arms race implies that the defending forces are inventing new weapons, but the processes of selection and mutation operating in pests facing a pesticide are not inventing new weapons. So the phenomenon of resistance to nematicides, rodenticides, insecticides, etc., cannot be construed in any way as giving support to evolution's Grand Idea that today's life forms evolved from some single-celled organism billions of years ago.
Rather, the broken genes conferring pesticide resistance have arisen in the time since the Fall. And as surveys have shown, in a world where pesticides are used widely, it doesn't take long for a genetic mutation conferring resistance to spread rapidly around the world.14
Implications for (rather, from!) effective pest control in today's world
What are the practical implications for pest control programs today, i.e. how should pesticide strategies be changed? In fact, pesticide advisers15 at the pest control frontline are mostly already operating practically as if with a creationist perspective (even though as individuals they might not realize it, i.e. they might still accept evolution as being true16). They recognize:
An individual rat or insect or other pest does not develop resistance over time. What changes over time is the susceptibility of a population to a pesticide.
Resistance may be present in a population even before being exposed to a new pesticide, but in very low numbers. The resistance mechanism might affect the pest adversely in certain ways. But upon exposure to pesticide, those individuals that have the ability to break down a pesticide molecule that kills most other individuals in the population survive.
Individuals surviving a pesticide application pass the genetic mechanisms conferring resistance to that particular pesticide on to the next generation. Thus the resistant genes make up a greater proportion of the total gene pool than they did before.
At first, a farmer might not notice a pest population's increasing resistance to a pesticide. However, with the passage of (pest) generations, there comes a point where the farmer is confronted by control failure. The resistance hasn't suddenly appeared, but rather built up steadily since first exposure to the pesticide.17
Notice that this has no relevance to microbes-to-mankind evolution. This is simply a human-imposed selection process (the same principles are at work with natural selection, i.e. no evolution at all).
So what do the pest control experts advise growers to do when faced with loss of pesticide effectiveness? A key resistance management strategy that most farmers are aware of, and practise as much as possible, is pesticide rotation.18,19 That is, alternating the use of pesticides that have different modes of action (i.e. that affect different essential life functions of the pest, e.g. respiration, transmission of nerve signals, etc.). Pesticide rotation works on the principle that when resistance to, say, an organophosphate-based insecticide is beginning to build up in the population, the farmer switches to using, say, a pyrethroid insecticide. Then, as resistance builds up to that pesticide, he switches to a pesticide with a different chemical mode of action again, if one is legally available (e.g. a carbamate).
There have been some instances where multiple resistance has developed: the worst-case scenario for farmers. However, in no way does this represent evolution, as it involved the same processes as discussed above. The fitness cost of such multiple resistance becomes evident when pesticides are withheld for a period, and non-resistant individuals generally come to dominate the population once more. Thus effective pesticide rotation strategies can begin again.
What are the implications from the day-to-day reality of pest responses to pesticides? Evolution is not in evidence, nor does evolutionary theory have any practical relevance to operational science or farming practice.
