
Immortality – Wikipedia

Posted: October 20, 2016 at 11:35 pm

Immortality is eternal life, the ability to live forever.[2] Natural selection has developed potential biological immortality in at least one species, Turritopsis dohrnii.[3]

Certain scientists, futurists, and philosophers have theorized about the immortality of the human body, for instance through an immortalized cell line, and some advocate that human immortality is achievable in the first few decades of the 21st century, whereas others believe that life extension is a more achievable goal in the short term, with immortality awaiting further research breakthroughs in an indefinite future. The absence of aging would provide humans with biological immortality, but not invulnerability to death by physical trauma, although mind uploading could solve that issue if it proved possible. Whether such biological immortality will be achieved within the coming years depends chiefly on research in the former view, and remains a more distant goal in the latter.[4]

In religious contexts, immortality is often stated to be one of the promises of God (or other deities) to human beings who show goodness or else follow divine law. What form an unending human life would take, or whether an immaterial soul exists and possesses immortality, has been a major point of focus of religion, as well as the subject of speculation, fantasy, and debate.

Life extension technologies promise a path to complete rejuvenation. Cryonics holds out the hope that the dead can be revived in the future, following sufficient medical advancements. While, as shown with creatures such as hydra and planarian worms, it is indeed possible for a creature to be biologically immortal, it is not known if it is possible for humans.

Mind uploading is the transference of brain states from a human brain to an alternative medium providing similar functionality. Assuming the process to be possible and repeatable, this would provide immortality to the computation of the original brain, as predicted by futurists such as Ray Kurzweil.[5]

The belief in an afterlife is a fundamental tenet of most religions, including Hinduism, Buddhism, Jainism, Sikhism, Christianity, Zoroastrianism, Islam, Judaism, and the Baháʼí Faith; however, the concept of an immortal soul is not. The “soul” itself has different meanings and is not used in the same way in different religions and different denominations of a religion. For example, various branches of Christianity have disagreeing views on the soul’s immortality and its relation to the body.

Physical immortality is a state of life that allows a person to avoid death and maintain conscious thought. It can mean the unending existence of a person from a physical source other than organic life, such as a computer. Active pursuit of physical immortality can either be based on scientific trends, such as cryonics, digital immortality, breakthroughs in rejuvenation or predictions of an impending technological singularity, or because of a spiritual belief, such as those held by Rastafarians or Rebirthers.

There are three main causes of death: aging, disease and physical trauma.[6] Whether and how each of these can be overcome remains a subject of ongoing research.

Aubrey de Grey, a leading researcher in the field,[7] defines aging as “a collection of cumulative changes to the molecular and cellular structure of an adult organism, which result from essential metabolic processes, but which also, once they progress far enough, increasingly disrupt metabolism, resulting in pathology and death.” The current causes of aging in humans are cell loss (without replacement), DNA damage, oncogenic nuclear mutations and epimutations, cell senescence, mitochondrial mutations, lysosomal aggregates, extracellular aggregates, random extracellular cross-linking, immune system decline, and endocrine changes. Eliminating aging would require finding a solution to each of these causes, a program de Grey calls engineered negligible senescence. There is also a huge body of knowledge indicating that aging is characterized by the loss of molecular fidelity.[8]

Disease is theoretically surmountable via technology. In short, it is an abnormal condition affecting the body of an organism, something the body does not typically have to deal with given its natural makeup.[9] Human understanding of genetics is leading to cures and treatments for myriad previously incurable diseases. The mechanisms by which other diseases do their damage are becoming better understood. Sophisticated methods of detecting diseases early are being developed. Preventative medicine is becoming better understood. Neurodegenerative diseases like Parkinson’s and Alzheimer’s may soon be curable with the use of stem cells. Breakthroughs in cell biology and telomere research are leading to treatments for cancer. Vaccines are being researched for AIDS and tuberculosis. Genes associated with type 1 diabetes and certain types of cancer have been discovered, allowing for new therapies to be developed. Artificial devices attached directly to the nervous system may restore sight to the blind. Drugs are being developed to treat a myriad of other diseases and ailments.

Physical trauma would remain as a threat to perpetual physical life, as an otherwise immortal person would still be subject to unforeseen accidents or catastrophes. The speed and quality of paramedic response remains a determining factor in surviving severe trauma.[10] A body that could automatically repair itself from severe trauma, such as speculated uses for nanotechnology, would mitigate this factor. Being the seat of consciousness, the brain cannot be risked to trauma if a continuous physical life is to be maintained. This aversion to trauma risk to the brain would naturally result in significant behavioral changes that would render physical immortality undesirable.

Organisms otherwise unaffected by these causes of death would still face the problem of obtaining sustenance (whether from currently available agricultural processes or from hypothetical future technological processes) in the face of changing availability of suitable resources as environmental conditions change. After avoiding aging, disease, and trauma, one could still starve to death.

If there is no limitation on the degree of gradual mitigation of risk, then it is possible that the cumulative probability of death over an infinite horizon is less than certainty, even when the risk of fatal trauma in any finite period is greater than zero. Mathematically, this is an aspect of achieving “actuarial escape velocity”.
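
A minimal mathematical sketch of this point (an illustration with invented numbers, not taken from the article): let $p_n$ denote the probability of dying from trauma during year $n$, with every $p_n < 1$. The probability of never dying is the infinite product $\prod_{n=1}^{\infty}(1 - p_n)$, which is strictly positive exactly when the series $\sum_{n=1}^{\infty} p_n$ converges. For example, if risk is mitigated geometrically, say $p_n = p_1 r^{n-1}$ with $0 < r < 1$, then $\sum_n p_n = p_1/(1-r)$ is finite, so the cumulative probability of ever dying over an infinite horizon stays below certainty even though each individual $p_n$ is greater than zero.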

Biological immortality is an absence of aging, specifically the absence of a sustained increase in rate of mortality as a function of chronological age. A cell or organism that does not experience aging, or ceases to age at some point, is biologically immortal.

Biologists have chosen the word immortal to designate cells that are not limited by the Hayflick limit, where cells no longer divide because of DNA damage or shortened telomeres. The first and still most widely used immortal cell line is HeLa, developed from cells taken from the malignant cervical tumor of Henrietta Lacks without her consent in 1951. Prior to the 1961 work of Leonard Hayflick, there was the erroneous belief fostered by Alexis Carrel that all normal somatic cells are immortal. By preventing cells from reaching senescence one can achieve biological immortality; telomeres, a “cap” at the end of DNA, are thought to be the cause of cell aging. Every time a cell divides the telomere becomes a bit shorter; when it is finally worn down, the cell is unable to split and dies. Telomerase is an enzyme which rebuilds the telomeres in stem cells and cancer cells, allowing them to replicate an infinite number of times.[11] No definitive work has yet demonstrated that telomerase can be used in human somatic cells to prevent healthy tissues from aging. On the other hand, scientists hope to be able to grow organs with the help of stem cells, allowing organ transplants without the risk of rejection, another step in extending human life expectancy. These technologies are the subject of ongoing research, and are not yet realized.[citation needed]

Life defined as biologically immortal is still susceptible to causes of death besides aging, including disease and trauma, as defined above. Notable immortal species include:

As the existence of biologically immortal species demonstrates, there is no thermodynamic necessity for senescence: a defining feature of life is that it takes in free energy from the environment and unloads its entropy as waste. Living systems can even build themselves up from seed, and routinely repair themselves. Aging is therefore presumed to be a byproduct of evolution, but why mortality should be selected for remains a subject of research and debate. Programmed cell death and the telomere “end replication problem” are found even in the earliest and simplest of organisms.[16] This may be a tradeoff between selecting for cancer and selecting for aging.[17]

Modern theories on the evolution of aging include the following:

There are some known naturally occurring and artificially produced chemicals that may increase the lifetime or life-expectancy of a person or organism, such as resveratrol.[20][21]

Some scientists believe that boosting the amount or proportion of telomerase in the body, a naturally forming enzyme that helps maintain the protective caps at the ends of chromosomes,[22] could prevent cells from dying and so may ultimately lead to extended, healthier lifespans. A team of researchers at the Spanish National Cancer Centre (Madrid) tested the hypothesis on mice. It was found that those mice which were genetically engineered to produce 10 times the normal levels of telomerase lived 50% longer than normal mice.[23]

In normal circumstances, without the presence of telomerase, if a cell divides repeatedly, at some point all the progeny will reach their Hayflick limit. With the presence of telomerase, each dividing cell can replace the lost bit of DNA, and any single cell can then divide unbounded. While this unbounded growth property has excited many researchers, caution is warranted in exploiting this property, as exactly this same unbounded growth is a crucial step in enabling cancerous growth. If an organism can replicate its body cells faster, then it would theoretically stop aging.
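
A rough illustrative sketch of the counting logic described above, in Python (the parameter values are invented placeholders, not measured biological constants): without telomerase, a fixed loss per division imposes a Hayflick-style cap on the number of divisions; with telomerase rebuilding the cap, no such bound is reached.

# Toy model of telomere shortening and the Hayflick limit.
# All numbers are illustrative only.
TELOMERE_START = 10_000        # starting telomere length in base pairs (illustrative)
LOSS_PER_DIVISION = 200        # base pairs lost at each division (illustrative)
SENESCENCE_THRESHOLD = 2_000   # below this length the cell stops dividing

def divisions_until_senescence(telomerase_active: bool, max_divisions: int = 1_000) -> int:
    """Count divisions before the telomere is too short for the cell to keep dividing."""
    length = TELOMERE_START
    divisions = 0
    while length > SENESCENCE_THRESHOLD and divisions < max_divisions:
        length -= LOSS_PER_DIVISION
        if telomerase_active:
            length += LOSS_PER_DIVISION   # telomerase rebuilds the lost cap
        divisions += 1
    return divisions

print(divisions_until_senescence(telomerase_active=False))   # 40: a Hayflick-like limit
print(divisions_until_senescence(telomerase_active=True))    # 1000: stopped only by max_divisions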

Embryonic stem cells express telomerase, which allows them to divide repeatedly and form the individual. In adults, telomerase is highly expressed in cells that need to divide regularly (e.g., in the immune system), whereas most somatic cells express it only at very low levels in a cell-cycle dependent manner.

Technological immortality is the prospect for much longer life spans made possible by scientific advances in a variety of fields: nanotechnology, emergency room procedures, genetics, biological engineering, regenerative medicine, microbiology, and others. Contemporary life spans in the advanced industrial societies are already markedly longer than those of the past because of better nutrition, availability of health care, standard of living and bio-medical scientific advances. Technological immortality predicts further progress for the same reasons over the near term. An important aspect of current scientific thinking about immortality is that some combination of human cloning, cryonics or nanotechnology will play an essential role in extreme life extension.

Robert Freitas, a nanorobotics theorist, suggests tiny medical nanorobots could be created to go through human bloodstreams, find dangerous things like cancer cells and bacteria, and destroy them.[24] Freitas anticipates that gene-therapies and nanotechnology will eventually make the human body effectively self-sustainable and capable of living indefinitely in empty space, short of severe brain trauma. This supports the theory that we will be able to continually create biological or synthetic replacement parts to replace damaged or dying ones. Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging.

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation. Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030.[25] According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman’s theoretical micromachines (see nanobiotechnology). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) “swallow the doctor”. The idea was incorporated into Feynman’s 1959 essay There’s Plenty of Room at the Bottom.[26]

Cryonics, the practice of preserving organisms (either intact specimens or only their brains) for possible future revival by storing them at cryogenic temperatures where metabolism and decay are almost completely stopped, can be used to ‘pause’ the dying process for those who believe that life extension technologies will not develop sufficiently within their lifetime. Ideally, cryonics would allow clinically dead people to be brought back in the future after cures to the patients’ diseases have been discovered and aging is reversible. Modern cryonics procedures use a process called vitrification which creates a glass-like state rather than freezing as the body is brought to low temperatures. This process reduces the risk of ice crystals damaging the cell structure, which would be especially detrimental to cell structures in the brain, as their fine structure gives rise to the individual’s mind.

One idea that has been advanced involves uploading an individual’s habits and memories via direct mind-computer interface. The individual’s memory may be loaded to a computer or to a new organic body. Extropian futurists like Moravec and Kurzweil have proposed that, thanks to exponentially growing computing power, it will someday be possible to upload human consciousness onto a computer system, and exist indefinitely in a virtual environment. This could be accomplished via advanced cybernetics, where computer hardware would initially be installed in the brain to help sort memory or accelerate thought processes. Components would be added gradually until the person’s entire brain functions were handled by artificial devices, avoiding sharp transitions that would raise issues of identity and the attendant risk of the person being declared dead and thus no longer being the legitimate owner of his or her property. After this point, the human body could be treated as an optional accessory and the program implementing the person could be transferred to any sufficiently powerful computer. Another possible mechanism for mind upload is to perform a detailed scan of an individual’s original, organic brain and simulate the entire structure in a computer. What level of detail such scans and simulations would need to achieve to emulate awareness, and whether the scanning process would destroy the brain, is still to be determined.[27] Whatever the route to mind upload, persons in this state could then be considered essentially immortal, short of loss or traumatic destruction of the machines that maintained them.[clarification needed]

Transforming a human into a cyborg can include brain implants or extracting a human processing unit and placing it in a robotic life-support system. Even replacing biological organs with robotic ones could increase life span (e.g., pacemakers), and, depending on the definition, many technological upgrades to the body, like genetic modifications or the addition of nanobots, would qualify an individual as a cyborg. Some people believe that such modifications would make one impervious to aging and disease and theoretically immortal unless killed or destroyed.

Another approach, developed by biogerontologist Marios Kyriazis, holds that human biological immortality is an inevitable consequence of evolution. As the natural tendency is to create progressively more complex structures,[28] there will be a time (Kyriazis claims this time is now[29]) when the evolution of a more complex human brain will be faster via a process of developmental singularity[30] rather than through Darwinian evolution. In other words, the evolution of the human brain as we know it will cease and there will be no need for individuals to procreate and then die. Instead, a new type of development will take over, in the same individual, who will have to live for many centuries in order for the development to take place. This intellectual development will be facilitated by technology such as synthetic biology, artificial intelligence and a technological singularity process.

As late as 1952, the editorial staff of the Syntopicon found in their compilation of the Great Books of the Western World that “The philosophical issue concerning immortality cannot be separated from issues concerning the existence and nature of man’s soul.”[31] Thus, the vast majority of speculation regarding immortality before the 21st century concerned the nature of the afterlife.

Immortality in ancient Greek religion originally always included an eternal union of body and soul as can be seen in Homer, Hesiod, and various other ancient texts. The soul was considered to have an eternal existence in Hades, but without the body the soul was considered dead. Although almost everybody had nothing to look forward to but an eternal existence as a disembodied dead soul, a number of men and women were considered to have gained physical immortality and been brought to live forever in either Elysium, the Islands of the Blessed, heaven, the ocean or literally right under the ground. Among these were Amphiaraus, Ganymede, Ino, Iphigenia, Menelaus, Peleus, and a great part of those who fought in the Trojan and Theban wars. Some were considered to have died and been resurrected before they achieved physical immortality. Asclepius was killed by Zeus only to be resurrected and transformed into a major deity. In some versions of the Trojan War myth, Achilles, after being killed, was snatched from his funeral pyre by his divine mother Thetis, resurrected, and brought to an immortal existence in either Leuce, the Elysian plains, or the Islands of the Blessed. Memnon, who was killed by Achilles, seems to have received a similar fate. Alcmene, Castor, Heracles, and Melicertes were also among the figures sometimes considered to have been resurrected to physical immortality. According to Herodotus’ Histories, the 7th century BC sage Aristeas of Proconnesus was first found dead, after which his body disappeared from a locked room. Later he was found not only to have been resurrected but to have gained immortality.

The philosophical idea of an immortal soul was a belief first appearing with either Pherecydes or the Orphics, and most importantly advocated by Plato and his followers. This, however, never became the general norm in Hellenistic thought. As may be witnessed even into the Christian era, not least by the complaints of various philosophers over popular beliefs, many or perhaps most traditional Greeks maintained the conviction that certain individuals were resurrected from the dead and made physically immortal and that others could only look forward to an existence as disembodied and dead, though everlasting, souls. The parallel between these traditional beliefs and the later resurrection of Jesus was not lost on the early Christians, as Justin Martyr argued: “when we say… Jesus Christ, our teacher, was crucified and died, and rose again, and ascended into heaven, we propose nothing different from what you believe regarding those whom you consider sons of Zeus.” (1 Apol. 21).

The goal of Hinayana is Arhatship and Nirvana. By contrast, the goal of Mahayana is Buddhahood.

According to one Tibetan Buddhist teaching, Dzogchen, individuals can transform the physical body into an immortal body of light called the rainbow body.

Christian theology holds that Adam and Eve lost physical immortality for themselves and all their descendants in the Fall of Man, although this initial “imperishability of the bodily frame of man” was “a preternatural condition”.[32] Christians who profess the Nicene Creed believe that every dead person (whether they believed in Christ or not) will be resurrected from the dead at the Second Coming, and this belief is known as Universal resurrection.[citation needed]

N.T. Wright, a theologian and former Bishop of Durham, has said many people forget the physical aspect of what Jesus promised. He told Time: “Jesus’ resurrection marks the beginning of a restoration that he will complete upon his return. Part of this will be the resurrection of all the dead, who will ‘awake’, be embodied and participate in the renewal. Wright says John Polkinghorne, a physicist and a priest, has put it this way: ‘God will download our software onto his hardware until the time he gives us new hardware to run the software again for ourselves.’ That gets to two things nicely: that the period after death (the Intermediate state) is a period when we are in God’s presence but not active in our own bodies, and also that the more important transformation will be when we are again embodied and administering Christ’s kingdom.”[33] This kingdom will consist of Heaven and Earth “joined together in a new creation”, he said.

Hindus believe in an immortal soul which is reincarnated after death. According to Hinduism, people repeat a process of life, death, and rebirth in a cycle called samsara. If they live their life well, their karma improves and their station in the next life will be higher, and conversely lower if they live their life poorly. After many life times of perfecting its karma, the soul is freed from the cycle and lives in perpetual bliss. There is no place of eternal torment in Hinduism, although if a soul consistently lives very evil lives, it could work its way down to the very bottom of the cycle.[citation needed]

There are explicit renderings in the Upanishads alluding to a physically immortal state brought about by purification and sublimation of the five elements that make up the body. For example, in the Shvetashvatara Upanishad (Chapter 2, Verse 12), it is stated “When earth, water, fire, air and akasa arise, that is to say, when the five attributes of the elements, mentioned in the books on yoga, become manifest then the yogi’s body becomes purified by the fire of yoga and he is free from illness, old age and death.” This phenomenon is possible when the soul reaches enlightenment while the body and mind are still intact, an extreme rarity, and can only be achieved with the utmost dedication, meditation and consciousness.[citation needed]

Another view of immortality is traced to the Vedic tradition by the interpretation of Maharishi Mahesh Yogi:

That man indeed whom these (contacts) do not disturb, who is even-minded in pleasure and pain, steadfast, he is fit for immortality, O best of men.[34]

To Maharishi Mahesh Yogi, the verse means, “Once a man has become established in the understanding of the permanent reality of life, his mind rises above the influence of pleasure and pain. Such an unshakable man passes beyond the influence of death and in the permanent phase of life: he attains eternal life… A man established in the understanding of the unlimited abundance of absolute existence is naturally free from existence of the relative order. This is what gives him the status of immortal life.”[34]

An Indian Tamil saint known as Vallalar claimed to have achieved immortality before disappearing forever from a locked room in 1874.[35][36]

Many Indian fables and tales include instances of metempsychosis (the ability to jump into another body) performed by advanced Yogis in order to live a longer life.[citation needed]

The traditional concept of an immaterial and immortal soul distinct from the body was not found in Judaism before the Babylonian Exile, but developed as a result of interaction with Persian and Hellenistic philosophies. Accordingly, the Hebrew word nephesh, although translated as “soul” in some older English Bibles, actually has a meaning closer to “living being”.[citation needed] Nephesh was rendered in the Septuagint as ψυχή (psychē), the Greek word for soul.[citation needed]

The only Hebrew word traditionally translated “soul” (nephesh) in English language Bibles refers to a living, breathing conscious body, rather than to an immortal soul.[37] In the New Testament, the Greek word traditionally translated “soul” (ψυχή) has substantially the same meaning as the Hebrew, without reference to an immortal soul.[38] Soul may refer to the whole person, the self: three thousand souls were converted in Acts 2:41 (see Acts 3:23).

The Hebrew Bible speaks about Sheol (שאול), originally a synonym of the grave (the repository of the dead), or the cessation of existence until the Resurrection. This doctrine of resurrection is mentioned explicitly only in Daniel 12:1–4, although it may be implied in several other texts. New theories concerning Sheol arose in the intertestamental literature.

The views about immortality in Judaism are perhaps best exemplified by the various references to it in the Second Temple period. The concept of resurrection of the physical body is found in 2 Maccabees, according to which it will happen through recreation of the flesh.[39] Resurrection of the dead also appears in detail in the extra-canonical books of Enoch[40] and in the Apocalypse of Baruch.[41] According to Philip R. Davies, a British scholar of ancient Judaism, there is little or no clear reference either to immortality or to resurrection from the dead in the Dead Sea Scrolls.[42] Both Josephus and the New Testament record that the Sadducees did not believe in an afterlife,[43] but the sources vary on the beliefs of the Pharisees. The New Testament claims that the Pharisees believed in the resurrection, but does not specify whether this included the flesh or not.[44] According to Josephus, who was himself a Pharisee, the Pharisees held that only the soul was immortal: the souls of good people will be reincarnated and pass into other bodies, while the souls of the wicked will suffer eternal punishment.[45] Jubilees seems to refer to the resurrection of the soul only, or to a more general idea of an immortal soul.[46]

Rabbinic Judaism claims that the righteous dead will be resurrected in the Messianic age with the coming of the messiah. They will then be granted immortality in a perfect world. The wicked dead, on the other hand, will not be resurrected at all. This is not the only Jewish belief about the afterlife. The Tanakh is not specific about the afterlife, so there are wide differences in views and explanations among believers.[citation needed]

It is repeatedly stated in the Lüshi Chunqiu that death is unavoidable.[47] Henri Maspero noted that many scholarly works frame Taoism as a school of thought focused on the quest for immortality.[48] Isabelle Robinet asserts that Taoism is better understood as a way of life than as a religion, and that its adherents do not approach or view Taoism the way non-Taoist historians have done.[49] In the Tractate of Actions and their Retributions, a traditional teaching, spiritual immortality can be awarded to people who do a certain amount of good deeds and live a simple, pure life. A list of good deeds and sins is tallied to determine whether or not a mortal is worthy. Spiritual immortality in this definition allows the soul to leave the earthly realms of afterlife and go to pure realms in the Taoist cosmology.[50]

Zoroastrians believe that on the fourth day after death, the human soul leaves the body and the body remains as an empty shell. Souls would go to either heaven or hell; these concepts of the afterlife in Zoroastrianism may have influenced Abrahamic religions. The Persian word for “immortal” is associated with the month “Amurdad”, meaning “deathless” in Persian, in the Iranian calendar (near the end of July). The month of Amurdad or Ameretat is celebrated in Persian culture as ancient Persians believed the “Angel of Immortality” won over the “Angel of Death” in this month.[51]

The possibility of clinical immortality raises a host of medical, philosophical, and religious issues and ethical questions. These include persistent vegetative states, the nature of personality over time, technology to mimic or copy the mind or its processes, social and economic disparities created by longevity, and survival of the heat death of the universe.

The Epic of Gilgamesh, one of the first literary works, centers primarily on a hero’s quest to become immortal.[7]

Physical immortality has also been imagined as a form of eternal torment, as in Mary Shelley’s short story “The Mortal Immortal”, the protagonist of which witnesses everyone he cares about dying around him. Jorge Luis Borges explored the idea that life gets its meaning from death in the short story “The Immortal”; an entire society having achieved immortality, they found time becoming infinite, and so found no motivation for any action. In his book “Thursday’s Fictions”, and the stage and film adaptations of it, Richard James Allen tells the story of a woman named Thursday who tries to cheat the cycle of reincarnation to get a form of eternal life. At the end of this fantastical tale, her son, Wednesday, who has witnessed the havoc his mother’s quest has caused, forgoes the opportunity for immortality when it is offered to him.[52] Likewise, the novel Tuck Everlasting depicts immortality as “falling off the wheel of life” and is viewed as a curse as opposed to a blessing. In the anime Casshern Sins, humanity achieves immortality due to advances in medical technology; however, the inability of the human race to die causes Luna, a Messianic figure, to come forth and offer normal lifespans because she had believed that without death, humans could not live. Ultimately, Casshern takes up the cause of death for humanity when Luna begins to restore humanity’s immortality. In Anne Rice’s book series “The Vampire Chronicles”, vampires are portrayed as immortal and ageless, but their inability to cope with the changes in the world around them means that few vampires live for much more than a century, and those who do often view their changeless form as a curse.

Although some scientists state that radical life extension, delaying and stopping aging are achievable,[53] there are no international or national programs focused on stopping aging or on radical life extension. In 2012 in Russia, and then in the United States, Israel and the Netherlands, pro-immortality political parties were launched. They aimed to provide political support for anti-aging and radical life extension research and technologies, to promote a transition to the next steps of radical life extension, life without aging, and finally immortality, and to make such technologies accessible to most currently living people.[54]

There are numerous symbols representing immortality. The ankh is an Egyptian symbol of life that holds connotations of immortality when depicted in the hands of the gods and pharaohs, who were seen as having control over the journey of life. The Möbius strip in the shape of a trefoil knot is another symbol of immortality. Most symbolic representations of infinity or the life cycle are often used to represent immortality depending on the context they are placed in. Other examples include the Ouroboros, the Chinese fungus of longevity, the ten kanji, the phoenix, the peacock in Christianity,[55] and the colors amaranth (in Western culture) and peach (in Chinese culture).

Immortal species abound in fiction, especially in fantasy literature.

Read more:

Immortality – Wikipedia

Posted in Immortality

Pantheism – Wikipedia

Posted: at 11:33 pm

Pantheism is the belief that all of reality is identical with divinity,[1] or that everything composes an all-encompassing, immanent god.[2] Pantheists thus do not believe in a distinct personal or anthropomorphic god.[3]

In the West, pantheism was formalized as a separate theology and philosophy based on the work of the 17th-century philosopher Baruch Spinoza[4]:p.7 (also known as Benedict Spinoza), whose book Ethics was an answer to Descartes’ famous dualist theory that the body and spirit are separate.[5] Although the term pantheism was not coined until after his death, Spinoza is regarded as its most celebrated advocate.[6] His work, Ethics, was the major source from which Western pantheism spread.[7]

Pantheistic concepts may date back thousands of years, and some religions in the East continue to contain pantheistic elements.

Pantheism is derived from the Greek pan (meaning “all, of everything”) and theos (meaning “god, divine”).

There are a variety of definitions of pantheism. Some consider it a theological and philosophical position concerning God.[4]:p.8

As a religious position, some describe pantheism as the polar opposite of atheism.[5]:pp. 7 From this standpoint, pantheism is the view that everything is part of an all-encompassing, immanent God.[8] All forms of reality may then be considered either modes of that Being, or identical with it.[9] Some hold that pantheism is a non-religious philosophical position. To them, pantheism is the view that the Universe (in the sense of the totality of all existence) and God are identical (implying a denial of the personality and transcendence of God).[10]

Pantheistic tendencies existed in a number of early Gnostic groups, with pantheistic thought appearing throughout the Middle Ages.[12] These included a section of Johannes Scotus Eriugena’s 9th-century work De divisione naturae and the beliefs of mystics such as Amalric of Bena (11th–12th centuries) and Eckhart (12th–13th).[12]:pp. 620–621

The Roman Catholic Church has long regarded pantheistic ideas as heresy.[13][14] Giordano Bruno, an Italian friar who evangelized about an immanent and infinite God, was burned at the stake in 1600 by the Roman Inquisition. He has since become known as a celebrated pantheist and martyr of science.[15] Bruno influenced many later thinkers including Baruch Spinoza.

In the West, pantheism was formalized as a separate theology and philosophy based on the work of the 17th-century philosopher Baruch Spinoza.[4]:p.7 Spinoza was a Dutch philosopher of Sephardi Portuguese origin,[16] whose book Ethics was an answer to Descartes’ famous dualist theory that the body and spirit are separate.[5] Spinoza held the monist view that the two are the same, and monism is a fundamental part of his philosophy. He was described as a “God-intoxicated man,” and used the word God to describe the unity of all substance.[5] Although the term pantheism was not coined until after his death, Spinoza is regarded as its most celebrated advocate.[6] His work, Ethics, was the major source from which Western pantheism spread.[7]

The breadth and importance of Spinoza’s work was not fully realized until many years after his death. By laying the groundwork for the 18th-century Enlightenment[17] and modern biblical criticism,[18] including modern conceptions of the self and the universe,[19] he came to be considered one of the great rationalists of 17th-century philosophy.[20]

Spinoza’s magnum opus, the posthumous Ethics, in which he opposed Descartes’ mind-body dualism, has earned him recognition as one of Western philosophy’s most important thinkers. In his book Ethics, “Spinoza wrote the last indisputable Latin masterpiece, and one in which the refined conceptions of medieval philosophy are finally turned against themselves and destroyed entirely.”[21] Hegel said, “You are either a Spinozist or not a philosopher at all.”[22] His philosophical accomplishments and moral character prompted 20th-century philosopher Gilles Deleuze to name him “the ‘prince’ of philosophers”.[23]

Spinoza was raised in the Portuguese Jewish community in Amsterdam. He developed highly controversial ideas regarding the authenticity of the Hebrew Bible and the nature of the Divine. The Jewish religious authorities issued a cherem (Hebrew: חרם, a kind of ban, shunning, ostracism, expulsion, or excommunication) against him, effectively excluding him from Jewish society at age 23. His books were also later put on the Catholic Church’s Index of Forbidden Books.

The first known use of the term “pantheism” was in Latin, by the English mathematician Joseph Raphson in his work De spatio reali, published in 1697.[24] In De spatio reali, Raphson begins with a distinction between atheistic “panhylists” (from the Greek roots pan, “all”, and hyle, “matter”), who believe everything is matter, and Spinozan “pantheists” who believe in “a certain universal substance, material as well as intelligence, that fashions all things that exist out of its own essence.”[25][26] Raphson found the universe to be immeasurable in respect to a human’s capacity of understanding, and believed that humans would never be able to comprehend it.[27]

The term was first used in English by the Irish writer John Toland in his work of 1705 Socinianism Truly Stated, by a pantheist.[12]:pp. 617618 Toland was influenced by both Spinoza and Bruno, and had read Joseph Raphson’s De Spatio Reali, referring to it as “the ingenious Mr. Ralphson’s (sic) Book of Real Space”.[28] Like Raphson, he used the terms “pantheist” and “Spinozist” interchangeably.[29] In 1720 he wrote the Pantheisticon: or The Form of Celebrating the Socratic-Society in Latin, envisioning a pantheist society which believed, “all things in the world are one, and one is all in all things … what is all in all things is God, eternal and immense, neither born nor ever to perish.”[30][31] He clarified his idea of pantheism in a letter to Gottfried Leibniz in 1710 when he referred to “the pantheistic opinion of those who believe in no other eternal being but the universe”.[12][32][33][34]

In 1785, a major controversy about Spinoza’s philosophy between Friedrich Jacobi, a critic, and Moses Mendelssohn, a defender, known in German as the Pantheismus-Streit, helped to spread pantheism to many German thinkers in the late 18th and 19th centuries.[35]

In the mid-eighteenth century, the English theologian Daniel Waterland defined pantheism this way: “It supposes God and nature, or God and the whole universe, to be one and the same substance, one universal being; insomuch that men’s souls are only modifications of the divine substance.”[12][36] In the early nineteenth century, the German theologian Julius Wegscheider defined pantheism as the belief that God and the world established by God are one and the same.[12][37]

During the beginning of the 19th century, pantheism was the theological viewpoint of many leading writers and philosophers, attracting figures such as William Wordsworth and Samuel Coleridge in Britain; Johann Gottlieb Fichte, Friedrich Wilhelm Joseph Schelling and Georg Wilhelm Friedrich Hegel in Germany; Knut Hamsun in Norway; and Walt Whitman, Ralph Waldo Emerson and Henry David Thoreau in the United States. Seen as a growing threat by the Vatican, pantheism was formally condemned in 1864 by Pope Pius IX in the Syllabus of Errors.[38]

In 2011, a letter written in 1886 by William Herndon, Abraham Lincoln’s law partner, was sold at auction for US$30,000.[39] In it, Herndon writes of the U.S. President’s evolving religious views, which included pantheism.

“Mr. Lincoln’s religion is too well known to me to allow of even a shadow of a doubt; he is or was a Theist and a Rationalist, denying all extraordinary supernatural inspiration or revelation. At one time in his life, to say the least, he was an elevated Pantheist, doubting the immortality of the soul as the Christian world understands that term. He believed that the soul lost its identity and was immortal as a force. Subsequent to this he rose to the belief of a God, and this is all the change he ever underwent.”[39][40]

The subject is understandably controversial, but the content of the letter is consistent with Lincoln’s fairly lukewarm approach to organized religion.[40]

Some 19th century theologians considered various pre-Christian religions and philosophies to be pantheistic.

Pantheism was regarded as similar to the ancient Hindu[12]:p. 618 philosophy of Advaita (non-dualism) to the extent that the 19th-century German Sanskritist Theodore Goldstücker remarked that Spinoza’s thought was “… a western system of philosophy which occupies a foremost rank amongst the philosophies of all nations and ages, and which is so exact a representation of the ideas of the Vedanta, that we might have suspected its founder to have borrowed the fundamental principles of his system from the Hindus.”[41]

19th-century European theologians also considered Ancient Egyptian religion to contain pantheistic elements and pointed to Egyptian philosophy as a source of Greek Pantheism.[12]:pp. 618–620 The latter included some of the Presocratics, such as Heraclitus and Anaximander.[42] The Stoics were pantheists, beginning with Zeno of Citium and culminating in the emperor-philosopher Marcus Aurelius. During the pre-Christian Roman Empire, Stoicism was one of the three dominant schools of philosophy, along with Epicureanism and Neoplatonism.[43][44] The early Taoism of Lao Zi and Zhuangzi is also sometimes considered pantheistic.[32]

In 2007, Dorion Sagan, the son of the famous scientist and science communicator Carl Sagan, published a book entitled Dazzle Gradually: Reflections on the Nature of Nature, co-written with Carl Sagan’s ex-wife, Lynn Margulis. In a chapter entitled “Truth of My Father”, he declares: “My father believed in the God of Spinoza and Einstein, God not behind nature, but as nature, equivalent to it.”[45]

In a letter written to Eduard Büsching (25 October 1929), after Büsching sent Albert Einstein a copy of his book Es gibt keinen Gott, Einstein wrote, “We followers of Spinoza see our God in the wonderful order and lawfulness of all that exists and in its soul [Beseeltheit] as it reveals itself in man and animal.”[46] According to Einstein, the book only dealt with the concept of a personal god and not the impersonal God of pantheism.[46] In a letter written in 1954 to philosopher Eric Gutkind, Albert Einstein wrote “the word God is for me nothing more than the expression and product of human weaknesses.”[47][48] In another letter written in 1954 he wrote “I do not believe in a personal God and I have never denied this but have expressed it clearly.”[47]

In the late 20th century, pantheism was often declared to be the underlying theology of Neopaganism,[49] and pantheists began forming organizations devoted specifically to pantheism and treating it as a separate religion.[32]

Pantheism is mentioned in a Papal encyclical in 2009[50] and a statement on New Year’s Day in 2010,[51] criticizing pantheism for denying the superiority of humans over nature and “seeing the source of man’s salvation in nature”.[50] In a review of the 2009 film Avatar, Ross Douthat, an author, described pantheism as “Hollywood’s religion of choice for a generation now”.[52]

In 2015, notable Los Angeles muralist Levi Ponce was commissioned to paint “Luminaries of Pantheism” for an area in Venice, California that receives over a million onlookers per year. The organization that commissioned the work, The Paradise Project, is “dedicated to celebrating and spreading awareness about pantheism.” The mural painting depicts Albert Einstein, Alan Watts, Baruch Spinoza, Terence McKenna, Carl Jung, Carl Sagan, Emily Dickinson, Nikola Tesla, Friedrich Nietzsche, Ralph Waldo Emerson, W.E.B. Du Bois, Henry David Thoreau, Elizabeth Cady Stanton, Rumi, Adi Shankara, and Lao Tzu.[53]

There are multiple varieties of pantheism,[12][54]:3 and various systems of classifying them, relying on one or more spectra or on discrete categories.

The American philosopher Charles Hartshorne used the term Classical Pantheism to describe the deterministic philosophies of Baruch Spinoza, the Stoics, and other like-minded figures.[55] Pantheism (All-is-God) is often associated with monism (All-is-One) and some have suggested that it logically implies determinism (All-is-Now).[5][56][57][58][59] Albert Einstein explained theological determinism by stating,[60] “the past, present, and future are an ‘illusion’”. This form of pantheism has been referred to as “extreme monism”, in which, in the words of one commentator, “God decides or determines everything, including our supposed decisions.”[61] Other examples of determinism-inclined pantheisms include those of Ralph Waldo Emerson[62] and Georg Wilhelm Friedrich Hegel.[63]

However, some have argued against treating every meaning of “unity” as an aspect of pantheism,[64] and there exist versions of pantheism that regard determinism as an inaccurate or incomplete view of nature. Examples include the beliefs of Friedrich Wilhelm Joseph Schelling and William James.[65]

It may also be possible to distinguish two types of pantheism, one being more religious and the other being more philosophical. The Columbia Encyclopedia writes of the distinction:

Philosophers and theologians have often suggested that pantheism implies monism.[67] Different types of monism include:[69]

Views contrasting with monism are:

Monism in modern philosophy of mind can be divided into three broad categories:

Certain positions do not fit easily into the above categories, such as functionalism, anomalous monism, and reflexive monism. Moreover, they do not define the meaning of “real”.

In 1896, J. H. Worman, a theologian, identified seven categories of pantheism: Mechanical or materialistic (God the mechanical unity of existence); Ontological (fundamental unity, Spinoza); Dynamic; Psychical (God is the soul of the world); Ethical (God is the universal moral order, Johann Gottlieb Fichte); Logical (Hegel); and Pure (absorption of God into nature, which Worman equates with atheism).[12]

More recently, Paul D. Feinberg, professor of biblical and systematic theology at Trinity Evangelical Divinity School, also identified seven: Hylozoistic; Immanentistic; Absolutistic monistic; Relativistic monistic; Acosmic; Identity of opposites; and Neoplatonic or emanationistic.[74]

Nature worship or nature mysticism is often conflated and confused with pantheism. At least one expert in pantheist philosophy has pointed out that Spinoza’s identification of God with nature is very different from the ideas of Harold Wood, founder of the Universal Pantheist Society and a self-identifying pantheist with environmental ethical concerns. Wood’s use of the word nature to describe his worldview is suggested to be vastly different from the “nature” of modern sciences. He and other nature mystics who also identify as pantheists use “nature” to refer to the limited natural environment (as opposed to the man-made built environment). This use of “nature” is different from the broader use by Spinoza and other pantheists describing natural laws and the overall phenomena of the physical world. Nature mysticism may be compatible with pantheism, but it may also be compatible with theism and other views.[75]

Panentheism (from Greek πᾶν (pân) “all”; ἐν (en) “in”; and θεός (theós) “God”; “all-in-God”) was formally coined in Germany in the 19th century in an attempt to offer a philosophical synthesis between traditional theism and pantheism, stating that God is substantially omnipresent in the physical universe but also exists “apart from” or “beyond” it as its Creator and Sustainer.[76]:p.27 Thus panentheism separates itself from pantheism, positing the extra claim that God exists above and beyond the world as we know it.[77]:p.11 The line between pantheism and panentheism can be blurred depending on varying definitions of God, so there have been disagreements when assigning particular notable figures to pantheism or panentheism.[76]:pp. 71–72, 87–88, 105[78]

Pandeism is another word derived from pantheism and is characterized as a combination of reconcilable elements of pantheism and deism.[79] It assumes a Creator-deity which is at some point distinct from the universe and then transforms into it, resulting in a universe similar to the pantheistic one in present essence, but differing in origin.

Panpsychism is the philosophical view held by many pantheists that consciousness, mind, or soul is a universal feature of all things.[80] Some pantheists also subscribe to the distinct philosophical views hylozoism (or panvitalism), the view that everything is alive, and its close neighbor animism, the view that everything has a soul or spirit.[81]

Many traditional and folk religions including African traditional religions[82] and Native American religions[84] can be seen as pantheistic, or a mixture of pantheism and other doctrines such as polytheism and animism. According to pantheists, there are elements of pantheism in some forms of Christianity.[85][86][87] Hinduism contains pantheistic views on the Divine, but also panentheistic, polytheistic, monotheistic and atheistic views.

Pantheism is popular in modern spirituality and New Religious Movements, such as Neopaganism and Theosophy.[91] Two organizations that specify the word pantheism in their title formed in the last quarter of the 20th century. The Universal Pantheist Society, open to all varieties of pantheists and supportive of environmental causes, was founded in 1975.[92] The World Pantheist Movement is headed by Paul Harrison, an environmentalist, writer and a former vice president of the Universal Pantheist Society, from which he resigned in 1996. The World Pantheist Movement was incorporated in 1999 to focus exclusively on promoting a strict metaphysical naturalistic version of pantheism,[93] considered by some a form of religious naturalism.[94] It has been described as an example of “dark green religion” with a focus on environmental ethics.[95]

Read the original post:
Pantheism – Wikipedia

Posted in Pantheism

Meme – Wikipedia

Posted: October 19, 2016 at 4:12 am

A meme (/miːm/ MEEM)[1] is “an idea, behavior, or style that spreads from person to person within a culture”.[2] A meme acts as a unit for carrying cultural ideas, symbols, or practices that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena with a mimicked theme. Supporters of the concept regard memes as cultural analogues to genes in that they self-replicate, mutate, and respond to selective pressures.[3]

Proponents theorize that memes are a viral phenomenon that may evolve by natural selection in a manner analogous to that of biological evolution. Memes do this through the processes of variation, mutation, competition, and inheritance, each of which influences a meme’s reproductive success. Memes spread through the behavior that they generate in their hosts. Memes that propagate less prolifically may become extinct, while others may survive, spread, and (for better or for worse) mutate. Memes that replicate most effectively enjoy more success, and some may replicate effectively even when they prove to be detrimental to the welfare of their hosts.[4]
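
A purely illustrative toy simulation of this variation, competition and inheritance dynamic, written in Python (the meme names and probabilities are invented for the sketch and are not part of the article): memes with a higher transmission probability tend to take over the population of hosts, poor spreaders tend to die out, and occasional mutation introduces new variants.

import random
from collections import Counter

# Each meme has a transmission probability (its "fitness"). At each step one
# host shares its meme with another host, who adopts it with that probability
# (competition and inheritance); rare mutation adds variant memes (variation).
random.seed(1)

MEMES = {"catchy_tune": 0.6, "dull_slogan": 0.1}   # invented memes and spread rates
hosts = [random.choice(list(MEMES)) for _ in range(200)]

for step in range(10_000):
    speaker, listener = random.sample(range(len(hosts)), 2)
    meme = hosts[speaker]
    if random.random() < MEMES[meme]:        # transmission succeeds with meme-specific probability
        hosts[listener] = meme
    if random.random() < 0.001:              # occasional mutation of the copied meme
        variant = f"variant_{step}"
        MEMES[variant] = min(1.0, MEMES[meme] * random.uniform(0.5, 1.5))
        hosts[listener] = variant

print(Counter(hosts).most_common(3))   # memes with higher transmission rates tend to dominate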

A field of study called memetics[5] arose in the 1990s to explore the concepts and transmission of memes in terms of an evolutionary model. Criticism from a variety of fronts has challenged the notion that academic study can examine memes empirically. However, developments in neuroimaging may make empirical study possible.[6] Some commentators in the social sciences question the idea that one can meaningfully categorize culture in terms of discrete units, and are especially critical of the biological nature of the theory’s underpinnings.[7] Others have argued that this use of the term is the result of a misunderstanding of the original proposal.[8]

The word meme originated with Richard Dawkins’ 1976 book The Selfish Gene. Dawkins’s own position is somewhat ambiguous: he welcomed N. K. Humphrey’s suggestion that “memes should be considered as living structures, not just metaphorically”[9] and proposed to regard memes as “physically residing in the brain”.[10] Later, he argued that his original intentions, presumably before his approval of Humphrey’s opinion, had been simpler.[11] At the New Directors’ Showcase 2013 in Cannes, Dawkins’ opinion on memetics was deliberately ambiguous.[12]

The word meme is a shortening (modeled on gene) of mimeme (from Ancient Greek μίμημα mīmēma, “imitated thing”, from μιμεῖσθαι mimeisthai, “to imitate”, from μῖμος mimos, “mime”)[13] coined by British evolutionary biologist Richard Dawkins in The Selfish Gene (1976)[1][14] as a concept for discussion of evolutionary principles in explaining the spread of ideas and cultural phenomena. Examples of memes given in the book included melodies, catchphrases, fashion, and the technology of building arches.[15] Kenneth Pike had coined the related terms emic and etic, generalizing the linguistic ideas of phoneme, morpheme and tagmeme (as set out by Leonard Bloomfield), characterizing them as the insider and outsider views of behaviour and extending the concept into a tagmemic theory of human behaviour (culminating in Language in Relation to a Unified Theory of the Structure of Human Behaviour, 1954).

The word meme originated with Richard Dawkins’ 1976 book The Selfish Gene. Dawkins cites as inspiration the work of geneticist L. L. Cavalli-Sforza, anthropologist F. T. Cloak[16] and ethologist J. M. Cullen.[17] Dawkins wrote that evolution depended not on the particular chemical basis of genetics, but only on the existence of a self-replicating unit of transmissionin the case of biological evolution, the gene. For Dawkins, the meme exemplified another self-replicating unit with potential significance in explaining human behavior and cultural evolution. Although Dawkins invented the term ‘meme’ and developed meme theory, the possibility that ideas were subject to the same pressures of evolution as were biological attributes was discussed in Darwin’s time. T. H. Huxley claimed that ‘The struggle for existence holds as much in the intellectual as in the physical world. A theory is a species of thinking, and its right to exist is coextensive with its power of resisting extinction by its rivals.'[18]

Dawkins used the term to refer to any cultural entity that an observer might consider a replicator. He hypothesized that one could view many cultural entities as replicators, and pointed to melodies, fashions and learned skills as examples. Memes generally replicate through exposure to humans, who have evolved as efficient copiers of information and behavior. Because humans do not always copy memes perfectly, and because they may refine, combine or otherwise modify them with other memes to create new memes, they can change over time. Dawkins likened the process by which memes survive and change through the evolution of culture to the natural selection of genes in biological evolution.[15]

Dawkins defined the meme as a unit of cultural transmission, or a unit of imitation and replication, but later definitions would vary. The lack of a consistent, rigorous, and precise understanding of what typically makes up one unit of cultural transmission remains a problem in debates about memetics.[20] In contrast, the concept of genetics gained concrete evidence with the discovery of the biological functions of DNA. Meme transmission requires a physical medium, such as photons, sound waves, touch, taste or smell because memes can be transmitted only through the senses.

Dawkins noted that in a society with culture a person need not have descendants to remain influential in the actions of individuals thousands of years after their death:

But if you contribute to the world’s culture, if you have a good idea…it may live on, intact, long after your genes have dissolved in the common pool. Socrates may or may not have a gene or two alive in the world today, as G.C. Williams has remarked, but who cares? The meme-complexes of Socrates, Leonardo, Copernicus and Marconi are still going strong.[21]

Memes, analogously to genes, vary in their aptitude to replicate; successful memes remain and spread, whereas unfit ones stall and are forgotten. Thus memes that prove more effective at replicating and surviving are selected in the meme pool.

Memes first need retention. The longer a meme stays in its hosts, the higher its chances of propagation are. When a host uses a meme, the meme’s life is extended.[22] The reuse of the neural space hosting a certain meme’s copy to host different memes is the greatest threat to that meme’s copy.[23]

A meme which increases the longevity of its hosts will generally survive longer. Conversely, a meme which shortens the longevity of its hosts will tend to disappear faster. However, as hosts are mortal, retention is not sufficient to perpetuate a meme in the long term; memes also need transmission.

Life-forms can transmit information both vertically (from parent to child, via replication of genes) and horizontally (through viruses and other means). Memes can replicate vertically or horizontally within a single biological generation. They may also lie dormant for long periods of time.

Memes reproduce by copying from one nervous system to another, either by communication or by imitation. Imitation often involves the copying of an observed behavior of another individual. Communication may be direct or indirect, where memes transmit from one individual to another through a copy recorded in an inanimate source, such as a book or a musical score. Adam McNamara has suggested that memes can thereby be classified as either internal or external memes (i-memes or e-memes).[6]

Some commentators have likened the transmission of memes to the spread of contagions.[24] Social contagions such as fads, hysteria, copycat crime, and copycat suicide exemplify memes seen as the contagious imitation of ideas. Observers distinguish the contagious imitation of memes from instinctively contagious phenomena such as yawning and laughing, which they consider innate (rather than socially learned) behaviors.[25]

Aaron Lynch described seven general patterns of meme transmission, or “thought contagion”:[26]

Dawkins initially defined meme as a noun that “conveys the idea of a unit of cultural transmission, or a unit of imitation”.[15] John S. Wilkins retained the notion of meme as a kernel of cultural imitation while emphasizing the meme’s evolutionary aspect, defining the meme as “the least unit of sociocultural information relative to a selection process that has favorable or unfavorable selection bias that exceeds its endogenous tendency to change”.[27] The meme as a unit provides a convenient means of discussing “a piece of thought copied from person to person”, regardless of whether that thought contains others inside it, or forms part of a larger meme. A meme could consist of a single word, or a meme could consist of the entire speech in which that word first occurred. This forms an analogy to the idea of a gene as a single unit of self-replicating information found on the self-replicating chromosome.

While the identification of memes as "units" conveys their nature to replicate as discrete, indivisible entities, it does not imply that thoughts somehow become quantized or that "atomic" ideas exist that cannot be dissected into smaller pieces. A meme has no given size. Susan Blackmore writes that melodies from Beethoven's symphonies are commonly used to illustrate the difficulty involved in delimiting memes as discrete units. She notes that while the first four notes of Beethoven's Fifth Symphony form a meme widely replicated as an independent unit, one can regard the entire symphony as a single meme as well.[20]

The inability to pin an idea or cultural feature to quantifiable key units is widely acknowledged as a problem for memetics. It has been argued, however, that the traces of memetic processing can be quantified using neuroimaging techniques which measure changes in the connectivity profiles between brain regions.[6] Blackmore meets such criticism by stating that memes compare with genes in this respect: while a gene has no particular size, nor can we ascribe every phenotypic feature directly to a particular gene, it has value because it encapsulates that key unit of inherited expression subject to evolutionary pressures. To illustrate, she notes that evolution selects for genes for features such as eye color; it does not select for individual nucleotides in a strand of DNA. Memes play a comparable role in understanding the evolution of imitated behaviors.[20]

The 1981 book Genes, Mind, and Culture: The Coevolutionary Process by Charles J. Lumsden and E. O. Wilson proposed the theory that genes and culture co-evolve, and that the fundamental biological units of culture must correspond to neuronal networks that function as nodes of semantic memory. They coined their own word, “culturgen”, which did not catch on. Coauthor Wilson later acknowledged the term meme as the best label for the fundamental unit of cultural inheritance in his 1998 book Consilience: The Unity of Knowledge, which elaborates upon the fundamental role of memes in unifying the natural and social sciences.[28]

Dawkins noted the three conditions that must exist for evolution to occur: variation, or the introduction of new change to existing elements; heredity or replication, the capacity to make copies of elements; and differential fitness, the opportunity for one element to be better or worse suited to its environment than another.[29]

Dawkins emphasizes that the process of evolution naturally occurs whenever these conditions co-exist, and that evolution does not apply only to organic elements such as genes. He regards memes as also having the properties necessary for evolution, and thus sees meme evolution as not simply analogous to genetic evolution, but as a real phenomenon subject to the laws of natural selection. Dawkins noted that as various ideas pass from one generation to the next, they may either enhance or detract from the survival of the people who obtain those ideas, or influence the survival of the ideas themselves. For example, a certain culture may develop unique designs and methods of tool-making that give it a competitive advantage over another culture. Each tool-design thus acts somewhat similarly to a biological gene in that some populations have it and others do not, and the meme’s function directly affects the presence of the design in future generations. In keeping with the thesis that in evolution one can regard organisms simply as suitable “hosts” for reproducing genes, Dawkins argues that one can view people as “hosts” for replicating memes. Consequently, a successful meme may or may not need to provide any benefit to its host.[29]
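
Dawkins' claim that meme evolution is a real selective process rather than a loose metaphor can be made concrete with a toy simulation. The sketch below is a minimal illustration in Python, assuming arbitrary variant names, fitness weights, and a mutation rate chosen purely for demonstration; none of these values come from Dawkins or the memetics literature. Copying with occasional error supplies variation and heredity, and fitness-weighted retransmission supplies differential selection.

# Toy model of memetic selection (illustrative assumptions only; see note above).
import random

random.seed(0)

# Each meme variant gets an assumed "catchiness" that weights its retransmission.
FITNESS = {"A": 0.9, "B": 0.5, "C": 0.2}

def copy_meme(meme, mutation_rate=0.05):
    """Heredity with variation: hosts usually copy faithfully, occasionally not."""
    if random.random() < mutation_rate:
        return random.choice(list(FITNESS))  # an imperfect copy becomes another variant
    return meme

def next_generation(pool, size):
    """Differential fitness: catchier memes are more likely to be passed on."""
    weights = [FITNESS[m] for m in pool]
    parents = random.choices(pool, weights=weights, k=size)
    return [copy_meme(m) for m in parents]

pool = ["A"] * 10 + ["B"] * 10 + ["C"] * 10
for _ in range(20):
    pool = next_generation(pool, size=30)
print({m: pool.count(m) for m in sorted(FITNESS)})

Run as written, the catchier variant typically comes to dominate the pool within a few generations, while imperfect copying keeps reintroducing the weaker variants, mirroring the selection-with-variation dynamic described above.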

Unlike genetic evolution, memetic evolution can show both Darwinian and Lamarckian traits. Cultural memes will have the characteristic of Lamarckian inheritance when a host aspires to replicate the given meme through inference rather than by exactly copying it. Take for example the case of the transmission of a simple skill such as hammering a nail, a skill that a learner imitates from watching a demonstration without necessarily imitating every discrete movement modeled by the teacher in the demonstration, stroke for stroke.[30] Susan Blackmore distinguishes between the two modes of inheritance in the evolution of memes, characterizing the Darwinian mode as "copying the instructions" and the Lamarckian as "copying the product."[20]

Clusters of memes, or memeplexes (also known as meme complexes or as memecomplexes), such as cultural or political doctrines and systems, may also play a part in the acceptance of new memes. Memeplexes comprise groups of memes that replicate together and coadapt.[20] Memes that fit within a successful memeplex may gain acceptance by “piggybacking” on the success of the memeplex. As an example, John D. Gottsch discusses the transmission, mutation and selection of religious memeplexes and the theistic memes contained.[31] Theistic memes discussed include the “prohibition of aberrant sexual practices such as incest, adultery, homosexuality, bestiality, castration, and religious prostitution”, which may have increased vertical transmission of the parent religious memeplex. Similar memes are thereby included in the majority of religious memeplexes, and harden over time; they become an “inviolable canon” or set of dogmas, eventually finding their way into secular law. This could also be referred to as the propagation of a taboo.

The discipline of memetics, which dates from the mid-1980s, provides an approach to evolutionary models of cultural information transfer based on the concept of the meme. Memeticists have proposed that just as memes function analogously to genes, memetics functions analogously to genetics. Memetics attempts to apply conventional scientific methods (such as those used in population genetics and epidemiology) to explain existing patterns and transmission of cultural ideas.
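
As an informal illustration of the epidemiological analogy, the short Python sketch below treats an idea like a contagion in a simple SIR-style model; the population size, transmission rate, and forgetting rate are invented for demonstration and are not taken from any memetics study.

# Minimal SIR-style "idea contagion" sketch (all parameters are illustrative assumptions).
def simulate_idea_spread(population=1000, initially_exposed=1,
                         transmission_rate=0.3, forgetting_rate=0.1, steps=60):
    susceptible = population - initially_exposed  # have not yet encountered the idea
    spreading = initially_exposed                 # currently retransmitting the idea
    lapsed = 0                                    # have dropped or forgotten it
    history = []
    for _ in range(steps):
        new_adopters = transmission_rate * spreading * susceptible / population
        new_lapsed = forgetting_rate * spreading
        susceptible -= new_adopters
        spreading += new_adopters - new_lapsed
        lapsed += new_lapsed
        history.append((susceptible, spreading, lapsed))
    return history

s, i, r = simulate_idea_spread()[-1]
print(f"never adopted: {s:.0f}, still spreading: {i:.0f}, moved on: {r:.0f}")

On these assumed rates the idea follows the familiar epidemic curve, spreading quickly, peaking, and then fading as hosts lose interest, which is the kind of pattern memeticists attempt to fit to cultural data.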

Principal criticisms of memetics include the claim that memetics ignores established advances in other fields of cultural study, such as sociology, cultural anthropology, cognitive psychology, and social psychology. Questions remain as to whether the meme concept counts as a falsifiable scientific theory. This view regards memetics as a theory in its infancy: a protoscience to proponents, or a pseudoscience to some detractors.

An objection to the study of the evolution of memes in genetic terms (although not to the existence of memes) involves a perceived gap in the gene/meme analogy: the cumulative evolution of genes depends on biological selection-pressures neither too great nor too small in relation to mutation-rates. There seems no reason to think that the same balance will exist in the selection pressures on memes.[32]

Luis Benitez-Bribiesca M.D., a critic of memetics, calls the theory a “pseudoscientific dogma” and “a dangerous idea that poses a threat to the serious study of consciousness and cultural evolution”. As a factual criticism, Benitez-Bribiesca points to the lack of a “code script” for memes (analogous to the DNA of genes), and to the excessive instability of the meme mutation mechanism (that of an idea going from one brain to another), which would lead to a low replication accuracy and a high mutation rate, rendering the evolutionary process chaotic.[33]

British political philosopher John Gray has characterized Dawkins’ memetic theory of religion as “nonsense” and “not even a theory… the latest in a succession of ill-judged Darwinian metaphors”, comparable to Intelligent Design in its value as a science.[34]

Another critique comes from semiotic theorists such as Deacon[35] and Kull.[36] This view regards the concept of “meme” as a primitivized concept of “sign”. The meme is thus described in memetics as a sign lacking a triadic nature. Semioticians can regard a meme as a “degenerate” sign, which includes only its ability of being copied. Accordingly, in the broadest sense, the objects of copying are memes, whereas the objects of translation and interpretation are signs.[clarification needed]

Fracchia and Lewontin regard memetics as reductionist and inadequate.[37] Evolutionary biologist Ernst Mayr disapproved of Dawkins’ gene-based view and usage of the term “meme”, asserting it to be an “unnecessary synonym” for “concept”, reasoning that concepts are not restricted to an individual or a generation, may persist for long periods of time, and may evolve.[38]

Opinions differ as to how best to apply the concept of memes within a "proper" disciplinary framework. One view sees memes as providing a useful philosophical perspective with which to examine cultural evolution. Proponents of this view (such as Susan Blackmore and Daniel Dennett) argue that considering cultural developments from a meme's-eye view (as if memes themselves respond to pressure to maximise their own replication and survival) can lead to useful insights and yield valuable predictions into how culture develops over time. Others such as Bruce Edmonds and Robert Aunger have focused on the need to provide an empirical grounding for memetics to become a useful and respected scientific discipline.[39][40]

A third approach, described by Joseph Poulshock as "radical memetics", seeks to place memes at the centre of a materialistic theory of mind and of personal identity.[41]

Prominent researchers in evolutionary psychology and anthropology, including Scott Atran, Dan Sperber, Pascal Boyer, John Tooby and others, argue that modularity of mind and memetics may be incompatible.[citation needed] In their view, minds structure certain communicable aspects of the ideas produced, and these communicable aspects generally trigger or elicit ideas in other minds through inference (to relatively rich structures generated from often low-fidelity input) and not high-fidelity replication or imitation. Atran discusses communication involving religious beliefs as a case in point. In one set of experiments he asked religious people to write down on a piece of paper the meanings of the Ten Commandments. Despite the subjects' own expectations of consensus, interpretations of the commandments showed wide ranges of variation, with little evidence of consensus. In another experiment, subjects with autism and subjects without autism interpreted ideological and religious sayings (for example, "Let a thousand flowers bloom" or "To everything there is a season"). People with autism showed a significant tendency to closely paraphrase and repeat content from the original statement (for example: "Don't cut flowers before they bloom"). Controls tended to infer a wider range of cultural meanings with little replicated content (for example: "Go with the flow" or "Everyone should have equal opportunity"). Only the subjects with autism, who lack the degree of inferential capacity normally associated with aspects of theory of mind, came close to functioning as "meme machines".[42]

In his book The Robot's Rebellion, Keith Stanovich uses the meme and memeplex concepts to describe a program of cognitive reform that he refers to as a "rebellion". Specifically, Stanovich argues that the use of memes as a descriptor for cultural units is beneficial because it serves to emphasize transmission and acquisition properties that parallel the study of epidemiology. These properties make salient the sometimes parasitic nature of acquired memes, and as a result individuals should be motivated to reflectively acquire memes using what he calls a "Neurathian bootstrap" process.[43]

Although social scientists such as Max Weber sought to understand and explain religion in terms of a cultural attribute, Richard Dawkins called for a re-analysis of religion in terms of the evolution of self-replicating ideas apart from any resulting biological advantages they might bestow.

As an enthusiastic Darwinian, I have been dissatisfied with explanations that my fellow-enthusiasts have offered for human behaviour. They have tried to look for ‘biological advantages’ in various attributes of human civilization. For instance, tribal religion has been seen as a mechanism for solidifying group identity, valuable for a pack-hunting species whose individuals rely on cooperation to catch large and fast prey. Frequently the evolutionary preconception in terms of which such theories are framed is implicitly group-selectionist, but it is possible to rephrase the theories in terms of orthodox gene selection.

He argued that the role of key replicator in cultural evolution belongs not to genes, but to memes replicating thought from person to person by means of imitation. These replicators respond to selective pressures that may or may not affect biological reproduction or survival.[15]

In her book The Meme Machine, Susan Blackmore regards religions as particularly tenacious memes. Many of the features common to the most widely practiced religions provide built-in advantages in an evolutionary context, she writes. For example, religions that preach the value of faith over evidence from everyday experience or reason inoculate societies against many of the most basic tools people commonly use to evaluate their ideas. By linking altruism with religious affiliation, religious memes can proliferate more quickly because people perceive that they can reap societal as well as personal rewards. The longevity of religious memes improves with their documentation in revered religious texts.[20]

Aaron Lynch attributed the robustness of religious memes in human culture to the fact that such memes incorporate multiple modes of meme transmission. Religious memes pass down the generations from parent to child and across a single generation through the meme-exchange of proselytism. Most people will hold the religion taught them by their parents throughout their life. Many religions feature adversarial elements, punishing apostasy, for instance, or demonizing infidels. In Thought Contagion Lynch identifies the memes of transmission in Christianity as especially powerful in scope. Believers view the conversion of non-believers both as a religious duty and as an act of altruism. The promise of heaven to believers and threat of hell to non-believers provide a strong incentive for members to retain their belief. Lynch asserts that belief in the Crucifixion of Jesus in Christianity amplifies each of its other replication advantages through the indebtedness believers have to their Savior for sacrifice on the cross. The image of the crucifixion recurs in religious sacraments, and the proliferation of symbols of the cross in homes and churches potently reinforces the wide array of Christian memes.[26]

Although religious memes have proliferated in human cultures, the modern scientific community has been relatively resistant to religious belief. Robertson (2007)[44] reasoned that if evolution is accelerated in conditions of propagative difficulty,[45] then we would expect to encounter variations of religious memes, established in general populations, addressed to scientific communities. Using a memetic approach, Robertson deconstructed two attempts to privilege religiously held spirituality in scientific discourse. He also explored the advantages of a memetic approach, as compared with more traditional "modernization" and "supply side" theses, in understanding the evolution and propagation of religion.

In Cultural Software: A Theory of Ideology, Jack Balkin argued that memetic processes can explain many of the most familiar features of ideological thought. His theory of "cultural software" maintained that memes form narratives, social networks, metaphoric and metonymic models, and a variety of different mental structures. Balkin maintains that the same structures used to generate ideas about free speech or free markets also serve to generate racist beliefs. To Balkin, whether memes become harmful or maladaptive depends on the environmental context in which they exist rather than on any special source or manner of their origination. Balkin describes racist beliefs as "fantasy" memes that become harmful or unjust "ideologies" when diverse peoples come together, as through trade or competition.[46]

In A Theory of Architecture, Nikos Salingaros speaks of memes as “freely propagating clusters of information” which can be beneficial or harmful. He contrasts memes to patterns and true knowledge, characterizing memes as “greatly simplified versions of patterns” and as “unreasoned matching to some visual or mnemonic prototype”.[47] Taking reference to Dawkins, Salingaros emphasizes that they can be transmitted due to their own communicative properties, that “the simpler they are, the faster they can proliferate”, and that the most successful memes “come with a great psychological appeal”.[48]

Architectural memes, according to Salingaros, can have destructive power. "Images portrayed in architectural magazines representing buildings that could not possibly accommodate everyday uses become fixed in our memory, so we reproduce them unconsciously."[49] He lists various architectural memes that have circulated since the 1920s and which, in his view, have led to contemporary architecture becoming quite decoupled from human needs. They lack connection and meaning, thereby preventing "the creation of true connections necessary to our understanding of the world". He sees them as no different from antipatterns in software design: solutions that are false but are reused nonetheless.[50]

An "Internet meme" is a concept that spreads rapidly from person to person via the Internet, largely through email, blogs, forums, imageboards like 4chan, social networking sites like Facebook, Instagram or Twitter, instant messaging, and video hosting services like YouTube and Twitch.tv.[51]

In 2013 Richard Dawkins characterized an Internet meme as one deliberately altered by human creativity, distinguished from Dawkins’s original idea involving mutation by random change and a form of Darwinian selection.[52]

One technique of meme mapping represents the evolution and transmission of a meme across time and space.[53] Such a meme map uses a figure-8 diagram (an analemma) to map the gestation (in the lower loop), birth (at the choke point), and development (in the upper loop) of the selected meme. Such meme maps are nonscalar, with time mapped onto the y-axis and space onto the x-axis transect. One can read the temporal progression of the mapped meme from south to north on such a meme map. Paull has published a worked example using the “organics meme” (as in organic agriculture).[53]
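
As a rough visual aid, the Python sketch below draws such a figure-8 meme map with matplotlib; the lemniscate curve, loop labels, and unitless axes are illustrative assumptions rather than Paull's actual diagram or data.

# Illustrative figure-8 ("analemma") meme map: time on the y-axis, space on the x-axis.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 400)
space = np.sin(2 * t)   # x-axis: spatial spread (arbitrary units)
time = np.sin(t)        # y-axis: time, read from south (gestation) to north (development)

fig, ax = plt.subplots()
ax.plot(space, time)
ax.annotate("gestation (lower loop)", xy=(0, -0.6), ha="center")
ax.annotate("birth (choke point)", xy=(0, 0.05), ha="center")
ax.annotate("development (upper loop)", xy=(0, 0.6), ha="center")
ax.set_xlabel("space")
ax.set_ylabel("time")
ax.set_title("Figure-8 meme map (illustrative)")
plt.show()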

Follow this link:

Meme – Wikipedia

Posted in Memetics | Comments Off on Meme – Wikipedia

New Atheism – Wikipedia

Posted: at 4:10 am

New Atheism is the journalistic term used to describe the positions promoted by atheists of the twenty-first century. This modern-day atheism and secularism is advanced by critics of religion and religious belief,[1] a group of modern atheist thinkers and writers who advocate the view that superstition, religion and irrationalism should not simply be tolerated but should be countered, criticized, and exposed by rational argument wherever its influence arises in government, education and politics.[2]

New Atheism lends itself to and often overlaps with secular humanism and antitheism, particularly in its criticism of what many New Atheists regard as the indoctrination of children and the perpetuation of ideologies founded on belief in the supernatural.

The 2004 publication of The End of Faith: Religion, Terror, and the Future of Reason by Sam Harris, a bestseller in the United States, was joined over the next couple of years by a series of popular best-sellers by atheist authors.[3] Harris was motivated by the events of September 11, 2001, which he laid directly at the feet of Islam, while also directly criticizing Christianity and Judaism.[4] Two years later Harris followed up with Letter to a Christian Nation, which was also a severe criticism of Christianity.[5] Also in 2006, following his television documentary The Root of All Evil?, Richard Dawkins published The God Delusion, which was on the New York Times best-seller list for 51 weeks.[6]

In a 2010 column entitled “Why I Don’t Believe in the New Atheism”, Tom Flynn contends that what has been called “New Atheism” is neither a movement nor new, and that what was new was the publication of atheist material by big-name publishers, read by millions, and appearing on bestseller lists.[7]

These are some of the significant books on the subject of atheism and religion:

On September 30, 2007 four prominent atheists (Richard Dawkins, Christopher Hitchens, Sam Harris, and Daniel Dennett) met at Hitchens' residence for a private two-hour unmoderated discussion. The event was videotaped and titled "The Four Horsemen".[9] During "The God Debate" in 2010, featuring Christopher Hitchens vs Dinesh D'Souza, the men were collectively referred to as the "Four Horsemen of the Non-Apocalypse",[10] an allusion to the biblical Four Horsemen from the Book of Revelation.[11]

Sam Harris is the author of the bestselling non-fiction books The End of Faith, Letter to a Christian Nation, The Moral Landscape, and Waking Up: A Guide to Spirituality Without Religion, as well as two shorter works, initially published as e-books, Free Will[12] and Lying.[13] Harris is a co-founder of the Reason Project.

Richard Dawkins is the author of The God Delusion,[14] which was preceded by a Channel 4 television documentary titled The Root of All Evil?. He is also the founder of the Richard Dawkins Foundation for Reason and Science.

Christopher Hitchens was the author of God Is Not Great[15] and was named among the “Top 100 Public Intellectuals” by Foreign Policy and Prospect magazine. In addition, Hitchens served on the advisory board of the Secular Coalition for America. In 2010 Hitchens published his memoir Hitch-22 (a nickname provided by close personal friend Salman Rushdie, whom Hitchens always supported during and following The Satanic Verses controversy).[16] Shortly after its publication, Hitchens was diagnosed with esophageal cancer, which led to his death in December 2011.[17] Before his death, Hitchens published a collection of essays and articles in his book Arguably;[18] a short edition Mortality[19] was published posthumously in 2012. These publications and numerous public appearances provided Hitchens with a platform to remain an astute atheist during his illness, even speaking specifically on the culture of deathbed conversions and condemning attempts to convert the terminally ill, which he opposed as “bad taste”.[20][21]

Daniel Dennett, author of Darwin's Dangerous Idea,[22] Breaking the Spell,[23] and many others, has also been a vocal supporter of The Clergy Project,[24] an organization that provides support for clergy in the US who no longer believe in God and cannot fully participate in their communities any longer.[25]

The “Four Horsemen” video, convened by Dawkins’ Foundation, can be viewed free online at his web site: Part 1, Part 2.

After the death of Hitchens, Ayaan Hirsi Ali (who attended the 2012 Global Atheist Convention, which Hitchens was scheduled to attend) was referred to as the "plus one horse-woman", since she was originally invited to the 2007 meeting of the "Horsemen" atheists but had to cancel at the last minute.[26] Hirsi Ali was born in Mogadishu, Somalia, fleeing in 1992 to the Netherlands in order to escape an arranged marriage.[27] She became involved in Dutch politics, rejected faith, and became vocal in opposing Islamic ideology, especially concerning women, as exemplified by her books Infidel and The Caged Virgin.[28] Hirsi Ali was later involved in the production of the film Submission, for which her friend Theo Van Gogh was murdered with a death threat to Hirsi Ali pinned to his chest.[29] This resulted in Hirsi Ali going into hiding and later immigrating to the United States, where she now resides and remains a prolific critic of Islam[30] and of the treatment of women in Islamic doctrine and society,[31] and a proponent of free speech and the freedom to offend.[32][33]

While "The Four Horsemen" are arguably the foremost proponents of atheism, there are a number of other current, notable atheists including: Lawrence M. Krauss (author of A Universe from Nothing),[34] James Randi (paranormal debunker and former illusionist),[35] Jerry Coyne (Why Evolution is True[36] and its complementary blog,[37] which specifically includes polemics against topical religious issues), Greta Christina (Why are you Atheists so Angry?),[38] Victor J. Stenger (The New Atheism),[39] Michael Shermer (Why People Believe Weird Things),[40] David Silverman (President of the American Atheists and author of Fighting God: An Atheist Manifesto for a Religious World), Ibn Warraq (Why I Am Not a Muslim),[41] Matt Dillahunty (host of the Austin-based webcast and cable-access television show The Atheist Experience),[42] Bill Maher (writer and star of the 2008 documentary Religulous),[43] Steven Pinker (noted cognitive scientist, linguist, psychologist and author),[44] Julia Galef (co-host of the podcast Rationally Speaking), A.C. Grayling (philosopher and considered to be the "Fifth Horseman of New Atheism"), and Michel Onfray (Atheist Manifesto: The Case Against Christianity, Judaism, and Islam).

Many contemporary atheists write from a scientific perspective. Unlike previous writers, many of whom thought that science was indifferent, or even incapable of dealing with the “God” concept, Dawkins argues to the contrary, claiming the “God Hypothesis” is a valid scientific hypothesis,[45] having effects in the physical universe, and like any other hypothesis can be tested and falsified. Other contemporary atheists such as Victor Stenger propose that the personal Abrahamic God is a scientific hypothesis that can be tested by standard methods of science. Both Dawkins and Stenger conclude that the hypothesis fails any such tests,[46] and argue that naturalism is sufficient to explain everything we observe in the universe, from the most distant galaxies to the origin of life, species, and the inner workings of the brain and consciousness. Nowhere, they argue, is it necessary to introduce God or the supernatural to understand reality. Atheists have been associated with the argument from divine hiddenness and the idea that “absence of evidence is evidence of absence” when evidence can be expected.[citation needed]

Non-believers assert that many religious or supernatural claims (such as the virgin birth of Jesus and the afterlife) are scientific claims in nature. They argue, as do deists and Progressive Christians, for instance, that the issue of Jesus’ supposed parentage is not a question of “values” or “morals”, but a question of scientific inquiry.[47] Rational thinkers believe science is capable of investigating at least some, if not all, supernatural claims.[48] Institutions such as the Mayo Clinic and Duke University are attempting to find empirical support for the healing power of intercessory prayer.[49] According to Stenger, these experiments have found no evidence that intercessory prayer works.[50]

Stenger also argues in his book, God: The Failed Hypothesis, that a God having omniscient, omnibenevolent and omnipotent attributes, which he termed a 3O God, cannot logically exist.[51] A similar series of logical disproofs of the existence of a God with various attributes can be found in Michael Martin and Ricki Monnier’s The Impossibility of God,[52] or Theodore M. Drange’s article, “Incompatible-Properties Arguments”.[53]

Richard Dawkins has been particularly critical of the conciliatory view that science and religion are not in conflict, noting, for example, that the Abrahamic religions constantly deal in scientific matters. In a 1998 article published in Free Inquiry magazine,[47] and later in his 2006 book The God Delusion, Dawkins expresses disagreement with the view advocated by Stephen Jay Gould that science and religion are two non-overlapping magisteria (NOMA) each existing in a "domain where one form of teaching holds the appropriate tools for meaningful discourse and resolution". In Gould's proposal, science and religion should be confined to distinct non-overlapping domains: science would be limited to the empirical realm, including theories developed to describe observations, while religion would deal with questions of ultimate meaning and moral value. Dawkins contends that NOMA does not describe empirical facts about the intersection of science and religion: "it is completely unrealistic to claim, as Gould and many others do, that religion keeps itself away from science's turf, restricting itself to morals and values. A universe with a supernatural presence would be a fundamentally and qualitatively different kind of universe from one without. The difference is, inescapably, a scientific difference. Religions make existence claims, and this means scientific claims." Matt Ridley notes that religion does more than talk about ultimate meanings and morals, and science is not proscribed from doing the same. After all, morals involve human behavior, an observable phenomenon, and science is the study of observable phenomena. Ridley notes that there is substantial scientific evidence on evolutionary origins of ethics and morality.[54]

Sam Harris has popularized the view that science, and the objective facts it may yet uncover, can instruct human morality in a globally comparable way. Harris' book The Moral Landscape[55] and accompanying TED Talk How Science can Determine Moral Values[56] propose that human well-being, and conversely suffering, may be thought of as a landscape with peaks and valleys representing numerous ways to achieve extremes in human experience, and that there are objective states of well-being.

New atheism is politically engaged in a variety of ways. These include campaigns to reduce the influence of religion in the public sphere, attempts to promote cultural change (centering, in the United States, on the mainstream acceptance of atheism), and efforts to promote the idea of an “atheist identity”. Internal strategic divisions over these issues have also been notable, as are questions about the diversity of the movement in terms of its gender and racial balance.[57]

Edward Feser's book The Last Superstition presents arguments based on the philosophy of Aristotle and Thomas Aquinas against New Atheism.[58] According to Feser it necessarily follows from Aristotelian-Thomistic metaphysics that God exists, that the human soul is immortal, and that the highest end of human life (and therefore the basis of morality) is to know God. Feser argues that science never disproved Aristotle's metaphysics, but rather that modern philosophers decided to reject it on the basis of wishful thinking. In the latter chapters Feser proposes that scientism and materialism are based on premises that are inconsistent and self-contradictory and that these conceptions lead to absurd consequences.

Cardinal William Levada believes that New Atheism has misrepresented the doctrines of the church.[59] Cardinal Walter Kasper described New Atheism as “aggressive”, and he believed it to be the primary source of discrimination against Christians.[60] In a Salon interview, the journalist Chris Hedges argued that New Atheism propaganda is just as extreme as that of Christian right propaganda.[61]

The theologians Jeffrey Robbins and Christopher Rodkey take issue with what they regard as “the evangelical nature of the new atheism, which assumes that it has a Good News to share, at all cost, for the ultimate future of humanity by the conversion of as many people as possible.” They believe they have found similarities between new atheism and evangelical Christianity and conclude that the all-consuming nature of both “encourages endless conflict without progress” between both extremities.[62] Sociologist William Stahl said “What is striking about the current debate is the frequency with which the New Atheists are portrayed as mirror images of religious fundamentalists.”[63]

The atheist philosopher of science Michael Ruse has claimed that Richard Dawkins would fail "introductory" courses on the study of philosophy or religion (such as courses on the philosophy of religion), courses which are offered at many colleges and universities around the world.[64][65] Ruse also claims that the movement of New Atheism, which he perceives to be a "bloody disaster", makes him ashamed, as a professional philosopher of science, to be among those who hold to an atheist position, particularly as New Atheism does science a "grave disservice" and does a "disservice to scholarship" at a more general level.[64][65]

Glenn Greenwald,[66][67] Toronto-based journalist and Mideast commentator Murtaza Hussain,[66][67] Salon columnist Nathan Lean,[67] scholars Wade Jacoby and Hakan Yavuz,[68] and historian of religion William Emilsen[69] have accused the New Atheist movement of Islamophobia. Wade Jacoby and Hakan Yavuz assert that "a group of 'new atheists' such as Richard Dawkins, Sam Harris, and Christopher Hitchens" have "invoked Samuel Huntington's 'clash of civilizations' theory to explain the current political contestation" and that this forms part of a trend toward "Islamophobia […] in the study of Muslim societies".[68] William W. Emilsen argues that "the 'new' in the new atheists' writings is not their aggressiveness, nor their extraordinary popularity, nor even their scientific approach to religion, rather it is their attack not only on militant Islamism but also on Islam itself under the cloak of its general critique of religion".[69] Murtaza Hussain has alleged that leading figures in the New Atheist movement "have stepped in to give a veneer of scientific respectability to today's politically useful bigotry".[66][70]

See the rest here:
New Atheism – Wikipedia

Posted in Atheism | Comments Off on New Atheism – Wikipedia

Posthumanism – Wikipedia

Posted: October 17, 2016 at 1:19 am

This article is about a critique of humanism. For the futurist ideology and movement, see transhumanism.

Posthumanism or post-humanism (meaning “after humanism” or “beyond humanism”) is a term with at least seven definitions according to philosopher Francesca Ferrando:[1]

Philosopher Ted Schatzki suggests there are two varieties of posthumanism of the philosophical kind:[12]

One, which he calls ‘objectivism’, tries to counter the overemphasis of the subjective or intersubjective that pervades humanism, and emphasises the role of the nonhuman agents, whether they be animals and plants, or computers or other things.[12]

A second prioritizes practices, especially social practices, over individuals (or individual subjects) which, they say, constitute the individual.[12]

There may be a third kind of posthumanism, propounded by the philosopher Herman Dooyeweerd. Though he did not label it as ‘posthumanism’, he made an extensive and penetrating immanent critique of Humanism, and then constructed a philosophy that presupposed neither Humanist, nor Scholastic, nor Greek thought but started with a different religious ground motive.[13] Dooyeweerd prioritized law and meaningfulness as that which enables humanity and all else to exist, behave, live, occur, etc. “Meaning is the being of all that has been created,” Dooyeweerd wrote, “and the nature even of our selfhood.”[14] Both human and nonhuman alike function subject to a common ‘law-side’, which is diverse, composed of a number of distinct law-spheres or aspects.[15] The temporal being of both human and non-human is multi-aspectual; for example, both plants and humans are bodies, functioning in the biotic aspect, and both computers and humans function in the formative and lingual aspect, but humans function in the aesthetic, juridical, ethical and faith aspects too. The Dooyeweerdian version is able to incorporate and integrate both the objectivist version and the practices version, because it allows nonhuman agents their own subject-functioning in various aspects and places emphasis on aspectual functioning.[16]

Ihab Hassan, theorist in the academic study of literature, once stated:

Humanism may be coming to an end as humanism transforms itself into something one must helplessly call posthumanism.[17]

This view predates most currents of posthumanism which have developed over the late 20th century in somewhat diverse, but complementary, domains of thought and practice. For example, Hassan is a known scholar whose theoretical writings expressly address postmodernity in society.[citation needed] Beyond postmodernist studies, posthumanism has been developed and deployed by various cultural theorists, often in reaction to problematic inherent assumptions within humanistic and enlightenment thought.[4]

Theorists who both complement and contrast Hassan include Michel Foucault, Judith Butler, cyberneticists such as Gregory Bateson, Warren McCulloch, Norbert Wiener, Bruno Latour, Cary Wolfe, Elaine Graham, N. Katherine Hayles, Donna Haraway, Peter Sloterdijk, Stefan Lorenz Sorgner, Evan Thompson, Francisco Varela, Humberto Maturana and Douglas Kellner. Among the theorists are philosophers, such as Robert Pepperell, who have written about a "posthuman condition", which is often substituted for the term "posthumanism".[5][6]

Posthumanism differs from classical humanism by relegating humanity back to one of many natural species, thereby rejecting any claims founded on anthropocentric dominance.[18] According to this claim, humans have no inherent rights to destroy nature or set themselves above it in ethical considerations a priori. Human knowledge, previously seen as the defining aspect of the world, is also reduced to a less controlling position. The limitations and fallibility of human intelligence are acknowledged, even though this does not imply abandoning the rational tradition of humanism.[citation needed]

Proponents of a posthuman discourse suggest that innovative advancements and emerging technologies have transcended the traditional model of the human, as proposed by Descartes among others associated with the philosophy of the Enlightenment period.[19] In contrast to humanism, the discourse of posthumanism seeks to redefine the boundaries surrounding the modern philosophical understanding of the human. Posthumanism represents an evolution of thought beyond contemporary social boundaries and is predicated on the seeking of truth within a postmodern context. In so doing, it rejects previous attempts to establish 'anthropological universals' that are imbued with anthropocentric assumptions.[18]

The philosopher Michel Foucault placed posthumanism within a context that differentiated humanism from Enlightenment thought. According to Foucault, the two existed in a state of tension: humanism sought to establish norms, while Enlightenment thought attempted to transcend all that is material, including the boundaries constructed by humanistic thought.[18] Drawing on the Enlightenment's challenges to the boundaries of humanism, posthumanism rejects the various assumptions of human dogmas (anthropological, political, scientific) and takes the next step by attempting to change the nature of thought about what it means to be human. This requires not only decentering the human in multiple discourses (evolutionary, ecological, technological) but also examining those discourses to uncover inherent humanistic, anthropocentric, normative notions of humanness and the concept of the human.[4]

Posthumanistic discourse aims to open up spaces to examine what it means to be human and critically question the concept of "the human" in light of current cultural and historical contexts.[4] In her book How We Became Posthuman, N. Katherine Hayles writes about the struggle between different versions of the posthuman as it continually co-evolves alongside intelligent machines.[20] Such coevolution, according to some strands of the posthuman discourse, allows one to extend subjective understandings of real experiences beyond the boundaries of embodied existence. According to Hayles's view of the posthuman, often referred to as technological posthumanism, visual perception and digital representations thus paradoxically become ever more salient. Even as one seeks to extend knowledge by deconstructing perceived boundaries, it is these same boundaries that make knowledge acquisition possible. The use of technology in contemporary society is thought to complicate this relationship.

Hayles discusses the translation of human bodies into information (as suggested by Hans Moravec) in order to illuminate how the boundaries of our embodied reality have been compromised in the current age and how narrow definitions of humanness no longer apply. Because of this, according to Hayles, posthumanism is characterized by a loss of subjectivity based on bodily boundaries.[4] This strand of posthumanism, including the changing notion of subjectivity and the disruption of ideas concerning what it means to be human, is often associated with Donna Haraway's concept of the cyborg.[4] However, Haraway has distanced herself from posthumanistic discourse due to other theorists' use of the term to promote utopian views of technological innovation to extend human biological capacity[21] (even though these notions would more correctly fall into the realm of transhumanism[4]).

While posthumanism is a broad and complex ideology, it has relevant implications today and for the future. It attempts to redefine social structures without inherently human or even biological origins, but rather in terms of social and psychological systems where consciousness and communication could potentially exist as unique disembodied entities. Questions subsequently emerge with respect to the current use and the future of technology in shaping human existence,[18] as do new concerns with regard to language, symbolism, subjectivity, phenomenology, ethics, justice and creativity.[22]

Posthumanism is sometimes used as a synonym for an ideology of technology known as “transhumanism” because it affirms the possibility and desirability of achieving a “posthuman future”, albeit in purely evolutionary terms.

James Hughes comments that there is considerable confusion between the two terms.[23][24]

Some critics have argued that all forms of posthumanism have more in common than their respective proponents realize.[25]

However, posthumanists in the humanities and the arts are critical of transhumanism, in part, because they argue that it incorporates and extends many of the values of Enlightenment humanism and classical liberalism, namely scientism, according to performance philosopher Shannon Bell:[26]

Altruism, mutualism, humanism are the soft and slimy virtues that underpin liberal capitalism. Humanism has always been integrated into discourses of exploitation: colonialism, imperialism, neoimperialism, democracy, and of course, American democratization. One of the serious flaws in transhumanism is the importation of liberal-human values to the biotechno enhancement of the human. Posthumanism has a much stronger critical edge attempting to develop through enactment new understandings of the self and others, essence, consciousness, intelligence, reason, agency, intimacy, life, embodiment, identity and the body.[26]

While many modern leaders of thought are accepting of the ideologies described by posthumanism, some are more skeptical of the term. Donna Haraway, the author of A Cyborg Manifesto, has outspokenly rejected the term, though she acknowledges a philosophical alignment with posthumanism. Haraway opts instead for the term companion species, referring to nonhuman entities with which humans coexist.[21]

Questions of race, some argue, are suspiciously elided within the "turn" to posthumanism. Noting that the terms "post" and "human" are already loaded with racial meaning, critical theorist Zakiyyah Iman Jackson argues that the impulse to move "beyond" the human within posthumanism too often ignores praxes of humanity and critiques produced by black people, from Frantz Fanon and Aimé Césaire to Hortense Spillers and Fred Moten. Interrogating the conceptual grounds on which such a mode of "beyond" is rendered legible and viable, Jackson argues that it is important to observe that blackness conditions and constitutes the very nonhuman disruption which posthumanists invite. In other words, given that race in general and blackness in particular constitutes the very terms through which human/nonhuman distinctions are made, for example in enduring legacies of scientific racism, a gesture toward a beyond actually returns us to a Eurocentric transcendentalism long challenged.

Visit link:
Posthumanism – Wikipedia

Posted in Post Human | Comments Off on Posthumanism – Wikipedia

Space Tourism – National Space Society

Posted: October 6, 2016 at 2:56 pm

NSS deeply regrets the tragic loss of SpaceShipTwo on October 31 and extends its heartfelt sympathy to the families involved and to everyone who worked in this program.

“The process of creating a successful off-world tourism industry will be the key economic and technological driver enabling the human species to evolve into a real Solar System Species.” John Spencer, author of Space Tourism and President and founder of the Space Tourism Society.

“SpaceShipOne [showed that] space travel was no longer just the domain of prohibitively expensive government programs subject to political whim. Now it was just like any other business that could be developed into a thriving industry.” From Rocketeers.

2008: Tourists in Space: A Practical Guide, By Erik Seedhouse. Springer-Praxis. 314 pages. [Review]. [Amazon link]. The bulk of this book goes into considerable detail about what sort of training prospective spaceflight participants should undergo.

2007: Rocketeers: How a Visionary Band of Business Leaders, Engineers, and Pilots Is Boldly Privatizing Space, by Michael Belfiore. Smithsonian Books. 304 pages. [Review]. [Amazon link]. An excellent and exciting read that allows you to meet the major players in the development of privatized space flight.

2007: Destination Space: How Space Tourism Is Making Science Fiction a Reality, by Kenny Kemp. Virgin Books. 262 pages. [Amazon link]. A more accurate title would be The Virgin Galactic Story because that is essentially all that is covered (note that the publisher is Virgin Books).

2005: The Space Tourist's Handbook, by Eric Anderson and Joshua Piven. Quirk Books. 192 pages. [Review]. [Amazon link]. A more accurate title would be The Space Adventures Story because author Eric Anderson is president of that company, the first company to actually fly space tourists.

2004: Space Tourism: Do You Want to Go? by John Spencer. Apogee Books. 224 pages. [Amazon link]. A broad overview of the entire topic of space tourism, written by the founder and president of the Space Tourism Society. Offers unique perspectives not found elsewhere, such as parallels with the yachting and cruise industries. A significant contribution to the literature.

2002: Making Space Happen: Private Space Ventures and the Visionaries Behind Them, by Paula Berinstein. Plexus Publishing. 490 pages. [Amazon link]. A broad overview of space privatization featuring extensive interviews with the movers and shakers that are making it happen.

1998: General Public Space Travel and Tourism: Volume 1, Executive Summary. Joint NASA study concludes that serious national attention should be given to enabling the creation of in-space travel and tourism businesses, and that, in time, this should become a very important part of our country’s overall commercial and civil space business-program structure. 40 pages. [PDF 100K]

1996: Halfway to Anywhere: Achieving America’s Destiny in Space, by G. Harry Stine. M. Evans and Company. 306 pages. [Review]. [Amazon link]. Discusses what is involved in airline-like operations for spacecraft, and provides a history of the first re-usable rocket, the Delta Clipper.

“The sheer beauty of it just brought tears to my eyes. If people can see Earth from up here, see it without those borders, see it without any differences in race or religion, they would have a completely different perspective. Because when you see it from that angle, you cannot think of your home or your country. All you can see is one Earth….”

Anousheh Ansari, Iranian-American space tourist who flew to the International Space Station in September 2006.

“It was amazing. The zero-g part was wonderful. I could have gone on and on space here I come.”

Stephen Hawking, renowned British astrophysicist who was able to leave his wheelchair and experience zero-gravity aboard a parabolic airplane flight on April 26, 2007. Hawking plans to fly on SpaceShipTwo.

Read the original:

Space Tourism – National Space Society

Posted in Space Travel | Comments Off on Space Tourism – National Space Society

A History of Cryonics – BEN BEST

Posted: September 22, 2016 at 7:51 pm

by Ben Best

Robert Ettinger is widely regarded as the "father of cryonics" (although he often said that he would rather be the grandson). Mr. Ettinger earned a Purple Heart in World War II as a result of injury to his leg by an artillery shell. He subsequently became a college physics teacher after earning two Master's Degrees from Wayne State University. (He has often been erroneously called "Doctor" and "Professor".) Robert Ettinger was cryopreserved at the Cryonics Institute in July 2011 at the age of 92. See The Cryonics Institute's 106th Patient Robert Ettinger for details.

A lifelong science fiction buff, Ettinger conceived the idea of cryonics upon reading a story called The Jameson Satellite in the July 1931 issue of Amazing Stories magazine. In 1948 Ettinger published a short story with a cryonics theme titled The Penultimate Trump. In 1962 he self-published THE PROSPECT OF IMMORTALITY, a non-fiction book explaining in detail the methods and rationale for cryonics. He mailed the book to 200 people listed in WHO'S WHO IN AMERICA. Also in 1962, Evan Cooper independently self-published IMMORTALITY: PHYSICALLY, SCIENTIFICALLY, NOW, which is also a book advocating cryonics. In 1964 Isaac Asimov assured Doubleday that (although socially undesirable, in his opinion) cryonics is based on reasonable scientific assumptions. This allowed THE PROSPECT OF IMMORTALITY to be printed and distributed by a major publisher. The word "cryonics" had not been invented yet, but the concept was clearly established.

In December 1963 Evan Cooper founded the world's first cryonics organization, the Life Extension Society, intended to create a network of cryonics groups throughout the world. Cooper eventually became discouraged, however, and dropped his cryonics-promoting activities to pursue his interest in sailing; he was later lost at sea. Cooper's networking had not been in vain, however, because people who had become acquainted through his efforts formed cryonics organizations in northern and southern California as well as in New York.

In 1965 a New York industrial designer named Karl Werner coined the word "cryonics". That same year Saul Kent, Curtis Henderson and Werner founded the Cryonics Society of New York. Werner soon drifted away from cryonics and became involved in Scientology, but Kent and Henderson remained devoted to cryonics. In 1966 the Cryonics Society of Michigan and the Cryonics Society of California were founded. Unlike the other two organizations, the Cryonics Society of Michigan was an educational and social group which had no intention of actually cryopreserving people; it exists today under the name Immortalist Society.

A TV repairman named Robert Nelson was the driving force behind the Cryonics Society of California. On January 12, 1967 Nelson froze a psychology professor named James Bedford. Bedford was injected with multiple shots of DMSO, and a thumper was applied in an attempt to circulate the DMSO with chest compressions. Nelson recounted the story in his book WE FROZE THE FIRST MAN. Bedford's wife and son took Bedford's body from Nelson after six days and the family kept Dr. Bedford in cryogenic care until 1982 when he was transferred to Alcor. Of 17 cryonics patients cryopreserved in the period between 1967 and 1973, only Bedford remains in liquid nitrogen.

In 1974 Curtis Henderson, who had been maintaining three cryonics patients for the Cryonics Society of New York, was told by the New York Department of Public Health that he must close down his cryonics facility immediately or be fined $1,000 per day. The three cryonics patients were returned to their families.

In 1979 an attorney for relatives of one of the Cryonics Society of California patients led journalists to the Chatsworth, California cemetery where they entered the vault where the patients were being stored. None of the nine “cryonics patients” were being maintained in liquid nitrogen, and all were badly decomposed. Nelson and the funeral director in charge were both sued. The funeral director could pay (through his liability insurance), but Nelson had no money. Nelson had taken most of the patients as charity cases or on a “pay-as-you-go” basis where payments had not been continued. The Chatsworth Disaster is the greatest catastrophe in the history of cryonics.

In 1969 the Bay Area Cryonics Society (BACS) was founded by two physicians, with the assistance of others, notably Edgar Swank. BACS (which later changed its name to the American Cryonics Society) is now the cryonics organization with the longest continuous history in offering cryonics services. In 1972 Trans Time was founded as a for-profit perfusion service-provider for BACS. Both BACS and Alcor intended to store patients in New York, but in 1974 Trans Time was forced to create its own cryostorage facility due to the closure of the storage facility in New York. Until the 1980s all BACS and Alcor patients were stored in liquid nitrogen at Trans Time.

In 1977 Trans Time was contacted by a UCLA cardiothoracic surgeon and medical researcher named Jerry Leaf, who responded to an advertisement Trans Time had placed in REASON magazine. In 1978 Leaf created a company called Cryovita devoted to doing cryonics research and to providing perfusion services for both Alcor and Trans Time.

By the 1980s acrimony between Trans Time and BACS caused the organizations to disassociate. BACS was renamed the American Cryonics Society (ACS) in 1985. Jim Yount (who joined BACS in 1972 and became a Governor two years later) and Edgar Swank have been the principal activists in ACS into the 21st century.

For 26 years from the time of its inception until 1998, the President of Trans Time was Art Quaife. The name "Trans Time" was inspired by Trans World Airlines, then a very prominent airline. Also active in Trans Time was Paul Segall, who had been an active member of the Cryonics Society of New York. Segall obtained a PhD from the University of California at Berkeley, studying the life-extending effects of tryptophan deprivation. He wrote a book on life extension (which included a section on cryonics) entitled LIVING LONGER, GROWING YOUNGER, and founded a biotech company called BioTime, which sells blood replacement products. In 2003 Segall deanimated due to an aortic hemorrhage. He was straight-frozen because his Trans Time associates didn't think he could be perfused. The only other cryonics patients at Trans Time are two brains, one of which is the brain of Luna Wilson, the murdered teenage daughter of Robert Anton Wilson. When Michael West (who is on the Alcor Scientific Advisory Board) became BioTime CEO, the company shifted its emphasis to stem cells.

Aside from Trans Time, the other four cryonics organizations in the world which are storing human patients in liquid nitrogen are the Alcor Life Extension Foundation (founded in 1972 by Fred and Linda Chamberlain), the Cryonics Institute (founded in 1976 by Robert Ettinger), KrioRus (located near Moscow in Russia, founded in 2006), and Oregon Cryonics (incorporated by former CI Director Jordan Sparks, and beginning service in May 2014).

Fred and Linda Chamberlain had been extremely active in the Cryonics Society of California until 1971 when they became distrustful of Robert Nelson because of (among other reasons) Nelson’s refusal to allow them to see where the organization’s patients were being stored. In 1972 the Chamberlains founded Alcor, named after a star in the Big Dipper used in ancient times as a test of visual acuity. Alcor’s first cryonics patient was Fred Chamberlain’s father who, in 1976, became the world’s first “neuro” (head-only) cryonics patient. (Two-thirds of Alcor patients are currently “neuros”). Trans Time provided cryostorage for Alcor until Alcor acquired its own storage capability in 1982.

After 1976 the Chamberlains encouraged others to run Alcor, beginning with a Los Angeles physician, who became Alcor President. The Chamberlains moved to Lake Tahoe, Nevada where they engaged in rental as well as property management and held annual Life Extension Festivals until 1986. They had to pay hefty legal fees to avoid being dragged into the Chatsworth lawsuits, a fact that increased their dislike of Robert Nelson. In 1997 they returned to Alcor when Fred became President and Linda was placed in charge of delivering cryonics service. Fred and Linda started two companies (Cells4Life and BioTransport) associated with Alcor, assuming responsibility for all unsecured debt of those companies. Financial disaster and an acrimonious dispute with Alcor management led to Fred and Linda leaving Alcor in 2001, filing for bankruptcy and temporarily joining the Cryonics Institute. They returned to Alcor in 2011, and Fred became an Alcor patient in 2012.

Saul Kent, one of the founders of the Cryonics Society of New York, became one of Alcor's strongest supporters. He was a close associate of Pearson & Shaw, authors of the 1982 best-selling book LIFE EXTENSION. Pearson & Shaw were flooded with mail as a result of their many media appearances, and they gave the mail to Saul Kent. Kent used that mail to create a mailing list for a new mail-order business he created for selling supplements: the Life Extension Foundation (LEF). Millions of dollars earned from LEF have not only helped build Alcor, but have created and supported a company doing cryobiological research (21st Century Medicine), a company doing anti-ischemia research (Critical Care Research), and a company developing the means to apply the research to standby and transport cryonics procedures (Suspended Animation, Inc.).

In December 1987 Kent brought his terminally ill mother (Dora Kent) into the Alcor facility, where she deanimated. The body (without the head) was given to the local coroner (Dora Kent was a "neuro"). The coroner issued a death certificate which gave death as due to natural causes. Barbiturate had been given to Dora Kent after legal death to slow brain metabolism. The coroner's office did not understand that circulation was artificially restarted after legal death, which distributed the barbiturate throughout the body.

After the autopsy, the coroner's office changed the cause of death on the death certificate to homicide. In January 1988 Alcor was raided by coroner's deputies, a SWAT team, and UCLA police. The Alcor staff was taken to the police station in handcuffs and the Alcor facility was ransacked, with computers and records being seized. The coroner's office wanted to seize Dora Kent's head for autopsy, but the head had been removed from the Alcor facility and taken to a location that was never disclosed. Alcor later sued for false arrest and for illegal seizures, winning both court cases. (See Dora Kent: Questions and Answers)

Growth in Alcor membership was fairly slow and linear until the mid-1980s, following which there was a sharp increase in growth. Ironically, publicity surrounding the Dora Kent case is often cited as one of the reasons for the growth acceleration. Another reason often cited is the 1986 publication of ENGINES OF CREATION, a seminal book about nanotechnology which contained an entire chapter devoted to cryonics (the possibility that nanomachines could repair freezing damage). Hypothermic dog experiments associated with cryonics were also publicized in the mid-1980s. In the late 1980s Alcor Member Dick Clair, who was dying of AIDS, fought in court for the legal right to practice cryonics in California (a battle that was ultimately won). But the Cryonics Institute did not experience a growth spurt until the advent of the internet in the 1990s. The American Cryonics Society does not publish membership statistics.

Robert Ettinger, Saul Kent and Mike Darwin are arguably the three individuals who had the most powerful impact on the early history of cryonics. Having experimented with the effects of cold on organisms from the time he was a child, Darwin learned of cryonics at the Indiana State Science Fair in 1968. He was able to spend summers at the Cryonics Society of New York (living with Curtis Henderson). Darwin was given the responsibility of perfusing cryonics patients at the age of 17 in recognition of his technical skills.

Born “Michael Federowicz”, Mike chose to use his high school nickname “Darwin” as a cryonics surname when he began his career as a kidney dialysis technician. He had been given his nickname as a result of being known at school for arguing for evolution, against creationism. He is widely known in cryonics as “Mike Darwin”, although his legal surname remains Federowicz.

Not long after Alcor was founded, Darwin moved to California at the invitation of Fred and Linda Chamberlain. He spent a year as the world’s first full-time dedicated cryonics researcher until funding ran out. Returning to Indiana, Darwin (along with Steve Bridge) created a new cryonics organization that accumulated considerable equipment and technical capability.

In 1981 Darwin moved back to California, largely because of his desire to work with Jerry Leaf. In 1982 the Indiana organization merged with Alcor, and in 1983 Darwin was made President of Alcor. In California Darwin, Leaf and biochemist Hugh Hixon (who has considerable engineering skill) developed a blood substitute capable of sustaining life in dogs for at least 4 hours at or below 9°C. Leaf and Darwin had some nasty confrontations with members of the Society for Cryobiology over that organization's 1985 refusal to publish their research. The Society for Cryobiology adopted a bylaw that prohibited cryonicists from belonging to the organization. Mike Darwin later wrote a summary of the conflicts between cryonicists and cryobiologists under the title Cold War. Similar experiments were done by Paul Segall and his associates, which generated a great deal of favorable media exposure for cryonics.

In 1988 Carlos Mondragon replaced Mike Darwin as Alcor President because Mondragon proved to be more capable of handling the stresses of the Dora Kent case. Darwin had vast medical knowledge (especially as it applies to cryonics) and possessed exceptional technical skills. He was a prolific and lucid writer; much of the material in the Alcor website library was written by Mike Darwin. Darwin worked as Alcor's Research Director from 1988 to 1992, during which time he developed a Transport Technician course in which he trained Alcor Members in the technical skills required to deliver the initial phases of cryonics service.

For undisclosed reasons, Darwin left Alcor in 1992, much to the distress of many Alcor Members who regarded Mike Darwin as by far the person in the world most capable of delivering competent cryonics technical service. In 1993 a new cryonics organization called CryoCare Foundation was created, largely so that people could benefit from Darwin’s technical skills. Another strongly disputed matter was the proposed move of Alcor from California to Arizona (implemented in February 1994).

About 50 Alcor Members left Alcor to form and join CryoCare. Darwin delivered standby, transport and perfusion services as a subcontractor to CryoCare and the American Cryonics Society (ACS), while Paul Wakfer provided cryostorage services to both organizations under contract. Darwin's company was called BioPreservation and Wakfer's company was called CryoSpan. Eventually, serious personality conflicts developed between Darwin and Wakfer. In 1999 Darwin stopped providing service to CryoCare, and Wakfer turned CryoSpan over to Saul Kent. Kent then refused to accept additional cryonics patients at CryoSpan, and was determined to wind down CryoSpan in a way that would not harm the cryonics patients being stored there.

I (Ben Best) had been CryoCare Secretary, and became President of CryoCare in 1999 in an attempt to arrange alternate service providers for CryoCare. The Cryonics Institute agreed to provide cryostorage. Various contractors were found to provide the other services, but eventually CryoCare could not be sustained. In 2003 I became President of the Cryonics Institute. I assisted with the moving of CryoSpan’s two CryoCare patients to Alcor and CryoSpan’s ten ACS patients to the Cryonics Institute. In 2012 I resigned as President of the Cryonics Institute, and began working for the Life Extension Foundation. Dennis Kowalski became the new CI President.

Mike Darwin continued to work as a researcher at Saul Kent's company Critical Care Research (CCR) until 2001. Darwin's most notable accomplishment at CCR was his role in developing methods to sustain dogs without neurological damage following 17 minutes of warm ischemia. Undisclosed conflicts with CCR management caused Darwin to leave CCR in 2001. He worked briefly with Alcor and Suspended Animation, and later did consulting work for the Cryonics Institute. But for the most part Darwin has been distanced from cryonics organizations.

The history of the Cryonics Institute (CI) has been less tumultuous than that of Alcor. CI has had primarily two Presidents: Robert Ettinger from April 1976 to September 2003, and Ben Best from then until June 2012. (Andrea Foote was briefly President in 1994, but soon became ill with ovarian cancer.) Robert Ettinger decided to build fiberglass cryostats rather than buy dewars because CI's Detroit facility was too small for dewars. Robert Ettinger's mother became the first patient of the Cryonics Institute when she deanimated in 1977. She was placed in dry ice for about ten years until CI began using liquid nitrogen in 1987 (the same year that Robert Ettinger's first wife became CI's second patient). In 1994 CI acquired the Erfurt-Runkel Building in Clinton Township (a suburb northeast of Detroit) for about $300,000, roughly the same amount of money as had been bequeathed to CI by CI Member Jack Erfurt (who had deanimated in 1992). Erfurt's wife (Andrea Foote, who deanimated in 1995) also bequeathed $300,000 to CI. Andy Zawacki, nephew of Connie Ettinger (wife of Robert Ettinger's son David), built a ten-person cryostat in the new facility. Fourteen patients were moved from the old Detroit facility to the new Cryonics Institute facility. Andy Zawacki is a man of many talents. He has been a CI employee since January 1985 (when he was 19 years old), handling office work (mostly Member sign-ups and contracts), building maintenance and equipment fabrication, but also patient perfusion and cool-down.

Throughout most of the history of cryonics, glycerol has been the cryoprotectant used to perfuse cryonics patients. Glycerol reduces, but does not eliminate, ice formation. In the late 1990s research conducted at 21st Century Medicine and at UCLA under the direction of 21st Century Medicine confirmed that ice formation in brain tissue could be completely eliminated by a judiciously chosen vitrification mixture of cryoprotectants. In 2001 Alcor began vitrification perfusion of cryonics patients with a cryoprotectant mixture called B2C, and not long thereafter adopted a better mixture called M22. At the Cryonics Institute a vitrification mixture called CI-VM-1 was developed by CI staff cryobiologist Dr. Yuri Pichugin (who was employed at CI from 2001 to 2007). The first CI cryonics patient was vitrified in 2005.

In 2002 Alcor cryopreserved baseball legend Ted Williams. Two of the Williams children attested that their father had wanted to be cryopreserved, but a third child protested bitterly. Journalists at Sports Illustrated wrote a sensationalistic exposé of Alcor based on information supplied to them by Alcor employee Larry Johnson, who had surreptitiously tape-recorded many conversations in the facility. The ensuing media circus led to some nasty moves by politicians to incapacitate cryonics organizations. In Arizona, state representative Bob Stump attempted to put Alcor under the control of the Funeral Board. The Arizona Funeral Board Director told the New York Times, "These companies need to be regulated or deregulated out of business." Alcor fought hard, and in 2004 the legislation was withdrawn. Alcor hired a full-time lobbyist to watch after its interests in the Arizona legislature. Although the Cryonics Institute had not been involved in the Ted Williams case, the State of Michigan placed the organization under a "Cease and Desist" order for six months, ultimately classifying and regulating the Cryonics Institute as a cemetery in 2004. In the spirit of deregulation, the new Republican Michigan government removed the cemetery designation for CI in 2012.

In 2002 Suspended Animation, Inc. (SA) was created to do research on improved delivery of cryonics services, and to provide those services to other cryonics organizations. In 2003 SA perfused a cryonics patient for the American Cryonics Society, and the patient was stored at the Cryonics Institute. Alcor has long offered standby and transport services to its Members as an integral part of Membership, but the Cryonics Institute (CI) had not done so. In 2005 the CI Board of Directors approved contracts with SA which would allow CI Members the option of receiving SA standby and transport if they so chose. Several years later, all Alcor standby cases in the continental United States outside of Arizona were handled by SA, and SA COO Catherine Baldwin became an Alcor Director. Alcor has continued to do standby and stabilization in Arizona. Any Alcor Member who is diagnosed as being terminally ill with a prognosis of less than 90 days of life will be reimbursed $10,000 for moving to a hospice in the Phoenix, Arizona area. By 2014, over 160 of the roughly 550 CI Members who had arrangements for cryopreservation services from CI had opted to also have Standby, Stabilization and Transport (SST) from SA.

A Norwegian ACS Member named Trygve Bauge brought his deceased grandfather to the United States and stored the body at Trans Time from 1990 to 1993. Bauge then transported his grandfather to Nederland, Colorado in dry ice with the intention of starting his own cryonics company. But Bauge was deported back to Norway and the story of his grandfather created a media circus. The town outlawed cryonics, but had to “grandfather the grandfather” who has remained there on dry ice. After a “cooling-off period” locals turned the publicity to their advantage by creating an annual Frozen Dead Guy Days festival which features coffin races, snow sculptures, etc. Many cryonicists insist that dry ice is not cold enough for long-term cryopreservation and that the Nederland festival is negative publicity for cryonics.

After several years of management turnover at Alcor, money was donated to find a lasting President. In January 2011, Max More was selected as the new President and CEO of Alcor. In July 2011 Robert Ettinger was cryopreserved at CI after a standby organized by his son and daughter-in-law. In July 2012 Ben Best ended his 9-year service as CI President and CEO by going to work for the Life Extension Foundation as Director of Research Oversight. The Life Extension Foundation is the major source of cryonics-related research funding, including funding for 21st Century Medicine, Suspended Animation, Inc., and Advanced Neural Biosciences, and funds many anti-aging research projects as well. Dennis Kowalski became the new CI President. Ben Best retired as CI Director in September 2014.

In January 2011 CI shipped its vitrification solution (CI-VM-1) to the United Kingdom so that European cryonics patients could be vitrified before shipping in dry ice to the United States. This procedure was applied to the wife of UK cryonicist Alan Sinclair in May 2013. In the summer of 2014 Alcor began offering this “field vitrication” service to its members in Canada and overseas.

In 2006 the first cryonics organization to offer cryonics services outside of the United States was created in Russia. KrioRus has a facility in a Moscow suburb where many cryonics patients are being stored in liquid nitrogen. In 2014 Oregon Cryonics (created by former CI Director Jordan Sparks) began providing neuro-only (head or brain) services at low cost for cryopreservation and chemical preservation.

(For details on the current status of the different cryonics organizations, see Comparing Procedures and Policies.)



Read the original:

A History of Cryonics – BEN BEST

Posted in Cryonics | Comments Off on A History of Cryonics – BEN BEST

Hedonistic Theories – Philosophy Home Page

Posted: September 18, 2016 at 8:14 am

Abstract: The refinement of hedonism as an ethical theory involves several surprising and important distinctions. Several counter-examples to hedonism are discussed.

I. Hedonistic theories are one possible answer to the question of “What is intrinsic goodness?”

Similar theories might substitute enjoyment, satisfaction, or happiness for pleasure as the relevant concept. A major problem for hedonism is getting clear about what pleasure and pain consist in. Are pleasures events, properties, states, or some other kind of entity?

II. The hedonistic position can be substantially refined.

Some persons have mistakenly taken this distinction to mean that “Therefore, you can’t generalize about what actions should be done because they would differ for different people; hence, ethics is relative.”

Think about how this statement is logically related to C.L. Kleinke’s observation in his book Self-Perception that “What distinguishes emotions such as anger, fear, love, elation, anxiety, and disgust is not what is going on inside the body but rather what is happening in the outside environment.” (C.L. Kleinke, Self-Perception (San Francisco: W.H. Freeman, 1978), 2.)

III. The hedonist doesn't seek pleasure constantly; a constant indulgence of appetites makes people miserable in the long run.

When hungry, seek food; when poor, seek money; when restless, seek physical activity. We don't seek pleasure in these situations. As John Stuart Mill stated, "Those only are happy who have their minds fixed on some object other than their own happiness ... Aiming thus at something else, they find happiness along the way."

IV. John Hospers proposes three counter-examples to hedonism.

Recommended Sources

Hedonism: A discussion of hedonism from the Stanford Encyclopedia of Philosophy by Andrew Moore, with some emphasis on its relation to egoism and utilitarianism.

Hedonism: An outline of some basic concepts of hedonistic philosophy, with brief mention of Epicurus, Bentham, Mill, and Freud, from Wikipedia.

Read more:

Hedonistic Theories – Philosophy Home Page

Posted in Hedonism | Comments Off on Hedonistic Theories – Philosophy Home Page

Clouds of Secrecy: The Army’s Germ Warfare Tests Over …

Posted: September 8, 2016 at 6:49 am


This book contains shocking but carefully documented details about germ warfare tests conducted by the U.S. Army in the 1960s. It is an eye opener about a range of Army experiments that exposed millions of Americans to various bacteria without their knowledge. The purpose supposedly was to see how vulnerable Americans would be to a germ attack. The book is clearly written and provides riveting descriptions of many of the tests. The most amazing thing about the tests was the number of American cities and their populations that were targeted. They included New York City, San Francisco, St. Louis and hundreds of other cities and towns. The germs were not true warfare agents like anthrax, but they apparently caused several people to become sick, some perhaps fatally. In the current climate of fear about terrorism, Clouds of Secrecy provides an invaluable reminder that secret government actions intended to protect the public may themselves create risks to its safety.

Read more from the original source:

Clouds of Secrecy: The Army’s Germ Warfare Tests Over …

Posted in Germ Warfare | Comments Off on Clouds of Secrecy: The Army’s Germ Warfare Tests Over …

History of artificial intelligence – Wikipedia, the free …

Posted: August 30, 2016 at 11:03 pm

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with “an ancient wish to forge the gods.”

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The Turing test was proposed by British mathematician Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: "I propose to consider the question, 'Can machines think?'" The term 'Artificial Intelligence' was coined at a conference held at Dartmouth College in 1956.[2] Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created artificial intelligence field with the Logic Theory Machine (1956), and the General Problem Solver in 1957.[3] In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000.[4] John McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.[5]

In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars, but by the late 80s the investors became disillusioned and withdrew funding again.

McCorduck (2004) writes “artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized,” expressed in humanity’s myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea.[7] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem.[8] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots), and speculation, such as Samuel Butler's "Darwin among the Machines." AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[11] Hero of Alexandria,[12] Al-Jazari and Wolfgang von Kempelen.[14] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it."[15][16]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or "formal", reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), Muslim mathematician al-Khwārizmī (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.[17]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[18] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all the possible knowledge.[19] Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[20]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[21] Hobbes famously wrote in Leviathan: "reason is nothing but reckoning".[22] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, sit down to their slates, and say to each other (with a friend as witness, if they liked): Let us calculate."[23] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica, in 1913. Inspired by Russell's success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: "can all of mathematical reasoning be formalized?"[17] His question was answered by Gödel's incompleteness proof, Turing's machine and Church's lambda calculus.[17][24] Their answer was surprising in two ways.

First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[17][26]
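To make the idea concrete, here is a minimal sketch (my own illustration in Python, not anything from the historical papers) of a Turing-style machine: a finite rule table applied to a tape of symbols, which is all that "shuffling symbols as simple as 0 and 1" amounts to.

```python
# Minimal Turing-machine sketch (illustrative only): a table of
# (state, symbol) -> (write, move, next state) rules applied to a tape.
# The example machine walks right, flipping every bit, and halts on a blank.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_rules))  # -> 01001_ (trailing blank where it halted)
```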

Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”.[27] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)

The first modern computers were the massive code breaking machines of the Second World War (such as Z3, ENIAC and Colossus). The latter two of these machines were based on the theoretical foundation laid by Alan Turing[28] and developed by John von Neumann.[29]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 30s, 40s and early 50s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener’s cybernetics described control and stability in electrical networks. Claude Shannon’s information theory described digital signals (i.e., all-or-nothing signals). Alan Turing’s theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[30]

Examples of work in this vein include robots such as W. Grey Walter's turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[31]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[32] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[33] Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.
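The idealized neuron can be sketched in a few lines. The toy below is my own example, with made-up weights and thresholds; it shows a threshold unit realizing simple logic gates, which is the sense in which such networks "perform simple logical functions."

```python
# A McCulloch-Pitts-style threshold unit (toy example): the neuron fires
# (outputs 1) when the weighted sum of its binary inputs reaches a threshold.

def mp_neuron(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
```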

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[34] He noted that “thinking” is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible and the paper answered all the most common objections to the proposition.[35] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[36] Arthur Samuel's checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[37] Game AI would continue to be used as a measure of progress in AI throughout its history.

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[38]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the “Logic Theorist” (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica, and find new and more elegant proofs for some.[39] Simon said that they had “solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind.”[40] (This was an early statement of the philosophical position John Searle would later call “Strong AI”: that machines can contain minds just as human bodies do.)[41]

The Dartmouth Conference of 1956[42] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”.[43] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[44] At the conference Newell and Simon debuted the “Logic Theorist” and McCarthy persuaded the attendees to accept “Artificial Intelligence” as the name of the field.[45] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[46]

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply “astonishing”:[47] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such “intelligent” behavior by machines was possible at all.[48] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[49] Government agencies like ARPA poured money into the new field.[50]

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called “reasoning as search”.[51]

The principal difficulty was that, for many problems, the number of possible paths through the “maze” was simply astronomical (a situation known as a “combinatorial explosion”). Researchers would reduce the search space by using heuristics or “rules of thumb” that would eliminate those paths that were unlikely to lead to a solution.[52]
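A toy rendering of "reasoning as search" may help here (my own illustration; the problem, move set and heuristic are invented for the example): depth-first search with backtracking over a tree of moves, with a heuristic ordering so that the most promising paths are tried first.

```python
# "Reasoning as search" in miniature: depth-first search with backtracking,
# plus a heuristic that expands the move landing closest to the target first.
# The toy problem is reaching a target number from 1 using +1 and *2.

def solve(state, target, path, depth_limit=12):
    if state == target:
        return path
    if depth_limit == 0 or state > target:
        return None                      # dead end: backtrack
    moves = [("*2", state * 2), ("+1", state + 1)]
    moves.sort(key=lambda m: abs(target - m[1]))   # heuristic ordering
    for name, nxt in moves:
        result = solve(nxt, target, path + [name], depth_limit - 1)
        if result is not None:
            return result
    return None

print(solve(1, 21, []))  # -> ['*2', '*2', '*2', '*2', '+1', '+1', '+1', '+1', '+1']
```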

Newell and Simon tried to capture a general version of this algorithm in a program called the “General Problem Solver”.[53] Other “searching” programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter’s Geometry Theorem Prover (1958) and SAINT, written by Minsky’s student James Slagle (1961).[54] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[55]

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow’s program STUDENT, which could solve high school algebra word problems.[56]

A semantic net represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[57] and the most successful (and controversial) version was Roger Schank's Conceptual dependency theory.[58]
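In sketch form (my own minimal example, not Quillian's program), a semantic net is just labeled edges between concept nodes:

```python
# A bare-bones semantic net: concepts are nodes, labeled relations are edges,
# and simple questions are answered by following links.

semantic_net = {
    ("house", "has-a"): ["door", "roof"],
    ("door",  "is-a"):  ["entrance"],
    ("house", "is-a"):  ["building"],
}

def related(concept, relation):
    return semantic_net.get((concept, relation), [])

print(related("house", "has-a"))   # ['door', 'roof']
```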

Joseph Weizenbaum’s ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[59]
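A toy in the same spirit (my own sketch, far cruder than Weizenbaum's program) shows how far keyword matching and canned templates alone can go:

```python
# ELIZA-flavored toy: no understanding at all, just keyword patterns and
# canned templates that echo the user's own words back.
# (A fuller version would also swap pronouns, e.g. "my" -> "your".)
import re

rules = [
    (r"I need (.*)",     "Why do you need {0}?"),
    (r"I am (.*)",       "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def respond(sentence):
    for pattern, template in rules:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."          # default canned response

print(respond("I am sad about my exams"))  # How long have you been sad about my exams?
```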

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a “blocks world,” which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[60]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented “constraint propagation”), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd’s SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[61]

The first generation of AI researchers made these predictions about their work:

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[66] DARPA made similar grants to Newell and Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[67] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[68] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[69]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should “fund people, not projects!” and allowed researchers to pursue whatever directions might interest them.[70] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[71] but this “hands off” approach would not last.

In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[72] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky’s devastating criticism of perceptrons.[73] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[74]

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, “toys”.[75] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[76]

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[84] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country.[85] (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.)[86] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[87] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. “Many researchers were caught up in a web of increasing exaggeration.”[88] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund “mission-oriented direct research, rather than basic undirected research”. Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[89]

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[90] Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know how".[91][92] John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as "thinking".[93]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference "know how" or "intentionality" made to an actual computer program. Minsky said of Dreyfus and Searle "they misunderstand, and should be ignored."[94] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me."[95] Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he "deliberately made it plain that theirs was not the way to treat a human being."[96]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[97]

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that “perceptron may eventually be able to learn, make decisions, and translate languages.” An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert’s 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt’s predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[73]
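A single-layer perceptron is small enough to sketch (my own minimal example): the weights are nudged toward each misclassified example until a linearly separable function such as OR is learned; no single layer of this kind can represent XOR, which is the sort of limitation Minsky and Papert analyzed.

```python
# A single-layer perceptron in the spirit of Rosenblatt's model (toy sketch):
# nudge the weights toward every misclassified example. It learns OR, which
# is linearly separable; XOR cannot be represented by any single layer.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = int(w[0] * x1 + w[1] * x2 + b > 0)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b    += lr * err
    return w, b

OR_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR_data)
print([int(w[0]*x1 + w[1]*x2 + b > 0) for (x1, x2), _ in OR_data])  # [0, 1, 1, 1]
```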

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[98] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[99] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and this soon led to a collaboration with French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[100] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[101]
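The flavor of rule-based deduction can be suggested with a propositional toy (my own sketch in Python, not Prolog, and without the variable unification that makes Prolog powerful): forward chaining over Horn rules of the form "if all the body facts hold, the head fact holds."

```python
# Propositional Horn-clause toy: repeatedly apply rules whose bodies are
# already satisfied until no new facts can be derived (forward chaining).

rules = [
    (["parent", "male"], "father"),      # parent AND male -> father
    (["father"], "ancestor"),
]
facts = {"parent", "male"}

changed = True
while changed:
    changed = False
    for body, head in rules:
        if head not in facts and all(f in facts for f in body):
            facts.add(head)
            changed = True

print(facts)   # {'parent', 'male', 'father', 'ancestor'}
```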

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[102] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problemsnot machines that think as people do.[103]

Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[104] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[105]

In 1975, in a seminal paper, Minsky noted that many of his fellow “scruffy” researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be “logical”, but these structured sets of assumptions are part of the context of everything we say and think. He called these structures “frames”. Schank used a version of frames he called “scripts” to successfully answer questions about short stories in English.[106] Many years later object-oriented programming would adopt the essential idea of “inheritance” from AI research on frames.
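A frame can be sketched as a bundle of default slots that a more specific frame inherits and overrides (my own toy example), which is also the "inheritance" idea that object-oriented programming later adopted:

```python
# A toy "frame": default assumptions about a concept, with a child frame
# inheriting the defaults and overriding the ones that don't apply.

bird    = {"flies": True, "eats": "worms"}
penguin = {**bird, "flies": False}           # inherits defaults, overrides one

def slot(frame, name):
    return frame.get(name)

print(slot(bird, "flies"), slot(penguin, "flies"))   # True False
```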

In the 1980s a form of AI program called “expert systems” was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[107]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[108]
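As a toy illustration only (nothing like the scale or rule formats of Dendral, MYCIN or XCON), the core of an expert system can be sketched as backward chaining: to establish a goal, find a rule that concludes it and recursively establish that rule's conditions, bottoming out in known facts.

```python
# Toy backward-chaining rule engine over an invented diagnostic domain.

RULES = [
    ({"fever", "rash"}, "measles_suspected"),
    ({"measles_suspected"}, "refer_to_doctor"),
]

def prove(goal, facts):
    if goal in facts:
        return True
    return any(conclusion == goal and all(prove(c, facts) for c in conditions)
               for conditions, conclusion in RULES)

print(prove("refer_to_doctor", {"fever", "rash"}))   # True
```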

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[109] Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it going to in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[110]

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. "AI researchers were beginning to suspect (reluctantly, for it violated the scientific canon of parsimony) that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,"[111] writes Pamela McCorduck. "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay".[112] Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s.[113]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[114]

Chess-playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed by Carnegie Mellon University; Deep Thought's development paved the way for Deep Blue.[115]

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[116] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[117]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large scale projects in AI and information technology.[118][119] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[120]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a “Hopfield net”) could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called “backpropagation” (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[119][121]
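The idea behind a Hopfield net can be sketched briefly (my own toy example, not the construction in Hopfield's paper): patterns are stored in a symmetric weight matrix by a Hebbian rule, and a corrupted cue settles back onto the nearest stored pattern under repeated threshold updates.

```python
# Minimal Hopfield-net sketch: Hebbian outer-product storage, then recover a
# stored pattern from a corrupted cue by repeated sign updates.
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                      # no self-connections

state = np.array([1, -1, 1, 1, 1, -1])      # pattern 0 with one element flipped
for _ in range(5):                          # synchronous updates, for brevity
    state = np.where(W @ state >= 0, 1, -1)

print(state)   # -> [ 1 -1  1 -1  1 -1], the first stored pattern
```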

The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[119][122]

The business community's fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

The term “AI winter” was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[123] Their fears were well founded: in the late 80s and early 90s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[124]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were “brittle” (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[125]

In the late 80s, the Strategic Computing Initiative cut funding to AI “deeply and brutally.” New leadership at DARPA had decided that AI was not “the next wave” and directed funds towards projects that seemed more likely to produce immediate results.[126]

By 1991, the impressive list of goals penned in 1981 for Japan’s Fifth Generation Project had not been met. Indeed, some of them, like “carry on a casual conversation” had not been met by 2010.[127] As with other AI projects, expectations had run much higher than what was actually possible.[127]

In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[128] They believed that, to show real intelligence, a machine needs to have a body; it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec's paradox). They advocated building intelligence "from the bottom up."[129]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy’s logic and Minsky’s frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr’s work would be cut short by leukemia in 1980.)[130]

In a 1990 paper, “Elephants Don’t Play Chess,”[131] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since “the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.”[132] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[133]
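
Brooks’s subsumption architecture itself is not detailed here, but a rough Python sketch of the layered, sensor-driven control style this school advocated might look like the following; the robot, its sensors and its behaviours are invented purely for illustration and are not taken from Brooks’s work.

# Minimal sketch of a layered, sensor-driven controller in the spirit of
# "the world is its own best model": no internal world model is kept;
# each cycle re-senses the environment and the highest-priority behaviour
# that applies takes control. All names here are illustrative.

import random

def sense():
    """Stand-in for real sensors: returns fresh readings every cycle."""
    return {
        "obstacle_ahead": random.random() < 0.2,
        "battery_low": random.random() < 0.05,
    }

# Behaviours ordered from highest to lowest priority; each returns an
# action string if it wants control, or None to defer to lower layers.
def avoid_obstacles(percept):
    return "turn_left" if percept["obstacle_ahead"] else None

def recharge(percept):
    return "seek_charger" if percept["battery_low"] else None

def wander(percept):
    return "move_forward"  # default behaviour when nothing else fires

LAYERS = [avoid_obstacles, recharge, wander]

def control_step():
    percept = sense()            # sense the world anew every cycle
    for behaviour in LAYERS:     # earlier layers override later ones
        action = behaviour(percept)
        if action is not None:
            return action

if __name__ == "__main__":
    for _ in range(5):
        print(control_step())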

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI’s failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of “artificial intelligence”.[134] AI was both more cautious and more successful than it had ever been.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[135] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[136]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[137] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[138] In February 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[139]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of computers today.[140] In fact, Deep Blue’s computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[141] This dramatic increase is measured by Moore’s law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of “raw computer power” was slowly being overcome.
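
As a rough sanity check on the figures in the previous paragraph, a few lines of arithmetic (sketched below in Python) show that doubling every two years between 1951 and 1997 does indeed give a factor on the order of ten million; the dates and the doubling period are taken from the text above, not from a precise benchmark.

# Back-of-the-envelope check: if computing power doubles every two years,
# how much faster should a 1997 machine be than a 1951 machine?
# (Order-of-magnitude only; not a real benchmark.)

years = 1997 - 1951      # Ferranti Mark 1 (1951) to Deep Blue (1997)
doublings = years / 2    # one doubling every two years, as stated above
speedup = 2 ** doublings

print(f"{doublings:.0f} doublings -> ~{speedup:,.0f}x faster")
# 23 doublings -> ~8,388,608x faster, roughly the "10 million times"
# quoted in the text.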

A new paradigm called “intelligent agents” became widely accepted during the 90s.[142] Although earlier researchers had proposed modular “divide and conquer” approaches to AI,[143] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI.[144] When the economist’s definition of a rational agent was married to computer science’s definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are “intelligent agents”, as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as “the study of intelligent agents”. This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[145]
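
The definition above translates almost directly into code. The following minimal Python sketch shows the abstraction: anything that maps percepts to actions in pursuit of some performance measure counts as an agent. The thermostat example and its numbers are invented for illustration and do not come from any particular textbook implementation.

# Minimal sketch of the "intelligent agent" abstraction described above:
# an agent maps a stream of percepts to actions so as to do well by some
# performance measure. All names here are illustrative.

from abc import ABC, abstractmethod

class Agent(ABC):
    @abstractmethod
    def act(self, percept):
        """Choose an action given the latest percept."""

class ThermostatAgent(Agent):
    """A trivially simple agent: the percept is a temperature reading."""
    def __init__(self, target):
        self.target = target

    def act(self, percept):
        if percept < self.target - 1:
            return "heat_on"
        if percept > self.target + 1:
            return "heat_off"
        return "do_nothing"

def run(agent, percepts):
    """Drive the agent with a sequence of percepts and collect its actions."""
    return [agent.act(p) for p in percepts]

if __name__ == "__main__":
    print(run(ThermostatAgent(target=20), [17, 19, 22, 20]))
    # ['heat_on', 'do_nothing', 'heat_off', 'do_nothing']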

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell’s SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[144][146]

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[147] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous “scientific” discipline. Russell & Norvig (2003) describe this as nothing less than a “revolution” and “the victory of the neats”.[148][149]

Judea Pearl’s highly influential 1988 book Probabilistic Reasoning in Intelligent Systems[150] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for “computational intelligence” paradigms like neural networks and evolutionary algorithms.[148]
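
To make “Bayesian networks” slightly more concrete, here is a hand-rolled two-variable example in Python that computes a posterior probability by direct enumeration; the variable names and probabilities are made up for illustration and are not drawn from Pearl’s book.

# Tiny Bayesian-network-style example: two binary variables,
# Disease -> TestPositive, with made-up probabilities. We compute
# P(Disease | TestPositive) by enumeration, the simplest exact method.

P_disease = 0.01               # prior P(Disease = true)
P_pos_given_disease = 0.95     # sensitivity of the (hypothetical) test
P_pos_given_no_disease = 0.05  # false-positive rate

def posterior_disease_given_positive():
    # Bayes' rule: P(D | +) = P(+ | D) P(D) / P(+)
    p_joint_true = P_pos_given_disease * P_disease
    p_joint_false = P_pos_given_no_disease * (1 - P_disease)
    return p_joint_true / (p_joint_true + p_joint_false)

if __name__ == "__main__":
    print(f"P(disease | positive test) = {posterior_disease_given_positive():.3f}")
    # ~0.161: even a fairly accurate test yields a modest posterior when the
    # prior is low; this is the kind of reasoning these tools automate.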

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems,[151] and these solutions proved useful throughout the technology industry,[152] in areas such as data mining, industrial robotics, logistics,[153] speech recognition,[154] banking software,[155] medical diagnosis[155] and Google’s search engine.[156]

The field of AI received little or no credit for these successes. Many of AI’s greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[157] Nick Bostrom explains: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”[158]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research, as the New York Times reported in 2005: “Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”[159][160][161]

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[162]

Marvin Minsky asks “So the question is why didn’t we get HAL in 2001?”[163] Minsky believes that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blames the qualification problem.[164] For Ray Kurzweil, the issue is computer power and, using Moore’s Law, he predicts that machines with human-level intelligence will appear by 2029.[165] Jeff Hawkins argues that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[166] There are many other explanations and for each there is a corresponding research program underway.


Go here to read the rest:

History of artificial intelligence – Wikipedia, the free …

Posted in Ai