Tag Archives: book

A History of Cryonics – BEN BEST

Posted: September 22, 2016 at 7:51 pm

by Ben Best

Robert Ettinger is widely regarded as the “father of cryonics” (although he often said that he would rather be the grandson). Mr. Ettinger earned a Purple Heart in World War II as a result of injury to his leg by an artillery shell. He subsequently became a college physics teacher after earning two Master’s Degrees from Wayne State University. (He has often been erroneously called “Doctor” and “Professor”.) Robert Ettinger was cryopreserved at the Cryonics Institute in July 2011 at the age of 92. See The Cryonics Institute’s 106th Patient Robert Ettinger for details.

A lifelong science fiction buff, Ettinger conceived the idea of cryonics upon reading a story called The Jameson Satellite in the July 1931 issue of Amazing Stories magazine. In 1948 Ettinger published a short story with a cryonics theme titled The Penultimate Trump. In 1962 he self-published THE PROSPECT OF IMMORTALITY, a non-fiction book explaining in detail the methods and rationale for cryonics. He mailed the book to 200 people listed in WHO’S WHO IN AMERICA. Also in 1962, Evan Cooper independently self-published IMMORTALITY: PHYSICALLY, SCIENTIFICALLY, NOW, likewise a book advocating cryonics. In 1964 Isaac Asimov assured Doubleday that (although socially undesirable, in his opinion) cryonics is based on reasonable scientific assumptions. This allowed THE PROSPECT OF IMMORTALITY to be printed and distributed by a major publisher. The word “cryonics” had not been invented yet, but the concept was clearly established.

In December 1963 Evan Cooper founded the world’s first cryonics organization, the Life Extension Society, intended to create a network of cryonics groups throughout the world. Cooper eventually became discouraged, however, and he dropped his cryonics-promoting activities to pursue his interest in sailing. He was eventually lost at sea. Cooper’s networking had not been in vain, however, because people who had become acquainted through his efforts formed cryonics organizations in northern and southern California as well as in New York.

In 1965 a New York industrial designer named Karl Werner coined the word “cryonics”. That same year Saul Kent, Curtis Henderson and Werner founded the Cryonics Society of New York. Werner soon drifted away from cryonics and became involved in Scientology, but Kent and Henderson remained devoted to cryonics. In 1966 the Cryonics Society of Michigan and the Cryonics Society of California were founded. Unlike the other two organizations, the Cryonics Society of Michigan was an educational and social group with no intention of actually cryopreserving people; it exists today under the name Immortalist Society.

A TV repairman named Robert Nelson was the driving force behind the Cryonics Society of California. On January 12, 1967 Nelson froze a psychology professor named James Bedford. Bedford was injected with multiple shots of DMSO, and a thumper was applied in an attempt to circulate the DMSO with chest compressions. Nelson recounted the story in his book WE FROZE THE FIRST MAN. Bedford’s wife and son took Bedford’s body from Nelson after six days and the family kept Dr. Bedford in cryogenic care until 1982, when he was transferred to Alcor. Of the 17 cryonics patients cryopreserved in the period between 1967 and 1973, only Bedford remains in liquid nitrogen.

In 1974 Curtis Henderson, who had been maintaining three cryonics patients for the Cryonics Society of New York, was told by the New York Department of Public Health that he must close down his cryonics facility immediately or be fined $1,000 per day. The three cryonics patients were returned to their families.

In 1979 an attorney for relatives of one of the Cryonics Society of California patients led journalists to the Chatsworth, California cemetery where they entered the vault where the patients were being stored. None of the nine “cryonics patients” were being maintained in liquid nitrogen, and all were badly decomposed. Nelson and the funeral director in charge were both sued. The funeral director could pay (through his liability insurance), but Nelson had no money. Nelson had taken most of the patients as charity cases or on a “pay-as-you-go” basis where payments had not been continued. The Chatsworth Disaster is the greatest catastrophe in the history of cryonics.

In 1969 the Bay Area Cryonics Society (BACS) was founded by two physicians, with the assistance of others, notably Edgar Swank. BACS (which later changed its name to the American Cryonics Society) is now the cryonics organization with the longest continuous history in offering cryonics services. In 1972 Trans Time was founded as a for-profit perfusion service-provider for BACS. Both BACS and Alcor intended to store patients in New York, but in 1974 Trans Time was forced to create its own cryostorage facility due to the closure of the storage facility in New York. Until the 1980s all BACS and Alcor patients were stored in liquid nitrogen at Trans Time.

In 1977 Trans Time was contacted by a UCLA cardiothoracic surgeon and medical researcher named Jerry Leaf, who responded to an advertisement Trans Time had placed in REASON magazine. In 1978 Leaf created a company called Cryovita devoted to doing cryonics research and to providing perfusion services for both Alcor and Trans Time.

By the 1980s acrimony between Trans Time and BACS caused the organizations to disassociate. BACS was renamed the American Cryonics Society (ACS) in 1985. Jim Yount (who joined BACS in 1972 and became a Governor two years later) and Edgar Swank have been the principal activists in ACS into the 21st century.

For 26 years from the time of its inception until 1998 the President of Trans Time was Art Quaife. The name “Trans Time” was inspired by Trans World Airlines, which was then a very prominent airline. Also active in Trans Time was Paul Segall, a man who had been an active member of the Cryonics Society of New York. Segall obtained a PhD from the University of California at Berkeley, studying the life-extending effects of tryptophan deprivation. He wrote a book on life extension (which included a section on cryonics) entitled LIVING LONGER, GROWING YOUNGER. He founded a biotech company called BioTime, which sells blood replacement products. In 2003 Segall deanimated due to an aortic hemorrhage. He was straight-frozen because his Trans Time associates didn’t think he could be perfused. The only other cryonics patients at Trans Time are two brains, one of them the brain of Luna Wilson, the murdered teenage daughter of Robert Anton Wilson. When Michael West (who is on the Alcor Scientific Advisory Board) became BioTime CEO, the company shifted its emphasis to stem cells.

Aside from Trans Time, the other four cryonics organizations in the world which are storing human patients in liquid nitrogen are the Alcor Life Extension Foundation (founded in 1972 by Fred and Linda Chamberlain), the Cryonics Institute (founded in 1976 by Robert Ettinger), KrioRus (located near Moscow in Russia, founded in 2006), and Oregon Cryonics (incorporated by former CI Director Jordan Sparks, and beginning service in May 2014).

Fred and Linda Chamberlain had been extremely active in the Cryonics Society of California until 1971 when they became distrustful of Robert Nelson because of (among other reasons) Nelson’s refusal to allow them to see where the organization’s patients were being stored. In 1972 the Chamberlains founded Alcor, named after a star in the Big Dipper used in ancient times as a test of visual acuity. Alcor’s first cryonics patient was Fred Chamberlain’s father who, in 1976, became the world’s first “neuro” (head-only) cryonics patient. (Two-thirds of Alcor patients are currently “neuros”). Trans Time provided cryostorage for Alcor until Alcor acquired its own storage capability in 1982.

After 1976 the Chamberlains encouraged others to run Alcor, beginning with a Los Angeles physician, who became Alcor President. The Chamberlains moved to Lake Tahoe, Nevada where they engaged in rental as well as property management and held annual Life Extension Festivals until 1986. They had to pay hefty legal fees to avoid being dragged into the Chatsworth lawsuits, a fact that increased their dislike of Robert Nelson. In 1997 they returned to Alcor when Fred became President and Linda was placed in charge of delivering cryonics service. Fred and Linda started two companies (Cells4Life and BioTransport) associated with Alcor, assuming responsibility for all unsecured debt of those companies. Financial disaster and an acrimonious dispute with Alcor management led to Fred and Linda leaving Alcor in 2001, filing for bankruptcy and temporarily joining the Cryonics Institute. They returned to Alcor in 2011, and Fred became an Alcor patient in 2012.

Saul Kent, one of the founders of the Cryonics Society of New York, became one of Alcor’s strongest supporters. He was a close associate of Pearson & Shaw, authors of the 1982 best-selling book LIFE EXTENSION. Pearson & Shaw were flooded with mail as a result of their many media appearances, and they gave the mail to Saul Kent. Kent used that mail to create a mailing list for a new mail-order business he created for selling supplements: the Life Extension Foundation (LEF). Millions of dollars earned from LEF have not only helped build Alcor, but have created and supported a company doing cryobiological research (21st Century Medicine), a company doing anti-ischemia research (Critical Care Research), and a company developing the means to apply the research to standby and transport cryonics procedures (Suspended Animation, Inc.).

In December 1987 Kent brought his terminally ill mother (Dora Kent) into the Alcor facility where she deanimated. The body (without the head) was given to the local coroner (Dora Kent was a “neuro”). The coroner issued a death certificate which gave death as due to natural causes. Barbiturate had been given to Dora Kent after legal death to slow brain metabolism. The coroner’s office did not understand that circulation was artificially restarted after legal death, which distributed the barbiturate throughout the body.

After the autopsy, the coroner’s office changed the cause of death on the death certificate to homicide. In January 1988 Alcor was raided by coroner’s deputies, a SWAT team, and UCLA police. The Alcor staff was taken to the police station in handcuffs and the Alcor facility was ransacked, with computers and records being seized. The coroner’s office wanted to seize Dora Kent’s head for autopsy, but the head had been removed from the Alcor facility and taken to a location that was never disclosed. Alcor later sued for false arrest and for illegal seizures, winning both court cases. (See Dora Kent: Questions and Answers)

Growth in Alcor membership was fairly slow and linear until the mid-1980s, following which there was a sharp increase in growth. Ironically, publicity surrounding the Dora Kent case is often cited as one of the reasons for the growth acceleration. Another reason often cited is the 1986 publication of ENGINES OF CREATION, a seminal book about nanotechnology which contained an entire chapter devoted to cryonics (the possibility that nanomachines could repair freezing damage). Hypothermic dog experiments associated with cryonics were also publicized in the mid-1980s. In the late 1980s Alcor Member Dick Clair, who was dying of AIDS, fought in court for the legal right to practice cryonics in California (a battle that was ultimately won). But the Cryonics Institute did not experience a growth spurt until the advent of the internet in the 1990s. The American Cryonics Society does not publish membership statistics.

Robert Ettinger, Saul Kent and Mike Darwin are arguably the three individuals who had the most powerful impact on the early history of cryonics. Having experimented with the effects of cold on organisms from the time he was a child, Darwin learned of cryonics at the Indiana State Science Fair in 1968. He was able to spend summers at the Cryonics Society of New York (living with Curtis Henderson). Darwin was given the responsibility of perfusing cryonics patients at the age of 17 in recognition of his technical skills.

Born “Michael Federowicz”, Mike chose to use his high school nickname “Darwin” as a cryonics surname when he began his career as a kidney dialysis technician. He had been given his nickname as a result of being known at school for arguing for evolution, against creationism. He is widely known in cryonics as “Mike Darwin”, although his legal surname remains Federowicz.

Not long after Alcor was founded, Darwin moved to California at the invitation of Fred and Linda Chamberlain. He spent a year as the world’s first full-time dedicated cryonics researcher until funding ran out. Returning to Indiana, Darwin (along with Steve Bridge) created a new cryonics organization that accumulated considerable equipment and technical capability.

In 1981 Darwin moved back to California, largely because of his desire to work with Jerry Leaf. In 1982 the Indiana organization merged with Alcor, and in 1983 Darwin was made President of Alcor. In California Darwin, Leaf and biochemist Hugh Hixon (who has considerable engineering skill) developed a blood substitute capable of sustaining life in dogs for at least 4 hours at or below 9°C. Leaf and Darwin had some nasty confrontations with members of the Society for Cryobiology over that organization’s 1985 refusal to publish their research. The Society for Cryobiology adopted a bylaw that prohibited cryonicists from belonging to the organization. Mike Darwin later wrote a summary of the conflicts between cryonicists and cryobiologists under the title Cold War. Similar experiments were done by Paul Segall and his associates, which generated a great deal of favorable media exposure for cryonics.

In 1988 Carlos Mondragon replaced Mike Darwin as Alcor President because Mondragon proved to be more capable of handling the stresses of the Dora Kent case. Darwin had vast medical knowledge (especially as it applies to cryonics) and possessed exceptional technical skills. He was a prolific and lucid writer; much of the material in the Alcor website library was written by Mike Darwin. Darwin worked as Alcor’s Research Director from 1988 to 1992, during which time he developed a Transport Technician course in which he trained Alcor Members in the technical skills required to deliver the initial phases of cryonics service.

For undisclosed reasons, Darwin left Alcor in 1992, much to the distress of many Alcor Members who regarded Mike Darwin as by far the person in the world most capable of delivering competent cryonics technical service. In 1993 a new cryonics organization called CryoCare Foundation was created, largely so that people could benefit from Darwin’s technical skills. Another strongly disputed matter was the proposed move of Alcor from California to Arizona (implemented in February 1994).

About 50 Alcor Members left Alcor to join and form CryoCare. Darwin delivered standby, transport and perfusion services as a subcontractor to CryoCare and the American Cryonics Society (ACS), while Paul Wakfer provided cryostorage services to both organizations under contract. Darwin’s company was called BioPreservation and Wakfer’s company was called CryoSpan. Eventually, serious personality conflicts developed between Darwin and Wakfer. In 1999 Darwin stopped providing service to CryoCare and Wakfer turned CryoSpan over to Saul Kent. Kent then refused to accept additional cryonics patients at CryoSpan, and was determined to wind down CryoSpan in a way that would not harm the cryonics patients being stored there.

I (Ben Best) had been CryoCare Secretary, and became President of CryoCare in 1999 in an attempt to arrange alternate service providers for CryoCare. The Cryonics Institute agreed to provide cryostorage. Various contractors were found to provide the other services, but eventually CryoCare could not be sustained. In 2003 I became President of the Cryonics Institute. I assisted with the moving of CryoSpan’s two CryoCare patients to Alcor and CryoSpan’s ten ACS patients to the Cryonics Institute. In 2012 I resigned as President of the Cryonics Institute, and began working for the Life Extension Foundation. Dennis Kowalski became the new CI President.

Mike Darwin continued to work as a researcher at Saul Kent’s company Critical Care Research (CCR) until 2001. Darwin’s most notable accomplishment at CCR was his role in developing methods to sustain dogs without neurological damage following 17 minutes of warm ischemia. Undisclosed conflicts with CCR management caused Darwin to leave CCR in 2001. He worked briefly with Alcor and Suspended Animation, and later did consulting work for the Cryonics Institute. But for the most part Darwin has been distanced from cryonics organizations.

The history of the Cryonics Institute (CI) has been less tumultuous than that of Alcor. CI has had primarily two Presidents: Robert Ettinger from April 1976 to September 2003, and Ben Best from then until June 2012. (Andrea Foote was briefly President in 1994, but soon became ill with ovarian cancer.) Robert Ettinger decided to build fiberglass cryostats rather than buy dewars because CI’s Detroit facility was too small for dewars. Robert Ettinger’s mother became the first patient of the Cryonics Institute when she deanimated in 1977. She was placed in dry ice for about ten years until CI began using liquid nitrogen in 1987 (the same year that Robert Ettinger’s first wife became CI’s second patient). In 1994 CI acquired the Erfurt-Runkel Building in Clinton Township (a suburb northeast of Detroit) for about $300,000, roughly the same amount of money as had been bequeathed to CI by CI Member Jack Erfurt (who had deanimated in 1992). Erfurt’s wife (Andrea Foote, who deanimated in 1995) also bequeathed $300,000 to CI. Andy Zawacki, nephew of Connie Ettinger (wife of Robert Ettinger’s son David), built a ten-person cryostat in the new facility. Fourteen patients were moved from the old Detroit facility to the new Cryonics Institute facility. Andy Zawacki is a man of many talents. He has been a CI employee since January 1985 (when he was 19 years old), handling office work (mostly Member sign-ups and contracts), building maintenance and equipment fabrication, but also patient perfusion and cool-down.

Throughout most of the history of cryonics, glycerol has been the cryoprotectant used to perfuse cryonics patients. Glycerol reduces, but does not eliminate, ice formation. In the late 1990s research conducted at 21st Century Medicine and at UCLA under the direction of 21st Century Medicine confirmed that ice formation in brain tissue could be completely eliminated by a judiciously chosen vitrification mixture of cryoprotectants. In 2001 Alcor began vitrification perfusion of cryonics patients with a cryoprotectant mixture called B2C, and not long thereafter adopted a better mixture called M22. At the Cryonics Institute a vitrification mixture called CI-VM-1 was developed by CI staff cryobiologist Dr. Yuri Pichugin (who was employed at CI from 2001 to 2007). The first CI cryonics patient was vitrified in 2005.

In 2002 Alcor cryopreserved baseball legend Ted Williams. Two of the Williams children attested that their father wanted to be cryopreserved, but a third child protested bitterly. Journalists at Sports Illustrated wrote a sensationalistic exposé of Alcor based on information supplied to them by Alcor employee Larry Johnson, who had surreptitiously tape-recorded many conversations in the facility. The ensuing media circus led to some nasty moves by politicians to incapacitate cryonics organizations. In Arizona, state representative Bob Stump attempted to put Alcor under the control of the Funeral Board. The Arizona Funeral Board Director told the New York Times, “These companies need to be regulated or deregulated out of business”. Alcor fought hard, and in 2004 the legislation was withdrawn. Alcor hired a full-time lobbyist to look after its interests in the Arizona legislature. Although the Cryonics Institute had not been involved in the Ted Williams case, the State of Michigan placed the organization under a “Cease and Desist” order for six months, ultimately classifying and regulating the Cryonics Institute as a cemetery in 2004. In the spirit of de-regulation, the new Republican Michigan government removed the cemetery designation for CI in 2012.

In 2002 Suspended Animation, Inc. (SA) was created to do research on improved delivery of cryonics services, and to provide those services to other cryonics organizations. In 2003 SA perfused a cryonics patient for the American Cryonics Society, and the patient was stored at the Cryonics Institute. Alcor has long offered standby and transport services to its Members as an integral part of Membership, but the Cryonics Institute (CI) had not done so. In 2005 the CI Board of Directors approved contracts with SA which would allow CI Members the option of receiving SA standby and transport if they so chose. Several years later, all Alcor standby cases in the continental United States outside of Arizona were handled by SA, and SA COO Catherine Baldwin became an Alcor Director. Alcor has continued to do standby and stabilization in Arizona. Any Alcor Member who is diagnosed as being terminally ill with a prognosis of less than 90 days of life will be reimbursed $10,000 for moving to a hospice in the Phoenix, Arizona area. By 2014, over 160 of the roughly 550 CI Members who had arrangements for cryopreservation services from CI had opted to also have Standby, Stabilization and Transport (SST) from SA.

A Norwegian ACS Member named Trygve Bauge brought his deceased grandfather to the United States and stored the body at Trans Time from 1990 to 1993. Bauge then transported his grandfather to Nederland, Colorado in dry ice with the intention of starting his own cryonics company. But Bauge was deported back to Norway and the story of his grandfather created a media circus. The town outlawed cryonics, but had to “grandfather the grandfather” who has remained there on dry ice. After a “cooling-off period” locals turned the publicity to their advantage by creating an annual Frozen Dead Guy Days festival which features coffin races, snow sculptures, etc. Many cryonicists insist that dry ice is not cold enough for long-term cryopreservation and that the Nederland festival is negative publicity for cryonics.

After several years of management turnover at Alcor, money was donated to find a lasting President. In January 2011, Max More was selected as the new President and CEO of Alcor. In July 2011 Robert Ettinger was cryopreserved at CI after a standby organized by his son and daughter-in-law. In July 2012 Ben Best ended his 9-year service as CI President and CEO by going to work for the Life Extension Foundation as Director of Research Oversight. The Life Extension Foundation is the major source of cryonics-related research funding, including funding for 21st Century Medicine, Suspended Animation, Inc., and Advanced Neural Biosciences, and funds many anti-aging research projects as well. Dennis Kowalski became the new CI President. Ben Best retired as CI Director in September 2014.

In January 2011 CI shipped its vitrification solution (CI-VM-1) to the United Kingdom so that European cryonics patients could be vitrified before shipping in dry ice to the United States. This procedure was applied to the wife of UK cryonicist Alan Sinclair in May 2013. In the summer of 2014 Alcor began offering this “field vitrification” service to its members in Canada and overseas.

In 2006 the first cryonics organization to offer cryonics services outside of the United States was created in Russia. KrioRus has a facility in a Moscow suburb where many cryonics patients are being stored in liquid nitrogen. In 2014 Oregon Cryonics (created by former CI Director Jordan Sparks) began providing low-cost neuro-only (head or brain) services for cryopreservation and chemical preservation.

(For details on the current status of the different cryonics organizations, see Comparing Procedures and Policies.)


Hedonistic Theories – Philosophy Home Page

Posted: September 18, 2016 at 8:14 am

Abstract: The refinement of hedonism as an ethical theory involves several surprising and important distinctions. Several counter-examples to hedonism are discussed.

I. Hedonistic theories are one possible answer to the question of “What is intrinsic goodness?”

Similar theories might substitute enjoyment, satisfaction, or happiness for pleasure. A major problem for hedonism is getting clear about what pleasure and pain consist of. Are pleasures events, properties, states, or some other kind of entity?

II. The hedonistic position can be substantially refined.

Some persons have mistakenly taken this distinction to mean that “Therefore, you can’t generalize about what actions should be done because they would differ for different people; hence, ethics is relative.”

Think about how this statement is logically related to C.L. Kleinke’s observation in his book Self-Perception that “What distinguishes emotions such as anger, fear, love, elation, anxiety, and disgust is not what is going on inside the body but rather what is happening in the outside environment.” (C.L. Kleinke, Self-Perception (San Francisco: W.H. Freeman, 1978), 2.)

III. The hedonist doesn’t seek pleasure constantly; a constant indulgence of appetites makes people miserable in the long run.

When hungry, seek food; when poor, seek money; when restless, seek physical activity. We don’t seek pleasure in these situations. As John Stuart Mill stated, “Those only are happy who have their minds fixed on some object other than their own happiness… Aiming thus at something else, they find happiness along the way.”

IV. John Hospers proposes three counter-examples to hedonism.

Recommended Sources

Hedonism: A discussion of hedonism from the Stanford Encyclopedia of Philosophy by Andrew Moore, with some emphasis on its relation to egoism and utilitarianism.

Hedonism: An outline of some basic concepts of hedonistic philosophy, with brief mention of Epicurus, Bentham, Mill, and Freud, from Wikipedia.


Clouds of Secrecy: The Army’s Germ Warfare Tests Over …

Posted: September 8, 2016 at 6:49 am


This book contains shocking but carefully documented details about germ warfare tests conducted by the U.S. Army in the 1960s. It is an eye opener about a range of Army experiments that exposed millions of Americans to various bacteria without their knowledge. The purpose supposedly was to see how vulnerable Americans would be to a germ attack. The book is clearly written and provides riveting descriptions of many of the tests. The most amazing thing about the tests was the number of American cities and their populations that were targeted. They included New York City, San Francisco, St. Louis and hundreds of other cities and towns. The germs were not true warfare agents like anthrax, but they apparently caused several people to become sick, some perhaps fatally. In the current climate of fear about terrorism, Clouds of Secrecy provides an invaluable reminder that secret government actions intended to protect the public may themselves create risks to its safety.


History of artificial intelligence – Wikipedia, the free …

Posted: August 30, 2016 at 11:03 pm

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with “an ancient wish to forge the gods.”

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The Turing test was proposed by British mathematician Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: “I propose to consider the question, ‘Can machines think?'” The term ‘Artificial Intelligence’ was created at a conference held at Dartmouth College in 1956.[2] Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created artificial intelligence field with the Logic Theory Machine (1956), and the General Problem Solver in 1957.[3] In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000.[4] John McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.[5]

In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British Governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars, but by the late 80s the investors became disillusioned and withdrew funding again.

McCorduck (2004) writes “artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized,” expressed in humanity’s myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion’s Galatea.[7] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān’s Takwin, Paracelsus’ homunculus and Rabbi Judah Loew’s Golem.[8] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots), and speculation, such as Samuel Butler’s “Darwin among the Machines.” AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[11] Hero of Alexandria,[12] Al-Jazari and Wolfgang von Kempelen.[14] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that “by discovering the true nature of the gods, man has been able to reproduce it.”[15][16]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical (or “formal”) reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), Muslim mathematician al-Khwārizmī (who developed algebra and gave his name to “algorithm”) and European scholastic philosophers such as William of Ockham and Duns Scotus.[17]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[18] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all the possible knowledge.[19] Llull’s work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[20]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[21] Hobbes famously wrote in Leviathan: “reason is nothing but reckoning”.[22] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that “there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, sit down to their slates, and say to each other (with a friend as witness, if they liked): Let us calculate.”[23] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole’s The Laws of Thought and Frege’s Begriffsschrift. Building on Frege’s system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica in 1913. Inspired by Russell’s success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: “can all of mathematical reasoning be formalized?”[17] His question was answered by Gödel’s incompleteness proof, Turing’s machine and Church’s Lambda calculus.[17][24] Their answer was surprising in two ways.

First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine: a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[17][26]
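As a rough illustration of what “shuffling symbols” means in practice, here is a minimal sketch of a Turing-machine simulator in Python. The transition table (a toy machine that appends a 1 to a block of 1s) is invented purely for illustration and corresponds to no historical machine.

```python
# A minimal Turing machine: a finite transition table that reads and writes
# symbols on an unbounded tape, one cell at a time. The example machine is a
# toy that appends a '1' to a block of 1s; it is purely illustrative.

def run_turing_machine(transitions, tape, state="start", blank="0", max_steps=1000):
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Scan right over 1s; on reaching a blank, write a 1 and halt.
append_one = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "0"): ("1", "R", "halt"),
}

print(run_turing_machine(append_one, "111"))   # -> "1111"
```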

Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”.[27] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)

The first modern computers were the massive code breaking machines of the Second World War (such as Z3, ENIAC and Colossus). The latter two of these machines were based on the theoretical foundation laid by Alan Turing[28] and developed by John von Neumann.[29]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 30s, 40s and early 50s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener’s cybernetics described control and stability in electrical networks. Claude Shannon’s information theory described digital signals (i.e., all-or-nothing signals). Alan Turing’s theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[30]

Examples of work in this vein include robots such as W. Grey Walter’s turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[31]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[32] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[33] Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.
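For a sense of what “performing simple logical functions” with idealized neurons looks like, here is a small Python sketch of a McCulloch-Pitts-style threshold unit; the weights and thresholds below are hand-chosen textbook examples, not taken from their paper.

```python
# A McCulloch-Pitts style unit: it "fires" (outputs 1) when the weighted sum
# of its binary inputs reaches a threshold. Suitable weights and thresholds
# make a single unit compute simple logical functions such as AND and OR.

def threshold_unit(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def logical_and(a, b):
    return threshold_unit([a, b], weights=[1, 1], threshold=2)

def logical_or(a, b):
    return threshold_unit([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", logical_and(a, b), "OR:", logical_or(a, b))
```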

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[34] He noted that “thinking” is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible and the paper answered all the most common objections to the proposition.[35] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[36] Arthur Samuel’s checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[37] Game AI would continue to be used as a measure of progress in AI throughout its history.

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[38]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the “Logic Theorist” (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica, and find new and more elegant proofs for some.[39] Simon said that they had “solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind.”[40] (This was an early statement of the philosophical position John Searle would later call “Strong AI”: that machines can contain minds just as human bodies do.)[41]

The Dartmouth Conference of 1956[42] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”.[43] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[44] At the conference Newell and Simon debuted the “Logic Theorist” and McCarthy persuaded the attendees to accept “Artificial Intelligence” as the name of the field.[45] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[46]

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply “astonishing”:[47] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such “intelligent” behavior by machines was possible at all.[48] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[49] Government agencies like ARPA poured money into the new field.[50]

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called “reasoning as search”.[51]

The principal difficulty was that, for many problems, the number of possible paths through the “maze” was simply astronomical (a situation known as a “combinatorial explosion”). Researchers would reduce the search space by using heuristics or “rules of thumb” that would eliminate those paths that were unlikely to lead to a solution.[52]
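The “reasoning as search” idea can be sketched in a few lines of Python. The maze graph and the optional heuristic below are invented for illustration; the point is only the step-by-step exploration with backtracking, and the use of a heuristic to order moves so that unpromising branches are deferred.

```python
# "Reasoning as search": advance one move at a time toward a goal,
# backtracking at dead ends. An optional heuristic orders the candidate
# moves, which is one simple way of taming the combinatorial explosion.

def search(state, goal, moves, heuristic=None, visited=None):
    if visited is None:
        visited = set()
    if state == goal:
        return [state]
    visited.add(state)
    candidates = [s for s in moves.get(state, []) if s not in visited]
    if heuristic is not None:
        candidates.sort(key=lambda s: heuristic(s, goal))   # most promising first
    for nxt in candidates:
        path = search(nxt, goal, moves, heuristic, visited)
        if path is not None:          # success somewhere below this move
            return [state] + path
    return None                       # dead end: backtrack

maze = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["G"]}
print(search("A", "G", maze))         # -> ['A', 'C', 'E', 'G']
```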

Newell and Simon tried to capture a general version of this algorithm in a program called the “General Problem Solver”.[53] Other “searching” programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter’s Geometry Theorem Prover (1958) and SAINT, written by Minsky’s student James Slagle (1961).[54] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[55]

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow’s program STUDENT, which could solve high school algebra word problems.[56]

A semantic net represents concepts (e.g. “house”, “door”) as nodes and relations among concepts (e.g. “has-a”) as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[57] and the most successful (and controversial) version was Roger Schank’s Conceptual dependency theory.[58]
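A semantic net reduces to a very small data structure; the Python sketch below, with invented “house”/“door” facts, shows the idea of nodes connected by labelled relations.

```python
# A bare-bones semantic net: concepts are nodes, labelled relations are
# directed links between them. The facts are invented examples.

from collections import defaultdict

links = defaultdict(list)              # concept -> list of (relation, concept)

def add_link(subject, relation, obj):
    links[subject].append((relation, obj))

add_link("house", "has-a", "door")
add_link("door", "has-a", "handle")
add_link("house", "is-a", "building")

def related(concept, relation):
    """All concepts reachable from `concept` by one link of the given relation."""
    return [obj for rel, obj in links[concept] if rel == relation]

print(related("house", "has-a"))       # -> ['door']
print(related("house", "is-a"))        # -> ['building']
```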

Joseph Weizenbaum’s ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[59]
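The mechanism behind ELIZA was essentially pattern matching plus canned templates. The drastically simplified Python sketch below uses invented patterns; the real program had a much larger script of ranked rules and pronoun-swapping transformations.

```python
# An ELIZA-flavored toy: match a keyword pattern, then echo part of the
# user's wording back inside a canned template. The patterns are invented.

import re

rules = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r".*"),                "Please tell me more."),
]

def respond(text):
    for pattern, template in rules:
        match = pattern.match(text)
        if match:
            return template.format(*match.groups())

print(respond("I am feeling anxious"))   # -> How long have you been feeling anxious?
print(respond("The weather is nice"))    # -> Please tell me more.
```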

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a “blocks world,” which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[60]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented “constraint propagation”), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd’s SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[61]

The first generation of AI researchers made confident predictions about their work, anticipating that fully intelligent machines were no more than a generation away.

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the “AI Group” founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[66] DARPA made similar grants to Newell and Simon’s program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[67] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[68] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[69]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should “fund people, not projects!” and allowed researchers to pursue whatever directions might interest them.[70] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[71] but this “hands off” approach would not last.

In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[72] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky’s devastating criticism of perceptrons.[73] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[74]

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, “toys”.[75] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[76]

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[84] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its “grandiose objectives” and led to the dismantling of AI research in that country.[85] (The report specifically mentioned the combinatorial explosion problem as a reason for AI’s failings.)[86] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[87] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. “Many researchers were caught up in a web of increasing exaggeration.”[88] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund “mission-oriented direct research, rather than basic undirected research”. Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[89]

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel’s incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[90] Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little “symbol processing” and a great deal of embodied, instinctive, unconscious “know how”.[91][92] John Searle’s Chinese Room argument, presented in 1980, attempted to show that a program could not be said to “understand” the symbols that it uses (a quality called “intentionality”). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as “thinking”.[93]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference “know how” or “intentionality” made to an actual computer program. Minsky said of Dreyfus and Searle “they misunderstand, and should be ignored.”[94] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers “dared not be seen having lunch with me.”[95] Joseph Weizenbaum, the author of ELIZA, felt his colleagues’ treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus’ positions, he “deliberately made it plain that theirs was not the way to treat a human being.”[96]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[97]

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that “perceptron may eventually be able to learn, make decisions, and translate languages.” An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert’s 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt’s predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[73]
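A perceptron itself is only a few lines of code. The sketch below trains a single unit with Rosenblatt-style error correction on an invented OR-gate data set; because that data set is linearly separable the rule converges, whereas problems that are not linearly separable (XOR being the classic case) are the sort of limitation Minsky and Papert highlighted.

```python
# A single perceptron trained with an error-correction rule: nudge the
# weights whenever the predicted class is wrong. The OR-gate data set is an
# invented example and is linearly separable, so training converges.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(data, epochs=10, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(weights, bias, x)     # -1, 0 or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

or_gate = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(or_gate)
print([predict(w, b, x) for x, _ in or_gate])   # -> [0, 1, 1, 1]
```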

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[98] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[99] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to a collaboration with French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[100] Prolog uses a subset of logic (Horn clauses, closely related to “rules” and “production rules”) that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum’s expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[101]
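The flavor of rule-based, Horn-clause reasoning can be conveyed with a tiny backward-chaining prover. The Python sketch below is propositional only (it omits the variables and unification that make Prolog genuinely powerful), and the knowledge base is an invented example.

```python
# Backward chaining over propositional Horn clauses: a goal is provable if
# some rule for it has a body whose subgoals are all provable. A fact is a
# rule with an empty body. The knowledge base is an invented example.

rules = {
    "mortal(socrates)": [["man(socrates)"]],   # head -> list of alternative bodies
    "man(socrates)":    [[]],                  # a fact
}

def prove(goal):
    for body in rules.get(goal, []):
        if all(prove(subgoal) for subgoal in body):
            return True
    return False

print(prove("mortal(socrates)"))   # -> True
print(prove("mortal(zeus)"))       # -> False
```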

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[102] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problemsnot machines that think as people do.[103]

Among the critics of McCarthy’s approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like “story understanding” and “object recognition” that required a machine to think like a person. In order to use ordinary concepts like “chair” or “restaurant” they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that “using precise language to describe essentially imprecise concepts doesn’t make them any more precise.”[104] Schank described their “anti-logic” approaches as “scruffy”, as opposed to the “neat” paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[105]

In 1975, in a seminal paper, Minsky noted that many of his fellow “scruffy” researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be “logical”, but these structured sets of assumptions are part of the context of everything we say and think. He called these structures “frames”. Schank used a version of frames he called “scripts” to successfully answer questions about short stories in English.[106] Many years later object-oriented programming would adopt the essential idea of “inheritance” from AI research on frames.
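The frame idea, and the “inheritance” that object-oriented programming later adopted, can be sketched with a small Python class. The bird/penguin slots below are the usual invented textbook example of default assumptions that can be overridden.

```python
# A miniature frame system: each frame holds default slot values and can
# inherit unspecified slots from a parent frame, overriding where needed.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        """Look up a slot locally, falling back to the parent's defaults."""
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

bird = Frame("bird", can_fly=True, eats="worms")
penguin = Frame("penguin", parent=bird, can_fly=False)   # override one default

print(penguin.get("can_fly"))   # -> False (overridden)
print(penguin.get("eats"))      # -> worms (inherited default assumption)
```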

In the 1980s a form of AI program called “expert systems” was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[107]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[108]

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[109] Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[110]

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. “AI researchers were beginning to suspect (reluctantly, for it violated the scientific canon of parsimony) that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,”[111] writes Pamela McCorduck. “[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay”.[112] Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s.[113]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[114]

Chess playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed at Carnegie Mellon University; Deep Thought’s development paved the way for Deep Blue.[115]

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[116] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[117]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or “MCC”) to fund large scale projects in AI and information technology.[118][119] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[120]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a “Hopfield net”) could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called “backpropagation” (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[119][121]
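
A rough sketch of the Hopfield idea, assuming NumPy and a single stored pattern (the numbers are arbitrary), shows how a network of symmetric weights can recall a memory from a corrupted cue:

    # A minimal Hopfield net: store one +/-1 pattern with a Hebbian outer product,
    # then recall it by repeatedly setting each unit to the sign of its input.
    import numpy as np

    pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
    W = np.outer(pattern, pattern).astype(float)  # Hebbian weights
    np.fill_diagonal(W, 0)                        # no self-connections

    state = pattern.copy()
    state[:3] *= -1                               # corrupt three units

    for _ in range(5):                            # synchronous sign updates
        state = np.where(W @ state >= 0, 1, -1)

    print(np.array_equal(state, pattern))         # True: the pattern is recovered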

The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[119][122]

The business community’s fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field itself continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

The term “AI winter” was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[123] Their fears were well founded: in the late 80s and early 90s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[124]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were “brittle” (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[125]

In the late 80s, the Strategic Computing Initiative cut funding to AI “deeply and brutally.” New leadership at DARPA had decided that AI was not “the next wave” and directed funds towards projects that seemed more likely to produce immediate results.[126]

By 1991, the impressive list of goals penned in 1981 for Japan’s Fifth Generation Project had not been met. Indeed, some of them, like “carry on a casual conversation” had not been met by 2010.[127] As with other AI projects, expectations had run much higher than what was actually possible.[127]

In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[128] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec’s paradox). They advocated building intelligence “from the bottom up.”[129]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy’s logic and Minsky’s frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr’s work would be cut short by leukemia in 1980.)[130]

In a 1990 paper, “Elephants Don’t Play Chess,”[131] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since “the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.”[132] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[133]

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI’s failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of “artificial intelligence”.[134] AI was both more cautious and more successful than it had ever been.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[135] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[136]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[137] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[138] In February 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[139]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of computers today.[140] In fact, Deep Blue’s computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[141] This dramatic increase is measured by Moore’s law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of “raw computer power” was slowly being overcome.
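
The arithmetic behind that claim can be checked directly, assuming one doubling every two years as stated above:

    # Quick check: a 10-million-fold speedup at one doubling every two years.
    import math

    speedup = 10_000_000                 # Deep Blue (1997) vs. Ferranti Mark 1 (1951)
    doublings = math.log2(speedup)       # about 23.3 doublings
    years = doublings * 2                # about 46.5 years at two years per doubling

    print(round(doublings, 1), round(years, 1), 1997 - 1951)  # 23.3 46.5 46

The 46 years between 1951 and 1997 line up almost exactly with the roughly 23 doublings that a 10-million-fold speedup implies.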

A new paradigm called “intelligent agents” became widely accepted during the 90s.[142] Although earlier researchers had proposed modular “divide and conquer” approaches to AI,[143] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI.[144] When the economist’s definition of a rational agent was married to computer science’s definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are “intelligent agents”, as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as “the study of intelligent agents”. This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[145]
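
In code, the abstraction is very small; a bare-bones rendering (the actions, percepts and scoring function below are invented for illustration) might be:

    # A minimal "intelligent agent": perceive the environment, then pick the
    # action with the highest expected payoff. Everything here is illustrative.
    class SimpleAgent:
        def __init__(self, actions, evaluate):
            self.actions = actions
            self.evaluate = evaluate   # maps (percept, action) -> expected payoff

        def act(self, percept):
            return max(self.actions, key=lambda a: self.evaluate(percept, a))

    agent = SimpleAgent(
        ["move_left", "move_right"],
        lambda percept, a: percept["reward_right"] if a == "move_right"
                           else 1 - percept["reward_right"],
    )
    print(agent.act({"reward_right": 0.8}))   # 'move_right'

Anything that fits this perceive-then-act shape, from a thermostat to a firm, counts as an agent under the definition above; the paradigm's value is that it says nothing about how the evaluation is done.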

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell’s SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[144][146]

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[147] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous “scientific” discipline. Russell & Norvig (2003) describe this as nothing less than a “revolution” and “the victory of the neats”.[148][149]

Judea Pearl’s highly influential 1988 book[150] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for “computational intelligence” paradigms like neural networks and evolutionary algorithms.[148]
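
As a small illustration of this probabilistic style (a two-variable example with made-up numbers, not taken from Pearl's book), Bayes' rule turns a prior and two conditional probabilities into a posterior:

    # Bayes' rule on a tiny disease/test example; all probabilities are invented.
    p_disease = 0.01           # prior P(D)
    p_pos_given_d = 0.95       # P(positive | D)
    p_pos_given_not_d = 0.05   # P(positive | not D)

    p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
    posterior = p_pos_given_d * p_disease / p_pos

    print(round(posterior, 3))  # 0.161: a positive test still leaves D unlikely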

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems[151] and their solutions proved to be useful throughout the technology industry,[152] in areas such as data mining, industrial robotics, logistics,[153] speech recognition,[154] banking software,[155] medical diagnosis[155] and Google’s search engine.[156]

The field of AI receives little or no credit for these successes. Many of AI’s greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[157] Nick Bostrom explains: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”[158]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research, as the New York Times reported in 2005: “Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”[159][160][161]

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[162]

Marvin Minsky asks “So the question is why didn’t we get HAL in 2001?”[163] Minsky believes that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blames the qualification problem.[164] For Ray Kurzweil, the issue is computer power and, using Moore’s Law, he predicts that machines with human-level intelligence will appear by 2029.[165] Jeff Hawkins argues that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[166] There are many other explanations and for each there is a corresponding research program underway.


Go here to read the rest:

History of artificial intelligence – Wikipedia, the free …

Posted in Ai | Comments Off on History of artificial intelligence – Wikipedia, the free …

Freedom in the 50 States 2015-2016 | Overall Freedom …

Posted: August 25, 2016 at 4:35 pm

William P. Ruger

William P. Ruger is Vice President of Policy and Research at the Charles Koch Institute and Charles Koch Foundation. Ruger is the author of the biography Milton Friedman and a coauthor of The State of Texas: Government, Politics, and Policy. His work has been published in International Studies Quarterly, State Politics and Policy Quarterly, Armed Forces and Society, and other outlets. Ruger earned an AB from the College of William and Mary and a PhD in politics from Brandeis University. He is a veteran of the war in Afghanistan.

Jason Sorens is Lecturer in the Department of Government at Dartmouth College. His primary research interests include fiscal federalism, public policy in federal systems, secessionism, and ethnic politics. His work has been published in International Studies Quarterly, Comparative Political Studies, Journal of Peace Research, State Politics and Policy Quarterly, and other academic journals, and his book Secessionism: Identity, Interest, and Strategy was published by McGill-Queen’s University Press in 2012. Sorens received his BA in economics and philosophy, with honors, from Washington and Lee University and his PhD in political science from Yale University.

Excerpt from:

Freedom in the 50 States 2015-2016 | Overall Freedom …

Posted in Fiscal Freedom | Comments Off on Freedom in the 50 States 2015-2016 | Overall Freedom …

Minerva – MicroWiki – Wikia

Posted: August 23, 2016 at 9:32 am

The Republic of Minerva was one of the few modern attempts at creating a sovereign micronation on the reclaimed land of an artificial island in 1972.


It is not known when the reefs were first discovered, but they had been marked on charts as “Nicholson’s Shoal” since the late 1820s. Capt. H. M. Denham of HMS Herald surveyed the reefs in 1854 and renamed them after the Australian whaler Minerva, which collided with South Minerva Reef on 9 September 1829.

In 1971, barges loaded with sand arrived from Australia, bringing the reef level above the water and allowing construction of a small tower and flag. The Republic of Minerva issued a declaration of independence on 19 January 1972, in letters to neighboring countries and even created their own currency. In February 1972, Morris C. Davis was elected as Provisional President of the Republic of Minerva.

The declaration of independence, however, was greeted with great suspicion by other countries in the area. A conference of the neighboring countries (Australia, New Zealand, Tonga, Fiji, Nauru, Samoa, and the territory of the Cook Islands) met on 24 February 1972, at which Tonga made a claim over the Minerva Reefs and the rest of the states recognized its claim.

On 15 June 1972, the following proclamation was published in a Tongan government gazette:

A Tongan expedition was sent to enforce the claim. Tonga’s claim was recognized by the South Pacific Forum in September 1972. Meanwhile, Provisional President Davis was fired by founder Michael Oliver and the project collapsed in confusion. Nevertheless, Minerva was referred to in O. T. Nelson’s post-apocalyptic children’s novel The Girl Who Owned a City, published in 1975, as an example of an invented utopia that the book’s protagonists could try to emulate.

In 1982, a group of Americans led again by Morris C. “Bud” Davis tried to occupy the reefs, but were forced off by Tongan troops after three weeks.

In recent years several groups have allegedly sought to re-establish Minerva. No claimant group has to date made any attempt to take possession of the Minerva Reefs territory.

In November 2005, Fiji lodged a complaint with the International Seabed Authority concerning territorial claim over Minerva.

Tonga has lodged a counter claim. The Minerva “principality” group also claims to have lodged a counter claim.

Minerva Reefs

Area: North Reef diameter about 5.6 km, South Reef diameter about 4.8 km. Cities: Port Victoria (capital). Terrain: two island atolls, mainly raised coral complexes on dormant volcanic islands.

Both Minerva Reefs are about 435 km southwest of the Tongatapu Group. The atolls are on a common submarine platform from 549 to 1097 meters (1,800 to 3,600 feet) below the surface of the sea. Cardea is circular in shape and has a diameter of about 5.6 km. There is a small island around the atoll, with a small entrance into the flat lagoon with a somewhat deep harbor. Aurora is parted into the East Reef and the West Reef, both circular with a diameter of about 4.8 km. Around both atolls are two small sandy cays, vegetated by low scrub and some trees. Several iron towers and platforms are reported to stand near the atolls, along with an unused light tower on Aurora, erected by the Americans during World War II. Geologically the Minervan islands are of a limestone base formed from uplifted coral formations elevated by now-dormant volcanic activity.

The climate is basically subtropical with a distinct warm period (December to April), during which temperatures rise above 32°C (90°F), and a cooler period (May to November), with temperatures rarely rising above 27°C (80°F). The temperature increases from 23°C to 27°C (74°F to 80°F), and the annual rainfall is from 170 to 297 centimeters (67 to 117 in.) as one moves from Cardea in the south to the more northerly islands closer to the Equator. The mean daily humidity is 80%.

Originally posted here:

Minerva – MicroWiki – Wikia

Posted in Minerva Reefs | Comments Off on Minerva – MicroWiki – Wikia

Historical Satanism – dpjs.co.uk/historical.html

Posted: August 19, 2016 at 4:10 am

Before Anton LaVey compiled the philosophy of Satanism and founded the Church of Satan in 1966, who upheld its values? It is always debated whether these people were or were not Satanists and what they would have thought of Satanism had it existed during their lives. In The Satanic Bible, Book of Lucifer 12, LaVey name-drops many of these groups and mentions many specific people, times and dates. I do not want to quote it all here, so if you’re interested in more of the specifics, buy the damned book from Amazon already. These are the unwitting potential predecessors of Satanism.

The Satanic Bible opens with a few references to groups that are associated with historical Satanism.

In eighteenth-century England a Hell-Fire Club, with connections to the American colonies through Benjamin Franklin, gained some brief notoriety. During the early part of the twentieth century, the press publicized Aleister Crowley as the “wickedest man in the world”. And there were hints in the 1920s and ’30s of a “black order” in Germany.

To this seemingly old story LaVey and his organization of contemporary Faustians offered two strikingly new chapters. First, they blasphemously represented themselves as a “church”, a term previously confined to the branches of Christianity, instead of the traditional coven of Satanism and witchcraft lore. Second, they practiced their black magic openly instead of underground. […]

[Anton LaVey] had accumulated a library of works that described the Black Mass and other infamous ceremonies conducted by groups such as the Knights Templar in fourteenth-century France, the Hell-Fire club and the Golden Dawn in eighteenth- and nineteenth-century England.

Burton Wolfe’s introduction to “The Satanic Bible” by Anton LaVey (1969)

This page looks at some groups, some individuals, but is nowhere near a comprehensive look at the subject, just a small window into which you might see some of the rich, convoluted history of the dark, murky development of the philosophies that support Satanism.

There is a saying that history is written by the winners. The victors of a war are the ones who get to write the school books: they write that the defeated are always the enemy of mankind, the evil ones, the monsters. The victors are always fighting desperately for just causes. This trend is historically important in Satanism. As one religion takes over the ground and the demographics of a losing religion, the loser has its gods demonized and its holy places reclaimed. For example the Vatican was housed on an old Mithraist temple, and Gaelic spirits became monsters as Christianity brutalized Europe with its religious propaganda.

There are groups, therefore, that were wiped out by the Christians. The Spanish Inquisition forced, under duress and torture, many confessions out of its victims, confessions of every kind of devil worship. Likewise its larger wars against Muslims, science, freethought, etc, were all done under the guise of fighting against the devil. In cases where their victims left no records of their own we will never know what their true beliefs were. So the legacy of Christian violence has left us with many associations between various people and Devil Worship, and we know that most of these accounts are wrong and barbaric, and that the truth in them is grotesquely distorted.

We know now that most of the Christian Churches’ previous campaigns were unjustified. Various groups and individuals through history have come to be called Satanists. Such claims are nearly always a result of rumours, mass paranoia and slanderous libel. The dark-age victims of this kind of Christian paranoia were largely not actually Satanists, but merely those who didn’t believe what the orthodox Church wanted them to believe. Thus, history can be misleading, especially when you rely on the religious views of one group who are clearly biased against competing beliefs!

The Knights Templar were founded in 1118 in the growing shadow of the Dark Ages. They were the most powerful military religious order of the Middle Ages. They built Europe’s most impressive ancient Cathedrals and were the bankers “for practically every throne in Europe”1. Some historians trace the history of all globalised multinationals to the banking practices of the Knights Templar2. They had a strong presence in multiple countries: Portugal, England, Spain, Scotland, Africa (e.g. Ethiopia) and France. They were rich and powerful, with members in royal families and the highest places, including kings. King John II of Portugal was once Grand Master of the Order. They explored the oceans, built roads and trade routes and policed them, created the first banking system, sanctioned castles, built glorious buildings, and had adequate forces to protect their prized holy places and objects. Their fleet was world-faring, and their masterly knightly battle skills were invaluable to any who could befriend them or afford their mercenary services.

The Knights Templar fell into disrepute with the powerful Catholic Church and the French kingdom, and the Catholics ran a long campaign against them, accusing them of devil worship, immorality and subversion, and of practicing magic and every kind of occult art. The organisation was finally destroyed and its members burned from 1310. Nowadays, although the accusations are thoroughly discredited, the Templars are still equated with the Occult and sometimes with Satanism, sometimes even by practitioners of those arts themselves.

“The Knights Templar: 1. The Rise of the Knights Templar” by Vexen Crabtree (2004)

The Satanism-for-fun-and-games fad next appeared in England in the middle 18th Century in the form of Sir Francis Dashwood’s Order of the Medmenham Franciscans, popularly called The Hell-Fire Club. While eliminating the blood, gore, and baby-fat candles of the previous century’s masses, Sir Francis managed to conduct rituals replete with good dirty fun, and certainly provided a colorful and harmless form of psychodrama for many of the leading lights of the period. An interesting sideline of Sir Francis, which lends a clue to the climate of the Hell-Fire Club, was a group called the Dilettanti Club, of which he was the founder.

“The Satanic Bible” by Anton LaVey (1969)

The Hell-Fire Clubs conjure up images of aristocratic rakes outraging respectability at every turn, cutting a swath through the village maidens and celebrating Black Masses. While all this is true, it is not the whole story. The author of this volume has assembled an account of the Clubs and of their antecedents and descendants. At the centre of the book is the principal brotherhood, known by the Hell-Fire name – Sir Francis Dashwood’s notorious Monks of Medmenham, with their strange rituals and initiation rites, library of erotica and nun companions recruited from the brothels of London. From this maverick group flow such notable literary libertines as Horace Walpole and Lord Byron. Pre-dating Medmenham are the figures of Rabelais and John Dee, both expounding philosophies of “do what you will” or “anything goes”. Geoffrey Ashe traces the influence of libertarian philosophies on the world of the Enlightenment, showing how they met the need for a secular morality at a time when Christianity faced the onslaught of rationalism and empiricism. He follows the libertarian tradition through de Sade and into the 20th century, with discussions of Aleister Crowley, Charles Manson and Timothy Leary, delving below the scandals to reveal the social and political impact of “doing your own thing” which has roots far deeper than the post-war permissive society.

Amazon Review of The Hell-fire Clubs: A History of Anti-morality by Geoffrey Ashe

An informal network of Hellfire Clubs thrived in Britain during the eighteenth century, dedicated to debauchery and blasphemy. With members drawn from the cream of the political, artistic and literary establishments, they became sufficiently scandalous to inspire a number of Acts of Parliament aimed at their suppression. Historians have been inclined to dismiss the Hellfire Clubs as nothing more than riotous drinking societies, but the significance of many of the nation’s most powerful and brilliant men dedicating themselves to Satan is difficult to ignore. That they did so with laughter on their lips, and a drink in their hands, does not diminish the gesture so much as place them more firmly in the Satanic tradition.

The inspiration for the Hellfire Clubs [also] drew heavily from profane literature – such as Gargantua, an unusual work combining folklore, satire, coarse humour and light-hearted philosophy written in the sixteenth century by a renegade monk named Francois Rabelais. One section of the book concerned a monk who […] has an abbey built that he names Thelema [which is] dedicated to the pleasures of the flesh. Only the brightest, most beautiful and best are permitted within its walls, and its motto is ‘Fait Ce Que Vouldras’ (‘Do What You Will’).

“Lucifer Rising” by Gavin Baddeley (1999)3

Gavin Baddeley’s book opens with a long, fascinating and awe-inspiring chapter on history’s Satanic traditions, following such trends through the Enlightenment, the decadents, through art, aristocracy and nobility, before concentrating the rest of the book on modern rock and roll devilry. It is a highly recommended book!

The magical and occult elements of Satanism have parallels with previous groups and teachings. Frequent references and commentary are made on certain sources. None of those listed here were Satanists except possibly Crowley:

The Knights Templar (11th-14th Centuries; France, Portugal, Europe) have contributed some symbolism and methodology but not much in the way of teachings.

Chaos Magic has contributed magical theory and psychological techniques to magical practices.

Quantum Physics has contributed high-brow theory on such areas as how consciousness may be able to manipulate events.

The New Age (1900s+) has contributed some of the less respectable pop-magic aspects to Satanism such as Tarot, Divination, etc. Although Satanism was in part a reaction against the new age, some aspects of it have been generally adopted.

John Dee and Kelly (16th Century) created the Enochian system of speech, used for emoting (‘sonic tarot’) and pronunciation in any way the user sees fit. LaVey adopted the Enochian Keys for rituals and includes his translation of them in The Satanic Bible.

Aleister Crowley (1875-1947, England) was an infamous occultist and magician, and has lent a large portion of his techniques and general character to magical practice and psychology, as well as chunks of philosophy and teachings on magic and life in general.

The Kabbalah, as the mother-text of nearly all the occult arts, has indirectly influenced Satanism, lending all kinds of esoteric thoughts, geometry, procedures, general ideas and some specifics to all occult practices.


Friedrich Nietzsche, 1844 Oct 15 – 1900 Aug 25, was a German philosopher who challenged the foundations of morality and promoted life affirmation and individualism. He was one of the first existentialist philosophers. Some of Nietzsche’s philosophies have surfaced as those upheld by Satanists.

Life: 1875 - 1947. England, United Kingdom.

Infamous occultist and hedonist and influential on modern Satanism. Some hate him and think him a contentless, drug-addled, meaningless diabolicist with little depth except obscurantism. Others consider him an eye-opening Satanic mystic who changed the course of history. His general attitude is one found frequently amongst Satanists and his experimental, extreme, party-animal life is either stupidly self-destructive or a model of candle-burning perfection, depending on what type of Satanist you ask.

Some Satanists are quite well-read in Crowley and his groups. His magical theories, techniques and style have definitely influenced the way many Satanists think about ritual and magic.

As far as Satanism is concerned, the closest outward signs of this were the neo-Pagan rites conducted by MacGregor Mathers’ Hermetic Order of the Golden Dawn, and Aleister Crowley’s later Order of the Silver Star (A∴A∴ - Argenteum Astrum) and Order of Oriental Templars (O.T.O.), which paranoiacally denied any association with Satanism, despite Crowley’s self-imposed image of the beast of revelation. Aside from some rather charming poetry and a smattering of magical bric-a-brac, when not climbing mountains Crowley spent most of his time as a poseur par excellence and worked overtime to be wicked. Like his contemporary, Rev.(?) Montague Summers, Crowley obviously spent a large part of his life with his tongue jammed firmly into his cheek, but his followers, today, are somehow able to read esoteric meaning into his every word.

Book of Air 12 “The Satanic Bible” by Anton LaVey (1969)


Europe has had a history of powerful indulgent groups espousing Satanic philosophies; with the occasional rich group emerging from the underground to terrorize the traditionalist, stifling morals of their respective times, these groups have led progressive changes in society in the West. Satanists to this day employ shock tactics, public horror and outrage in order to blitzkrieg their progressive freethought messages behind the barriers of traditionalist mental prisons.

When such movements surfaced in the USA in the guise of the Church of Satan, it was a little more commercialist than others. Previous European groups have also been successful businesses, the Knights Templar and the resultant Masons, etc, being profound examples of the occasional success of left hand path commerce. The modern-day Church of Satan is a little more subdued, as society has moved in a more acceptable, accepting direction since the Hellfire Clubs. As science rules in the West, and occultism is public, there is no place for secretive initiatory Knights Templar or gnostic movements; the Church of Satan is a stable and quiet beacon rather than a reactionary explosion of decadence.

It is the first permanent non-European (but still Western) Satanic-ethos group to openly publish its pro-self doctrines, reflecting the general trends of society towards honesty and dissatisfaction with anti-science and anti-truth white light religions.

Popular press and popular opinion are the worst sources of information. This holds especially true in the case of Satanism, given that the exterior of Satanism projects imagery that is almost intentionally confusing to anyone uninitiated. From time to time public paranoia arises, especially in the USA, claiming that some company, person or event is “Satanic”. The public are nearly always wrong and nearly always acting out of irrational fear, sheepish ignorance and gullibility. Public outcries are nearly always erroneous when they claim that a particular group, historical or present, is Satanic.

Similar to this is the relatively large Christian genre of writing that deals with everything unChristian. The likes of Dennis Wheatley, Eliphas Levi, etc, churn out countless books all based on the assumption that anything non-Christian is Satanic, and describe many religious practices as such. These books would be misleading if they had any plausibility, but thankfully all readers except their already-deluded Christian extremist audience cannot take them seriously. Nevertheless occasionally they contribute to public paranoia about Satanism.

In the press and in sociology, the phenomenon of public paranoia about the criminal activities of assumed Satanic groups is called Satanic Ritual Abuse (SRA) Panic. SRA claims are equal to claims of UFOs, abductions, faeries and monsters in both the character profile of the people involved and the lack of all evidence (despite extensive searching!) to actually uncover such groups.

More:

Historical Satanism – dpjs.co.uk/historical.html

Posted in Modern Satanism | Comments Off on Historical Satanism – dpjs.co.uk/historical.html

Alan Watt On Eugenics & Charles Galton Darwin’s "The Next …

Posted: August 12, 2016 at 2:44 pm

This is Alan Watt’s RBN broadcast from November 19, 2008:

mp3 – http://www.cuttingthroughthematrix.co… transcript – http://www.cuttingthroughthematrix.co…

Nov. 19, 2008 Alan Watt “Cutting Through The Matrix” LIVE on RBN:

Class Arrogance and Darwinian Agenda: “Eugenics, Bioethics, Genome Titles Charmin’, Creation of Master Breed by Charles Galton Darwin, To ‘The Next Million Years,’ Raise Glass, Toast, ‘We’ll Still Be Controllers,’ The Elite do Boast, Darwin’s Hereditary is All Rather Murky, Inbred with Wedgwood, Galton and Huxley, John Maynard Keynes and Others Since Then, Producing a Breed of Superior Men, Who’ll Rule O’er a World, a Purpose-Made Race, Once They’ve Killed Off the Old Man, Bloody Disgrace”

(BOOK: “The Next Million Years” by Charles Galton Darwin.)

Topics discussed: The Matrix – Eugenics, Intermarriage, Elite – Britain, Class Distinction, India, Caste System – Charles Darwin – Royal Society – Inbreeding for Traits – Galton – Forced Sterilization. C.G. Darwin – Huxley Family, Aldous, Julian, Thomas – Malthus, Poorhouses – Inbred “Clones”. Keynes, New Economic System – UNESCO – Germany, Nazis – “Master Breed” – Plato’s “Republic” – Rulers, Technocrats, Parallel Government – Survival Capabilities, Domestication. Population Control – Use of Hormones, Drugs – Synthetic Estrogen, Male Infertility – Passing on Information – Materialism. Human Genome Project – Willing Fools – Genetics, Enhancement. Zarathustrian Technique, Priests, Distortion of Perception – Downloaded Conclusions – Control of Mind – Personal Integrity.

Topics of show covered in following links:

“Charles Galton Darwin” Wikipedia (wikipedia.org) – http://en.wikipedia.org/wiki/Charles_…

“Order of the British Empire” Wikipedia (wikipedia.org) – http://en.wikipedia.org/wiki/Order_of…

“The Ethical, Legal and Social Implications (ELSI) Research Program” (genome.gov) – http://www.genome.gov/ELSI/

—–

See also Alan Watt: The Neo-Eugenics War On Humanity – http://www.youtube.com/watch?v=dQbvcx… transcript – http://www.cuttingthroughthematrix.co…

Alan Watt’s RBN broadcast from November 20, 2008:

mp3 – http://cuttingthrough.jenkness.com/CT… transcript – http://www.cuttingthroughthematrix.co… Video – http://www.youtube.com/watch?v=1DanzW…

—–

Alan Watt also discusses Charles Galton Darwin’s book “The Next Million Years” in his RBN broadcasts from February 3, 4 and 6, 2009:

Feb. 3, 2009 mp3 – http://cuttingthrough.jenkness.com/CT… transcript – http://www.cuttingthroughthematrix.co… Video – http://www.youtube.com/watch?v=aoW74Z…

Feb. 4, 2009 mp3 – http://cuttingthrough.jenkness.com/CT… transcript – http://www.cuttingthroughthematrix.co… Video – http://www.youtube.com/watch?v=NbxFyB…

Feb. 6, 2009 mp3 – http://www.cuttingthroughthematrix.us… transcript – http://www.cuttingthroughthematrix.co… Video – http://www.youtube.com/watch?v=z6ylKN…

—–

For more free downloadable audios by Alan Watt see – http://www.cuttingthroughthematrix.co…

If you gain anything at all from these audios, then please consider making a donation to Alan Watt – http://www.cuttingthroughthematrix.co…

Watt also has a few items for sale on his website:

Books – http://www.cuttingthroughthematrix.co… DVDs – http://www.cuttingthroughthematrix.co… CDs – http://www.cuttingthroughthematrix.co…

More here:

Alan Watt On Eugenics & Charles Galton Darwin’s "The Next …

Posted in Neo-eugenics | Comments Off on Alan Watt On Eugenics & Charles Galton Darwin’s "The Next …

Wiley: Posthumanism – Pramod K. Nayar

Posted: July 29, 2016 at 3:10 am

This timely book examines the rise of posthumanism as both a material condition and a developing philosophical-ethical project in the age of cloning, gene engineering, organ transplants and implants.

Nayar first maps the political and philosophical critiques of traditional humanism, revealing its exclusionary and speciesist politics that position the human as a distinctive and dominant life form. He then contextualizes the posthumanist vision which, drawing upon biomedical, engineering and techno-scientific studies, concludes that human consciousness is shaped by its co-evolution with other life forms, and that our human form is inescapably influenced by tools and technology. Finally the book explores posthumanism’s roots in disability studies, animal studies and bioethics to underscore the constructed nature of normalcy in bodies, and the singularity of species and life itself.

As this book powerfully demonstrates, posthumanism marks a radical reassessment of the human as constituted by symbiosis, assimilation, difference and dependence upon and with other species. Mapping the terrain of these far-reaching debates, Posthumanism will be an invaluable companion to students of cultural studies and modern and contemporary literature.

Read more:

Wiley: Posthumanism – Pramod K. Nayar

Posted in Posthumanism | Comments Off on Wiley: Posthumanism – Pramod K. Nayar

Conscious Evolution: Awakening the Power of Our Social …

Posted: at 3:09 am

Conscious Evolution by Barbara Marx Hubbard is a book that will perhaps change, for the better, the way you look at evolution and the way our future as a civilization is headed. In this era of unrest and fears of global catastrophes, Hubbard explains that these may just be part of the storm before a big change: a conscious evolution.

Conscious Evolution is a really deep book that is an important text for humanity. It has been updated by the author and includes additional responses to the biggest challenges and opportunities that we are currently seeing at this point in the history of our world.

Hubbard takes us not just through the human potential movement, but into the social potential movement that is helping to evolve our world into social synergy, interconnectivity, and spirit based compassion for all of humanity.

This book is composed of four parts: The New Story of Creation, Conscious Evolution: A New Worldview, The Social Potential Movement, and The Great Awakening.

I really like the overall message that instead of working on our own selfish desires in our lives, we should be working more towards a better future for all of humanity. I definitely recommend it.

* Thank you to the publisher of Conscious Evolution, New World Library, for providing me with a copy of this book for review. All opinions expressed are my own.

Excerpt from:
Conscious Evolution: Awakening the Power of Our Social …

Posted in Conscious Evolution | Comments Off on Conscious Evolution: Awakening the Power of Our Social …