
History of artificial intelligence – Wikipedia, the free …

Posted: August 30, 2016 at 11:03 pm

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with “an ancient wish to forge the gods.”

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The Turing test was proposed by British mathematician Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: “I propose to consider the question, ‘Can machines think?'” The term ‘Artificial Intelligence’ was created at a conference held at Dartmouth College in 1956.[2] Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created artificial intelligence field with the Logic Theory Machine (1956), and the General Problem Solver in 1957.[3] In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000.[4] John McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.[5]

In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 80s the investors became disillusioned and withdrew funding again.

McCorduck (2004) writes “artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized,” expressed in humanity’s myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion’s Galatea.[7] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān’s Takwin, Paracelsus’ homunculus and Rabbi Judah Loew’s Golem.[8] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots), and speculation, such as Samuel Butler’s “Darwin among the Machines.” AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[11] Hero of Alexandria,[12] Al-Jazari and Wolfgang von Kempelen.[14] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion. Hermes Trismegistus wrote that “by discovering the true nature of the gods, man has been able to reproduce it.”[15][16]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical (or “formal”) reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), Muslim mathematician al-Khwārizmī (who developed algebra and gave his name to “algorithm”) and European scholastic philosophers such as William of Ockham and Duns Scotus.[17]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[18] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all the possible knowledge.[19] Llull’s work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[20]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[21] Hobbes famously wrote in Leviathan: “reason is nothing but reckoning”.[22] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that “there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate.”[23] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole’s The Laws of Thought and Frege’s Begriffsschrift. Building on Frege’s system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica in 1913. Inspired by Russell’s success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: “can all of mathematical reasoning be formalized?”[17] His question was answered by Gödel’s incompleteness proof, Turing’s machine and Church’s Lambda calculus.[17][24] Their answer was surprising in two ways.

First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[17][26]
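The idea of a finite control shuffling symbols on a tape can be sketched in a few lines. This is only an illustrative toy (the states, rules and task are not from any historical machine): a single-state machine that flips a string of 0s and 1s.

```python
# A minimal Turing-machine sketch: rules map (state, symbol) to
# (symbol to write, direction to move, next state).
def run(tape, rules, state="start"):
    tape = list(tape)
    pos = 0
    while state != "halt" and 0 <= pos < len(tape):
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# In state "start": flip the symbol under the head and move right.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run("0110", rules))  # 1001
```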

Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”.[27] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)

The first modern computers were the massive code breaking machines of the Second World War (such as Z3, ENIAC and Colossus). The latter two of these machines were based on the theoretical foundation laid by Alan Turing[28] and developed by John von Neumann.[29]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 30s, 40s and early 50s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener’s cybernetics described control and stability in electrical networks. Claude Shannon’s information theory described digital signals (i.e., all-or-nothing signals). Alan Turing’s theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[30]

Examples of work in this vein include robots such as W. Grey Walter’s turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[31]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[32] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[33] Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[34] He noted that “thinking” is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible and the paper answered all the most common objections to the proposition.[35] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[36] Arthur Samuel’s checkers program, developed in the mid-50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[37] Game AI would continue to be used as a measure of progress in AI throughout its history.

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[38]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the “Logic Theorist” (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica, and find new and more elegant proofs for some.[39] Simon said that they had “solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind.”[40] (This was an early statement of the philosophical position John Searle would later call “Strong AI”: that machines can contain minds just as human bodies do.)[41]

The Dartmouth Conference of 1956[42] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”.[43] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[44] At the conference Newell and Simon debuted the “Logic Theorist” and McCarthy persuaded the attendees to accept “Artificial Intelligence” as the name of the field.[45] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[46]

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply “astonishing”:[47] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such “intelligent” behavior by machines was possible at all.[48] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[49] Government agencies like ARPA poured money into the new field.[50]

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called “reasoning as search”.[51]

The principal difficulty was that, for many problems, the number of possible paths through the “maze” was simply astronomical (a situation known as a “combinatorial explosion”). Researchers would reduce the search space by using heuristics or “rules of thumb” that would eliminate those paths that were unlikely to lead to a solution.[52]
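The “reasoning as search” paradigm can be sketched as a depth-first search that backtracks at dead ends. The maze, state names and goal below are hypothetical illustrations, not taken from any historical program:

```python
# "Reasoning as search": step toward a goal, backtrack at dead ends.
def search(state, goal, successors, path=None):
    if path is None:
        path = [state]
    if state == goal:
        return path
    for nxt in successors(state):
        if nxt in path:          # avoid looping back
            continue
        result = search(nxt, goal, successors, path + [nxt])
        if result:               # success somewhere down this branch
            return result
    return None                  # dead end: backtrack

# A tiny "maze" as an adjacency map.
maze = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["G"], "G": []}
print(search("A", "G", lambda s: maze.get(s, [])))  # ['A', 'C', 'E', 'G']
```

A heuristic, in this picture, is simply a way of ordering or pruning `successors` so the astronomical majority of paths is never explored.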

Newell and Simon tried to capture a general version of this algorithm in a program called the “General Problem Solver”.[53] Other “searching” programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter’s Geometry Theorem Prover (1958) and SAINT, written by Minsky’s student James Slagle (1961).[54] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[55]

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow’s program STUDENT, which could solve high school algebra word problems.[56]

A semantic net represents concepts (e.g. “house”, “door”) as nodes and relations among concepts (e.g. “has-a”) as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[57] and the most successful (and controversial) version was Roger Schank’s Conceptual dependency theory.[58]
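A toy version of the idea, with made-up concepts and relations (not Quillian’s or Schank’s actual representations), might look like this:

```python
# Concepts as nodes, labeled relations as links between them.
net = {
    ("house", "has-a"): {"door", "roof"},
    ("door", "has-a"): {"handle"},
    ("house", "is-a"): {"building"},
}

def related(concept, relation):
    """Follow a labeled link out of a concept node."""
    return net.get((concept, relation), set())

print(related("house", "has-a"))  # the linked nodes: door, roof
```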

Joseph Weizenbaum’s ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[59]
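ELIZA’s trick of rephrasing the user’s words or falling back to a canned line can be sketched with a few pattern rules. These rules are illustrative inventions, not Weizenbaum’s original script:

```python
import re

# Each rule: a pattern to match, and a template that rephrases the match.
rules = [
    (re.compile(r"I am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"I feel (.*)", re.I), "How long have you felt {0}?"),
]

def respond(text):
    for pattern, template in rules:
        m = pattern.match(text)
        if m:
            return template.format(m.group(1))
    return "Please tell me more."      # canned fallback

print(respond("I am tired of waiting"))
# Why do you say you are tired of waiting?
```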

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a “blocks world,” which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[60]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented “constraint propagation”), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd’s SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[61]

The first generation of AI researchers made these predictions about their work: in 1958, H. A. Simon and Allen Newell predicted that “within ten years a digital computer will be the world’s chess champion” and that “within ten years a digital computer will discover and prove an important new mathematical theorem”.[62][63] In 1965, Simon predicted that “machines will be capable, within twenty years, of doing any work a man can do”.[64] In 1967, Marvin Minsky predicted that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved”, and in 1970 he told Life Magazine that “in from three to eight years we will have a machine with the general intelligence of an average human being”.[65]

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the “AI Group” founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[66] DARPA made similar grants to Newell and Simon’s program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[67] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[68] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[69]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should “fund people, not projects!” and allowed researchers to pursue whatever directions might interest them.[70] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[71] but this “hands off” approach would not last.

In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[72] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky’s devastating criticism of perceptrons.[73] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[74]

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, “toys”.[75] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[76]

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[84] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its “grandiose objectives” and led to the dismantling of AI research in that country.[85] (The report specifically mentioned the combinatorial explosion problem as a reason for AI’s failings.)[86] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[87] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. “Many researchers were caught up in a web of increasing exaggeration.”[88] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund “mission-oriented direct research, rather than basic undirected research”. Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[89]

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel’s incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[90] Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little “symbol processing” and a great deal of embodied, instinctive, unconscious “know how”.[91][92] John Searle’s Chinese Room argument, presented in 1980, attempted to show that a program could not be said to “understand” the symbols that it uses (a quality called “intentionality”). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as “thinking”.[93]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference “know how” or “intentionality” made to an actual computer program. Minsky said of Dreyfus and Searle “they misunderstand, and should be ignored.”[94] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers “dared not be seen having lunch with me.”[95] Joseph Weizenbaum, the author of ELIZA, felt his colleagues’ treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus’ positions, he “deliberately made it plain that theirs was not the way to treat a human being.”[96]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[97]

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that “the perceptron may eventually be able to learn, make decisions, and translate languages.” An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert’s 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt’s predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[73]
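A perceptron is a weighted threshold unit trained by nudging its weights toward each target. The sketch below is a minimal illustration (the AND task, learning rate and epoch count are arbitrary choices, not Rosenblatt’s experiments):

```python
# Train a single perceptron with the perceptron learning rule.
def train(samples, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out            # 0 if correct, else +/-1
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w, b = train(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

Minsky and Papert’s central objection was that a single such unit cannot represent functions like XOR, whose classes no line can separate.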

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[98] In 1963, J. Alan Robinson discovered a simple method to implement deduction on computers: the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[99] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and this soon led to a collaboration with French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[100] Prolog uses a subset of logic (Horn clauses, closely related to “rules” and “production rules”) that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum’s expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[101]
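A Horn clause can be read as a rule: if all the premises hold, conclude the head. What makes this tractable is shown by a tiny forward-chaining sketch (the rules and facts here are illustrative, not Prolog syntax or any historical knowledge base):

```python
# Each rule: (set of premises, conclusion). Apply rules until nothing new
# can be derived -- the Horn-clause restriction makes this terminate
# without the explosive case analysis full logic requires.
rules = [
    ({"parent", "male"}, "father"),
    ({"father"}, "ancestor"),
]
facts = {"parent", "male"}

changed = True
while changed:
    changed = False
    for premises, head in rules:
        if premises <= facts and head not in facts:
            facts.add(head)
            changed = True

print(sorted(facts))  # ['ancestor', 'father', 'male', 'parent']
```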

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[102] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems, not machines that think as people do.[103]

Among the critics of McCarthy’s approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like “story understanding” and “object recognition” that required a machine to think like a person. In order to use ordinary concepts like “chair” or “restaurant” they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that “using precise language to describe essentially imprecise concepts doesn’t make them any more precise.”[104] Schank described their “anti-logic” approaches as “scruffy”, as opposed to the “neat” paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[105]

In 1975, in a seminal paper, Minsky noted that many of his fellow “scruffy” researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be “logical”, but these structured sets of assumptions are part of the context of everything we say and think. He called these structures “frames”. Schank used a version of frames he called “scripts” to successfully answer questions about short stories in English.[106] Many years later object-oriented programming would adopt the essential idea of “inheritance” from AI research on frames.
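The frame idea, and the inheritance that object-oriented programming later borrowed from it, can be sketched with default slots looked up along “is-a” links. The frames and slots below are the standard textbook illustration, not Minsky’s actual notation:

```python
# Frames as dictionaries of default assumptions; slots not found locally
# are inherited from the frame named by the "is-a" link.
frames = {
    "animal":  {"alive": True},
    "bird":    {"is-a": "animal", "flies": True, "eats": "worms"},
    "penguin": {"is-a": "bird", "flies": False},  # overrides the default
}

def get(frame, slot):
    """Look up a slot, falling back along the is-a chain."""
    while frame is not None:
        data = frames.get(frame, {})
        if slot in data:
            return data[slot]
        frame = data.get("is-a")
    return None

print(get("penguin", "flies"))  # False: local value beats the default
print(get("penguin", "alive"))  # True: inherited via bird -> animal
```

Note that the deductions are defeasible, not logical: a penguin is a bird, yet the “flies” assumption is simply overridden, which is exactly what made frames attractive to the “scruffies”.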

In the 1980s a form of AI program called “expert systems” was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[107]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[108]

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[109] Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it going to in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[110]

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. “AI researchers were beginning to suspect (reluctantly, for it violated the scientific canon of parsimony) that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,”[111] writes Pamela McCorduck. “[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay”.[112] Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s.[113]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[114]

Chess playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed at Carnegie Mellon University; Deep Thought’s development paved the way for Deep Blue.[115]

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[116] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[117]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or “MCC”) to fund large scale projects in AI and information technology.[118][119] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[120]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a “Hopfield net”) could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called “backpropagation” (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[119][121]

The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[119][122]

The business community’s fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

The term “AI winter” was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[123] Their fears were well founded: in the late 80s and early 90s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[124]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were “brittle” (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[125]

In the late 80s, the Strategic Computing Initiative cut funding to AI “deeply and brutally.” New leadership at DARPA had decided that AI was not “the next wave” and directed funds towards projects that seemed more likely to produce immediate results.[126]

By 1991, the impressive list of goals penned in 1981 for Japan’s Fifth Generation Project had not been met. Indeed, some of them, like “carry on a casual conversation,” had still not been met by 2010.[127] As with other AI projects, expectations had run much higher than what was actually possible.[127]

In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[128] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher-level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec’s paradox). They advocated building intelligence “from the bottom up.”[129]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy’s logic and Minsky’s frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr’s work would be cut short by leukemia in 1980.)[130]

In a 1990 paper, “Elephants Don’t Play Chess,”[131] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since “the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.”[132] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[133]

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI’s failure to fulfill the dream of human-level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of “artificial intelligence”.[134] AI was both more cautious and more successful than it had ever been.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[135] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[136]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[137] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[138] In February 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[139]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of modern computers.[140] In fact, Deep Blue’s computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[141] This dramatic increase is measured by Moore’s law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of “raw computer power” was slowly being overcome.
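That arithmetic can be checked with a short calculation (a sketch, assuming the two-year doubling period stated above): a 10-million-fold speedup corresponds to roughly 23 doublings, or about 46 years, which matches the 1951–1997 interval between the Ferranti Mark 1 and Deep Blue.

```python
import math

# Rough consistency check: does a 10-million-fold speedup fit
# Moore's-law doubling every two years between 1951 and 1997?
speedup = 10_000_000
doublings_needed = math.log2(speedup)   # ~23.25 doublings
years_needed = doublings_needed * 2     # ~46.5 years at one doubling per two years
years_elapsed = 1997 - 1951             # 46 years between the Mark 1 and Deep Blue

print(round(doublings_needed, 2), round(years_needed, 1), years_elapsed)
```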

A new paradigm called “intelligent agents” became widely accepted during the 90s.[142] Although earlier researchers had proposed modular “divide and conquer” approaches to AI,[143] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI.[144] When the economist’s definition of a rational agent was married to computer science’s definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are “intelligent agents”, as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as “the study of intelligent agents”. This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[145]
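The definition can be made concrete with a minimal sketch. The `ThermostatAgent` below is a hypothetical example invented for illustration, not drawn from the literature; it shows how even a trivial program counts as an “intelligent agent” under this definition, since it maps percepts to actions that further its goal.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An intelligent agent: perceives its environment and selects actions."""

    @abstractmethod
    def act(self, percept):
        """Map the current percept to an action."""

class ThermostatAgent(Agent):
    """A deliberately trivial agent; under the paradigm's definition it still counts."""

    def __init__(self, target):
        self.target = target

    def act(self, percept):
        # The percept is the current room temperature.
        return "heat_on" if percept < self.target else "heat_off"

agent = ThermostatAgent(target=20.0)
print(agent.act(17.5))  # a cold room: "heat_on"
print(agent.act(22.0))  # a warm room: "heat_off"
```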

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell’s SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[144][146]

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[147] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous “scientific” discipline. Russell & Norvig (2003) describe this as nothing less than a “revolution” and “the victory of the neats”.[148][149]

Judea Pearl’s highly influential 1988 book[150] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for “computational intelligence” paradigms like neural networks and evolutionary algorithms.[148]
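The flavor of the probabilistic turn Pearl championed can be illustrated with Bayes’ rule, the building block of Bayesian networks. The diagnostic numbers below are invented for the example, not taken from his book: a weak prior belief is updated by evidence into a posterior.

```python
# Bayes' rule on a made-up diagnostic example (all probabilities invented):
# how likely is a rare disease given a positive test?
p_disease = 0.01                # prior P(D)
p_pos_given_disease = 0.95      # test sensitivity P(+|D)
p_pos_given_healthy = 0.05      # false-positive rate P(+|~D)

# Total probability of a positive test, then the posterior P(D|+).
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # the posterior is still only about 0.161
```

Despite a 95%-sensitive test, the posterior stays modest because the disease is rare; reasoning of exactly this kind is what the probabilistic tools made mechanical.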

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems,[151] and these solutions proved useful throughout the technology industry,[152] in areas such as data mining, industrial robotics, logistics,[153] speech recognition,[154] banking software,[155] medical diagnosis[155] and Google’s search engine.[156]

The field of AI receives little or no credit for these successes. Many of AI’s greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[157] Nick Bostrom explains: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”[158]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research, as the New York Times reported in 2005: “Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”[159][160][161]

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[162]

Marvin Minsky asks “So the question is why didn’t we get HAL in 2001?”[163] Minsky believes that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blames the qualification problem.[164] For Ray Kurzweil, the issue is computer power and, using Moore’s Law, he predicts that machines with human-level intelligence will appear by 2029.[165] Jeff Hawkins argues that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[166] There are many other explanations and for each there is a corresponding research program underway.


First Amendment – Watchdog.org

Posted: August 25, 2016 at 4:20 pm

By M.D. Kittle / August 14, 2016 / First Amendment, Free Speech, News, Power Abuse, Wisconsin

There is a vital need for citizens to have an effective remedy against government officials who investigate them principally because of their partisan affiliation and political speech.

By M.D. Kittle / August 8, 2016 / Commentary, First Amendment, Free Speech, National, Wisconsin

That’s precisely what I expected from a party whose platform includes rewriting the First Amendment.

By M.D. Kittle / August 3, 2016 / First Amendment, Free Speech, News, Power Abuse, Wisconsin

The question that arises is do conservatives have civil rights before Judge Lynn Adelman?

By M.D. Kittle / August 2, 2016 / First Amendment, News, Power Abuse, Wisconsin

Now, years after defendants unlawfully seized and catalogued millions of our sensitive documents, we ask the court to vindicate our rights under federal law.

By M.D. Kittle / July 25, 2016 / First Amendment, National, News, Politics & Elections, Wisconsin

Moore has uttered some of the more inflammatory, ill-informed statements in Congress.

By M.D. Kittle / July 14, 2016 / First Amendment, Judiciary, News, Power Abuse, Wisconsin

“The process continues to be the punishment for people who were found wholly innocent of any wrongdoing,” she said.


Trump: Maybe ‘2nd Amendment People’ Can Stop Clinton’s …

Posted: August 10, 2016 at 9:08 pm

Republican presidential nominee Donald Trump raised eyebrows Tuesday when he suggested there is “nothing” that can be done to stop Hillary Clinton’s Supreme Court picks, except “maybe” the “Second Amendment people.”

“Hillary wants to abolish, essentially abolish the Second Amendment,” Trump said to the crowd of supporters gathered in the Trask Coliseum at the University of North Carolina Wilmington. “If she gets to pick her judges, nothing you can do, folks.

“Although the Second Amendment people, maybe there is. I don’t know.”

After the speech, Clinton’s campaign seized on the remarks.

“This is simple: what Trump is saying is dangerous,” read a statement from campaign manager Robby Mook. “A person seeking to be president of the United States should not suggest violence in any way.”

ABC News reached out to the Secret Service for response to Trump’s comment, and the agency said it was aware of the remarks.

The Trump campaign insisted the candidate’s words referred to the power of “Second Amendment people” to unify.

“It’s called the power of unification: 2nd Amendment people have amazing spirit and are tremendously unified, which gives them great political power,” read a statement, titled “Trump Campaign Statement Against Dishonest Media,” from senior communications adviser Jason Miller.

In a tweet Tuesday night, Trump tried to explain his remarks.

And in an interview with Fox News Tuesday night, Trump told the network: “This is a strong, powerful movement, the Second Amendment” and called the NRA “terrific people.”

“There can be no other interpretation,” he said of his earlier remarks. “I mean, give me a break.”

Trump’s running mate Mike Pence rose to the candidate’s defense and said Trump was not insinuating that there should be violence against Clinton.

“What Donald Trump is clearly saying is that people who cherish that right, who believe that firearms in the hands of law-abiding citizens makes our communities more safe, not less safe, should be involved in the political process and let their voice be heard,” Pence said today in an interview with NBC10, a local Philadelphia TV station.

Clinton’s running mate, Virginia Sen. Tim Kaine, told reporters today that Trump’s comments “revealed this complete temperamental misfit with the character that’s required to do the job and in a nation.”

“We gotta be pulling together and countenancing violence is not something any leader should do,” Kaine said.

Connecticut Democratic Sen. Chris Murphy, who led a 15-hour filibuster in June to force a vote on gun control measures, took to Twitter to voice his displeasure with Trump’s comments.

“This isn’t play,” wrote Murphy. “Unstable people with powerful guns and an unhinged hatred for Hillary are listening to you, @realDonaldTrump.”

And Rep. Eric Swalwell, D-Calif., wrote in a tweet that because he believed Trump “suggested someone kill Sec. Clinton,” he was calling for a Secret Service investigation.


Golden Rule – New World Encyclopedia

Posted: June 28, 2016 at 2:56 am

The Golden Rule is a cross-cultural ethical precept found in virtually all the religions of the world. Also known as the “Ethic of Reciprocity,” the Golden Rule can be rendered in either positive or negative formulations: most expressions take a negative form, as expressed by the Jewish sage Hillel: “What is hateful to you, do not to your fellow neighbor. This is the whole Law, all the rest is commentary” (Talmud, Shabbat 31a). In Christianity, however, the principle is expressed affirmatively by Jesus in the Sermon on the Mount: “Do unto others as you would have others do unto you” (Gospel of Matthew 7:12). This principle has for centuries been known in English as the Golden Rule in recognition of its high value and importance in both ethical living and reflection.

Arising as it does in nearly all cultures, the ethic of reciprocity is a principle that can readily be used in handling conflicts and promoting greater harmony and unity. Given the modern global trend of political, social, and economic integration and globalization, the Golden Rule of ethics may become even more relevant in the years ahead to foster inter-cultural and interreligious understanding.

Philosophers disagree about the nature of the Golden Rule: some have classified it as a form of deontological ethics (from the Greek deon, meaning “obligation”) whereby decisions are made primarily by considering one’s duties and the rights of others. Deontology posits the existence of a priori moral obligations, suggesting that people ought to live by a set of permanently defined principles that do not change merely as a result of a change in circumstances. However, other philosophers have argued that most religious understandings of the Golden Rule imply its use as a virtue toward greater mutual respect for one’s neighbor rather than as a deontological formulation. They argue that the Golden Rule depends on everyone’s ability to accept and respect differences because even religious teachings vary. Thus, many philosophers, such as Karl Popper, have suggested that the Golden Rule can best be understood in terms of what it is not (through the via negativa):

First, they note that the Golden Rule should not be confused with revenge, an eye for an eye, tit for tat, retributive justice or the law of retaliation. A key element of the ethic of reciprocity is that a person attempting to live by this rule treats all people, not just members of his or her in-group, with due consideration. The Golden Rule should also not be confused with another major ethical principle, often known as the Wiccan Rede, or liberty principle, which is an ethical prohibition against aggression. This rule is also an ethical rule of “license” or “right”; that is, people can do anything they like as long as it does not harm others. This rule does not compel one to help others in need. On the other hand, “the golden rule is a good standard which is further improved by doing unto others, wherever possible, as they want to be done by.”[1]

Lastly, the Golden Rule of ethics should not be confused with a “rule” in the semantic or logical sense. A logical loophole in the positive form of Golden “Rule” is that it would require a masochist to harm others, even without their consent, if that is what the masochist would wish for themselves. This loophole can be addressed by invoking a supplementary rule, which is sometimes called the Silver Rule. This states, “treat others in the way that they wish to be treated.” However, the Silver Rule may create another logical loophole. In a situation where an individual’s background or belief may offend the sentiment of the majority (such as homosexuality or blasphemy), the silver rule may imply ethical majority rule if the Golden Rule is enforced as if it were a law.

Under the ethic of reciprocity, a person of atheist persuasion may have a (legal) right to insult religion under the right of freedom of expression but, as a personal choice, may refrain from doing so in public out of respect for the sensitivity of the other. Conversely, a person of religious persuasion may refrain from taking action against such a public display out of respect for the other’s right of freedom of speech. By contrast, a lack of mutual respect might mean that each side deliberately violates the golden rule as a provocation (to assert one’s right) or as intimidation (to prevent the other from giving offense).

This understanding is crucial because it shows how to apply the golden rule. In 1963, John F. Kennedy ordered Alabama National Guardsmen to help admit two clearly qualified “Negro” students to the University of Alabama. In his speech that evening Kennedy appealed to every American:

Stop and examine his conscience about this and other related incidents throughout America…If an American, because his skin is dark, cannot eat lunch in a restaurant open to the public, if he cannot send his children to the best public school available, if he cannot vote for the public officials who will represent him, …. then who among us would be content to have the color of his skin changed and stand in his place? …. The heart of the question is …. whether we are going to treat our fellow Americans as we want to be treated.[2]

It could be argued that the ethics of reciprocity may replace all other moral principles, or at least that it is superior to them. Though this guiding rule may not explicitly tell one which actions or treatments are right or wrong, it can provide one with moral coherence; it is a consistency principle. One’s actions are to be consistent with mutual love and respect for one’s fellow humans.

A survey of the religious scriptures of the world reveals striking congruence among their respective articulations of the Golden Rule of ethics. Not only do the scriptures reveal that the Golden Rule is an ancient precept, but they also show that there is almost unanimous agreement among the religions that this principle ought to govern human affairs. Virtually all of the world’s religions offer formulations of the Golden Rule somewhere in their scriptures, and they speak in unison on this principle. Consequently, the Golden Rule has been one of the key operating ideas that has governed human ethics and interaction over thousands of years. Specific examples and formulations of the Golden Rule from the religious scriptures of the world are found below:

In Buddhism, the first of the Five Precepts (Panca-sila) of Buddhism is to abstain from destruction of life. The justification of the precept is given in chapter ten of the Dhammapada, which states:

Everyone fears punishment; everyone fears death, just as you do. Therefore do not kill or cause to kill. Everyone fears punishment; everyone loves life, as you do. Therefore do not kill or cause to kill.

According to the second of Four Noble Truths of Buddhism, egoism (desire, craving or attachment) is rooted in ignorance and is considered as the cause of all suffering. Consequently, kindness, compassion and equanimity are regarded as the untainted aspect of human nature.

Even though the Golden Rule is a widely accepted religious ethic, Martin Forward writes that the Golden Rule is itself not beyond criticism. His critique of the Golden Rule is worth repeating in full. He writes:

Two serious criticisms can be leveled against [the Golden Rule]. First of all, although the Golden Rule makes sense as an aspiration, it is much more problematic when it is used as a foundation for practical living or philosophical reflection. For example: should we unfailingly pardon murderers on the grounds that, if we stood in their shoes, we should ourselves wish to be pardoned? Many goodly and godly people would have problems with such a proposal, even though it is a logical application of the Golden Rule. At the very least, then, it would be helpful to specify what sort of a rule the Golden Rule actually is, rather than assuming that it is an unqualified asset to ethical living in a pluralistic world. Furthermore, it is not usually seen as the heart of religion by faithful people, but simply as the obvious starting point for a religious and humane vision of life. Take the famous story in Judaism recorded in the Talmud: Shabbat 31:

Forward’s argument continues:

Even assuming that the Golden Rule could be developed into a more nuanced pattern of behaving well in today’s world, there would still be issues for religious people to deal with. For whilst moral behavior is an important dimension of religion, it does not exhaust its meaning. There is a tendency for religious people in the West to play down or even despise doctrine, but this is surely a passing fancy. It is important for religious people in every culture to inquire after the nature of transcendence: its attitude towards humans and the created order; and the demands that it makes. People cannot sensibly describe what is demanded of them as important, without describing the source that wills it and enables it to be lived out. Besides, the world would be a safer place if people challenged paranoid and wicked visions of God (or however ultimate reality is defined) with truer and more generous ones, rather than if they abandoned the naming and defining of God to fearful and sociopath persons (From the Inter-religious Dialogue article in The Encyclopedia of General Knowledge).

In other words, Forward warns religious adherents not to be satisfied with merely the Golden Rule of ethics that can be interpreted and used as a form of religious and ethical relativism, but to ponder the deeper religious impulses that lead to the conviction of the Golden Rule in the first place, such as the idea of love in Christianity.

Due to its widespread acceptance in the world’s cultures, it has been suggested that the Golden Rule may be related to innate aspects of human nature. In fact, the principle of reciprocity has been shown in game-theoretic tournaments to be a highly effective means of resolving conflict (as in the iterated Prisoner’s Dilemma).[3] As it has touchstones in virtually all cultures, the ethic of reciprocity provides a universally comprehensible tool for handling conflictual situations. However, the logical and ethical objections presented above make the viability of this principle as a Kantian categorical imperative doubtful. In a world where sociopathy and religious zealotry exist, it is not always feasible to base one’s actions upon the perceived desires of others. Further, the Golden Rule has in modernity lost some of its persuasive power, having been diluted into a bland, secular precept through cloying e-mail forwards and newspaper cartoons. As Forward argues, perhaps the Golden Rule must be approached in its original religious context, as this context provides an ethical and metaphysical grounding for a belief in the ultimate power of human goodness.
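The game-theoretic point can be sketched with the reciprocal “tit-for-tat” strategy from the iterated Prisoner’s Dilemma: cooperate first, then mirror the opponent’s previous move. The payoffs below are the usual textbook values, and the function names are this sketch’s own invention.

```python
# Iterated Prisoner's Dilemma with standard textbook payoffs:
# (my move, your move) -> (my points, your points); C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Reciprocity: cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited only once: (9, 14)
```

Against a fellow reciprocator, tit-for-tat sustains mutual cooperation for the whole game; against an unconditional defector it is exploited only in the first round, which is the sense in which reciprocity holds up well in such tournaments.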

Regardless of the above objections, modern trends of political, social, and economic globalization necessitate the development of understandable, codifiable and universally-accepted ethical guidelines. For this purpose, we (as a species) could certainly do worse than to rely upon the age-old, heuristic principle spelled out in the Golden Rule.

All links retrieved December 19, 2013.

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats. The history of earlier contributions by wikipedians is accessible to researchers here:

Note: Some restrictions may apply to use of individual images which are separately licensed.


American Patriot Friends Network APFN

Posted: June 27, 2016 at 6:36 am

Then ‘MAKE SURE’ your vote is counted! http://www.votersunite.org/

Why did 65 US Senators break a solemn oath? Watch. Listen http://www.apfn.org/apfn/oath-of-office.htm

The Case for Impeachment C-Span2 Book TV 8/2/06 With Dave Lindorff and Barbara Oskansky Website: http://www.thiscantbehappening.net

HOW TO IMPEACH A PRESIDENT Includes 6 part videos: ‘The Case for Impeachment’ http://www.apfn.org/apfn/impeach_pres.htm

COINTELPRO, Provocateurs, Disinfo Agents.

Citizen’s Rule Book 44 pages Download here: http://www.apfn.org/pdf/citizen.pdf

Quality pocket-sized hardcopies of this booklet may be obtained from: Whitten Printers, (602) 258-6406, 1001 S 5th St., Phoenix, AZ 85004. Editorial work by Webster Adams, PAPER-HOUSE PUBLICATIONS ("Stronger than Steel"), 4th Revision.

“Each time a person stands up for an ideal, or acts to improve the lot of others. . .they send forth a ripple of hope, and crossing each other from a million different centers of energy and daring, those ripples build a current that can sweep down the mightiest walls of oppression and resistance.” – Robert F. Kennedy

Philosophy Of Liberty (Flash) http://www.apfn.org/flash/PhilosophyOfLiberty-english.swf

March 29, 2000

Once a government is committed to the principle of silencing the voice of opposition, it has only one way to go, and that is down the path of increasingly repressive measures, until it becomes a source of terror to all its citizens and creates a country where everyone lives in fear. –Harry S. Truman

APFN Contents Page:Click Here

Message Board

APFN Home Page

“The American Dream” Fire ’em all!

Join the Blue Ribbon Online Free Speech Campaign!

American Patriot Friends Network a/k/a American Patriot Fax Network was founded Feb. 21, 1993. We started with faxing daily reports from the Weaver-Harris trials. Then on Feb. 28 1993, The BATF launched Operation Showtime – “The Siege on the Branch Davidians”. From this point, it’s been the Death of Vince Foster, the Oklahoma Bombing, TWA-800, The Train Deaths, Bio-War, on and on. We are not anti-government, we are anti-corrupt-government. A Patriot is one who loves God, Family and Country…..

We believe Patriots should rule America…. Please join in the fight with us in seeking TRUTH, JUSTICE AND FREEDOM FOR ALL AMERICANS….

Join our e-mail list and build your own e-mail/Fax networking contacts.

Without Justice, there is JUST_US

EXCELLENT!! Download & WATCH THIS! (Flash Player) http://www.apfn.org/apfn/pentagon121.swf

The Attack on America 9/11 http://www.apfn.org/apfn/WTC.htm

9/11 Philip Marshall and His Two Children Silenced for Telling the Truth http://www.apfn.org/apfn/bamboozle.htm

OBAMA’S DRONES WAR ON WOMEN AND CHILDREN http://www.apfn.org/apfn/drones.htm

SMART METERS and Agenda 21 http://www.apfn.org/apfn/smartmeters.htm

TWO SUPREME COURT DECISIONS THE ANTI-GUNNERS DON’T WANT YOU TO SEE http://www.apfn.org/apfn/Gun-law.htm

APFN Pogo Radio Your Way http://www.apfn.net/pogo.htm

APFN iPod Download Page http://www.apfn.org/iPod/index.htm

America Media Columnists (500) Listed By Names

“I believe in the United States of America as a Government of the people by the people, for the people, whose just powers are derived from the consent of the governed; a democracy in a Republic; a sovereign Nation of many sovereign States; a perfect Union, one and inseparable; established upon those principles of freedom, equality, justice, and humanity for which American patriots sacrificed their lives and fortunes.

I therefore believe it is my duty to my Country to love it; to support its Constitution; to obey its laws; to respect its flag, and to defend it against all enemies.”

http://www.icss.com/usflag/american.creed.html

Freedom is ANYTHING BUT FREE!

“…. a network of net-worker’s….”

Dedication:

I was born an American. I live as an American; I shall die an American; and I intend to perform the duties incumbent upon me in that character to the end of my career. I mean to do this with absolute disregard to personal consequences. What are the personal consequences?

What is the individual man with all the good or evil that may betide him, in comparison with the good and evil which may befall a great country, and in the midst of great transactions which concern that country’s fate? Let the consequences be what they will, I am careless. No man can suffer too much, and no man can fall too soon, if he suffer or if he fall, in the defense of the liberties and Constitution of his country.

…Daniel Webster

APFN IS NOT A BUSINESS. APFN IS SUPPORTED BY "FREE WILL" GIFT/DONATIONS. Without Justice, there is JUST_US! http://www.apfn.org

If you would like to donate a contribution to APFN: Mail to: 7558 West Thunderbird Rd. Ste. 1-#115 Peoria, Arizona 85381

Message Board

APFN Sitemap

APFN Home Page

APFN Contents Page


E-Mail apfn@apfn.org

Visit link:

American Patriot Friends Network APFN

Posted in Government Oppression | Comments Off on American Patriot Friends Network APFN

William Wilberforce: biography and bibliography

Posted: June 21, 2016 at 6:34 am

Biography

William Wilberforce is perhaps the best known of the abolitionists. He came from a prosperous merchant family of Kingston-upon-Hull, a North Sea port which saw little in the way of slave trading. (His birthplace is now preserved as the Wilberforce House Museum.) At twenty-one, the youngest age at which one could be so elected, he was returned to Parliament for his native town. Four years later he was again returned to Parliament, this time for the county seat of Yorkshire, which was large and populous, and which therefore required an expensive election contest. The advantage was that the election, being genuinely democratic, conferred a greater legitimacy on the two Members which that county returned to Parliament.

Wilberforce's early years in Parliament were not untypical for a young back-bencher. He was noted for his eloquence and charm, attributes no doubt enhanced by his considerable wealth, but he did not involve himself at first with any great cause. A sudden conversion to evangelical Christianity in 1785 changed that, and from then onwards he approached politics from a position of strict Christian morality. In 1786 he carried through the House of Commons a bill for amending criminal law which failed to pass the Lords, a pattern which was to be repeated during his abolitionist career. The following year he founded the Proclamation Society, which had as its aim the suppression of vice and the reformation of public manners. Later in 1787 he became, at the suggestion of the Prime Minister, William Pitt the Younger, the parliamentary leader of the abolition movement, although he did not officially join the Abolition Society until 1794.

The story of Pitt's conversation with Wilberforce under an old tree near Croydon has passed into the mythology of the anti-slavery movement. The result was that Wilberforce returned to London having promised to look over the evidence which Thomas Clarkson had amassed against the trade. As he did so, he clearly became genuinely horrified and resolved to give the abolition movement his support. Working closely with Clarkson, he presented evidence to a committee of the Privy Council during 1788. This episode did not go as planned. Some of the key witnesses against the trade, apparently bribed or intimidated, changed their story and testified in favour. In the country at large abolitionist sentiment was growing rapidly. While the king's illness and the Regency Bill crisis no doubt supplanted the slave trade as the chief topic of political conversation in the winter of 1788-9, by the spring the king had recovered and abolition was once more at the top of the agenda. It was under these circumstances that Wilberforce prepared to present his Abolition Bill before the House of Commons. This speech, the most important of Wilberforce's life to that point, was praised in the newspapers as being one of the most eloquent ever to have been heard in the house. Indeed, The Star reported that 'the gallery of the House of Commons on Tuesday was crowded with Liverpool Merchants; who hung their heads in sorrow – for the African occupation of bolts and chains is no more'.

The newspaper was premature in sounding the death knell of the slave trade. After the 1789 speech parliamentary delaying tactics came into play. Further evidence was requested and heard over the summer months and then, on 23 June 1789, the matter was adjourned until the next session. Wilberforce left town, holidaying at Buxton with Hannah More, confident that the next session would see a resolution of the debate and abolition of the trade. It did not and by January 1790 the question was deemed to be taking up so much parliamentary time that consideration of the evidence was moved upstairs (as parliamentary jargon has it) to a Select Committee. Evidence in favour of the trade was heard until April, followed by evidence against. In June Pitt called an early general election. Wilberforce was safely returned as a Member for Yorkshire, but parliamentary business was disrupted. Despite being behind schedule, Wilberforce continued to work for an abolition which it appeared the country wanted. News of the slave rebellion in Dominica reached Britain in February 1791 and hardened attitudes against abolition, but Wilberforce pressed on. After almost two years of delay the debate finally resumed and Wilberforce again addressed the Commons on 18 April 1791.

When, on the following night, the House divided on the question of abolition fewer than half of its Members remained to vote. Because of this or not, the Abolition Bill fell with a majority of 75 against abolishing the slave trade. Wilberforce and the other members of the Abolition Committee returned to the task of drumming up support for abolition both from Members of Parliament and from ordinary people. More petitions were collected, further meetings held, extra pamphlets published, and a boycott of sugar was organised. The campaign was not helped by news of the revolutions in France and Haiti. Perhaps sensing that a hardening of attitudes was becoming increasingly likely Wilberforce again brought the question of abolition before the House and, almost a year after the previous defeat, on 2 April 1792, once more found himself addressing the House of Commons. Every account we have of this speech shows that it was an intense and lengthy emotional harangue. Public feeling was outraged and, on this occasion, so was the feeling of the House. But not quite enough. Henry Dundas suggested an amendment to the Abolition Bill: the introduction of the word ‘gradual’. The bill passed as amended, by 230 votes to 85, and gradual abolition became law, the final date for slave trading to remain legal being later fixed at 1796. But this gave the ‘West India Interest’ – the slave traders’ lobby – room to manoeuvre. Once again parliamentary delaying tactics came into play, further evidence was demanded, and it became clear that gradual abolition was to mean no abolition.

This event marked a turning point in the fortunes of the abolition campaign. Partly because of a hardening of attitudes caused by the outbreak of war with France, and partly because of determined resistance from the West-India Interest, there was a collapse in public enthusiasm for the cause. Some abolitionists withdrew from the campaign entirely. Wilberforce did not, but his speeches fell on ever deafer ears. Although he reintroduced the Abolition Bill almost every year in the 1790s, little progress was made, even though he remained optimistic for the long-term success of the cause. He directed some of his efforts into other arenas, largely evangelical or philanthropic, and was instrumental in setting up organisations such as The Bible Society and The Society for Bettering the Condition of the Poor. In 1797 he published a book, A Practical View of the Prevailing Religious System of Professed Christians, a work of popular theology with a strong evangelical hue which sold well on publication and throughout the nineteenth century. On 30 May 1797, after a short romance, he married Barbara Ann Spooner.

If the first two years of the new century were particularly bleak ones for the abolition movement, the situation was rapidly reversed in 1804. The association of abolitionism with Jacobinism dispersed as Napoleon's hostility to emancipation became known. Members of Parliament, especially the many new Irish members, increasingly tended toward abolition. The Abolition Society reformed with a mixture of experienced older members and new blood. Wilberforce assumed his old role of parliamentary leader, and introduced the Abolition Bill before parliament. The Bill fell in 1804 and 1805, but gave the abolitionists an opportunity to sound out support. In 1806, Wilberforce published an influential tract advocating abolition and, in June that year, resolutions supporting abolition were passed in parliament. A public campaign once again promoted the cause, and the new Whig government was in favour as well. In January 1807, the Abolition Bill was once again introduced, this time attracting very considerable support, and, on 23 February 1807, almost fifteen years after Dundas had effectively wrecked abolition with his gradualist amendment, Parliament voted overwhelmingly in favour of abolition of the slave trade. During the debate the then Solicitor-General, Sir Samuel Romilly, spoke against the trade. His speech concluded with a long and emotional tribute to Wilberforce in which he contrasted the peaceful happiness of Wilberforce in his bed with the tortured sleeplessness of the guilty Napoleon Bonaparte. In the words of Romilly's biographer:

The Abolition Act received the Royal Assent (became law) on 25 March 1807 but, although the trade in slaves had become illegal in British ships, slavery remained a reality in British colonies. Wilberforce himself was privately convinced that the institution of slavery should be entirely abolished, but understood that there was little political will for emancipation. Already recognised as an elder statesman in his 50s, Wilberforce received a steady throng of visitors and supplicants, and he became involved in many of the political questions of the day. He supported Catholic Emancipation and the Corn Laws. His health was poor, however, and in 1812 he resigned the large and arduous seat of Yorkshire for the pocket borough of Bramber. In the same year he started work on the Slave Registration Bill, which he saw as necessary to ensure compliance with the Abolition Act. If slaves were registered, he argued, it could be proved whether or not they had been recently transported from Africa. The Prime Minister, Spencer Perceval, supported the Bill, but was assassinated shortly after. Thereafter, Wilberforce's efforts met with increasing resistance from the government. In 1815, with the government again blocking progress, Wilberforce publicly declared that as they would not support him, he felt himself no longer bound by their line on emancipation. From this time on, Wilberforce campaigned openly for an end to the institution of slavery.

Wilberforce's health, never good, was deteriorating. Although now free to speak his mind on emancipation, he was never able to campaign with the same vigour that he had done for abolition of the trade. However, he continued to attack slavery both at public meetings and in the House of Commons. In 1823, he published another pamphlet attacking slavery. This pamphlet was connected with the foundation of The Anti-Slavery Society, which led the campaign to emancipate all slaves in British colonies. Leadership of the parliamentary campaign, however, was passed from Wilberforce to Thomas Fowell Buxton. In 1825, Wilberforce resigned from the House of Commons. He enjoyed a quiet retirement at Mill Hill, just north of London, although he suffered some financial difficulties. His last public appearance was at a meeting of the Anti-Slavery Society in 1830, at which, at Thomas Clarkson's suggestion, he took the chair. In parliament, the Emancipation Bill gathered support and received its final Commons reading on 26 July 1833. Slavery would be abolished, but the planters would be heavily compensated. 'Thank God', said Wilberforce, 'that I have lived to witness a day in which England is willing to give twenty millions sterling for the Abolition of Slavery'. Three days later, on 29 July 1833, he died. He is buried in Westminster Abbey.

Brycchan Carey 2000-2002

View post:

William Wilberforce: biography and bibliography

Posted in Abolition Of Work | Comments Off on William Wilberforce: biography and bibliography

20 Outrageous Examples That Show How Political Correctness …

Posted: June 17, 2016 at 4:55 am

The thought police are watching you. Back in the 1990s, lots of jokes were made about political correctness, and almost everybody thought they were really funny. Unfortunately, very few people are laughing now, because political correctness has become a way of life in America. If you say the wrong thing you could lose your job, or you could rapidly end up in court. Every single day, the mainstream media bombards us with subtle messages that make it clear what is appropriate and what is inappropriate, and most Americans quietly fall in line with this unwritten speech code. But just because it is not written down somewhere does not mean that it isn't real. In fact, this speech code becomes more restrictive and more suffocating with each passing year. The goal of the thought Nazis is to control what people say to one another, because eventually that will shape what most people think and what most people believe.

If you don't think this is true, just try the following experiment some time. Go to a public place where a lot of people are gathered and yell out something horribly politically incorrect such as "I love Jesus" and watch people visibly cringe. The name of Jesus has become a curse word in our politically correct society, and we have been trained to have a negative reaction to it in public places. After that, yell out something politically correct such as "I support gay marriage" and watch what happens. You will probably get a bunch of smiles, and quite a few people may even approach you to express their appreciation for what you just said. Of course this is going to vary depending on what area of the country you live in, but hopefully you get the idea. Billions of dollars of media programming has changed the definitions of what people consider to be acceptable and what people consider to be not acceptable. Political correctness shapes the way that we all communicate with each other every single day, and it is only going to get worse in the years ahead. Sadly, most people simply have no idea what is happening to them.

The following are 20 outrageous examples that show how political correctness is taking over America

#1 According to a new Army manual, U.S. soldiers will now be instructed to avoid any criticism of pedophilia and to avoid criticizing anything related to Islam. The following is from a recent Judicial Watch article:

The draft leaked to the newspaper offers a list of taboo conversation topics that soldiers should avoid, including making derogatory comments about the Taliban, advocating womens rights, any criticism of pedophilia, directing any criticism towards Afghans, mentioning homosexuality and homosexual conduct or anything related to Islam.

#2 The Obama administration has banned all U.S. government agencies from producing any training materials that link Islam with terrorism. In fact, the FBI has gone back and purged references to Islam and terrorism from hundreds of old documents.

#3 Authorities are cracking down on public expressions of the Christian faith all over the nation, and yet atheists in New York City are allowed to put up an extremely offensive billboard in Times Square this holiday season that shows a picture of Jesus on the cross underneath a picture of Santa with the following tagline: "Keep the Merry! Dump the Myth!"

#4 According to the Equal Employment Opportunity Commission, it is illegal for employers to discriminate against criminals because it has a disproportionate impact on minorities.

#5 Down in California, Governor Jerry Brown has signed a bill that will allow large numbers of illegal immigrants to legally get California driver's licenses.

#6 Should an illegal immigrant be able to get a law license and practice law in the United States? That is exactly what the State Bar of California argued earlier this year:

An illegal immigrant applying for a law license in California should be allowed to receive it, the State Bar of California argues in a filing to the state Supreme Court.

Sergio Garcia, 35, of Chico, Calif., has met the rules for admission, including passing the bar exam and the moral character review, and his lack of legal status in the United States should not automatically disqualify him, the Committee of Bar Examiners said Monday.

#7 More than 75 percent of the babies born in Detroit are born to unmarried women, yet it is considered politically incorrect to suggest that there is anything wrong with that.

#8 The University of Minnesota Duluth (UMD) initiated an aggressive advertising campaign earlier this year that included online videos, billboards, and lectures that sought to raise awareness about white privilege.

#9 At one high school down in California, five students were sent home from school for wearing shirts that displayed the American flag on the Mexican holiday of Cinco de Mayo.

#10 Chris Matthews of MSNBC recently suggested that it is racist for conservatives to use the word "Chicago".

#11 A judge down in North Carolina has ruled that it is unconstitutional for North Carolina to offer license plates that say "Choose Life" on them.

#12 The number of gay characters on television is at an all-time record high. Meanwhile, there are barely any strongly Christian characters to be found anywhere on television or in the movies, and if they do happen to show up they are almost always portrayed in a very negative light.

#13 House Speaker John Boehner recently stripped key committee positions from four rebellious conservatives in the U.S. House of Representatives. It is believed that this purge happened in order to send a message that members of the party better fall in line and support Boehner in his negotiations with Barack Obama.

#14 There is already a huge push to have a woman elected president in 2016. It doesn't appear that it even matters which woman is elected. There just seems to be a feeling that it is time for a woman to be elected, even if she doesn't happen to be the best candidate.

#15 Volunteer chaplains for the Charlotte-Mecklenburg Police Department have been banned from using the name of Jesus on government property.

#16 Chaplains in the U.S. military are being forced to perform gay marriages, even if it goes against their personal religious beliefs. The few chaplains that have refused to follow orders know that it means the end of their careers.

#17 All over the country, the term "manhole" is being replaced with the terms "utility hole" or "maintenance hole".

#18 In San Francisco, authorities have installed small plastic privacy screens on library computers so that perverts can continue to exercise their right to watch pornography at the library without children being exposed to it.

#19 You will never guess what is going on at one college up in Washington state:

A Washington college said their non-discrimination policy prevents them from stopping a transgender man from exposing himself to young girls inside a women's locker room, according to a group of concerned parents.

#20 All over America, liberal commentators are now suggesting that football has become too violent and too dangerous and that it needs to be substantially toned down. In fact, one liberal columnist for the Boston Globe is even proposing that football should be banned for anyone under the age of 14.

The rest is here:

20 Outrageous Examples That Show How Political Correctness …

Posted in Political Correctness | Comments Off on 20 Outrageous Examples That Show How Political Correctness …

The origin and nature of political correctness (26/11/2015)

Posted: June 16, 2016 at 5:47 pm

What Is Political Correctness? Political Correctness (PC) is the communal tyranny that erupted in the 1980s. It was a spontaneous declaration that particular ideas, expressions and behaviour, which were then legal, should be forbidden by law, and people who transgressed should be punished. (see Newspeak) It started with a few voices but grew in popularity until it became unwritten and written law within the community, with those who were publicly declared as being not politically correct becoming the object of persecution by the mob, if not prosecution by the state.

The Odious Nature Of Political Correctness To attempt to point out the odious nature of Political Correctness is to restate the crucial importance of plain speaking, freedom of choice and freedom of speech; these are the community's safeguards against the imposition of tyranny, indeed their absence is tyranny (see "On Liberty", Chapter II, by J.S. Mill). This is why any such restrictions on expression, such as those invoked by the laws of libel, slander and public decency, are grave matters to be decided by common law methodology, not by the dictates of the mob.

Clear Inspiration For Political Correctness The declared rationale of this tyranny is to prevent people being offended; to compel everyone to avoid using words or behaviour that may upset homosexuals, women, non-whites, the crippled, the stupid, the fat or the ugly. This reveals not only its absurdity but its inspiration. The set of values that are detested are those held by the previous generation (those who fought the Second World War), which is why the terms niggers, coons, dagos, wogs, poofs, spastics and sheilas have become heresy, for, in an act of infantile rebellion, their subjects have become revered by the new generation. Political Correctness is merely the resentment of spoilt children directed against their parents' values.

The Origins Of Political Correctness A community declines when the majority of its citizens become selfish, and under this influence it slowly dismantles all the restraints upon self-indulgence established by manners, customs, beliefs and law: tradition. (See the law of reverse civilization) As each subsequent generation of selfish citizens inherits control of the community, it takes its opportunity to abandon more of the irksome restraints that wisdom had installed. The proponents of this social demolition achieve their irrational purpose by publicly embracing absurdity through slogans while vilifying any who do not support their stance. The purpose of the slogan is to enshrine irrational fears, or fancies, as truth through the use of presumptuous words and public pronouncement:

For example, the slogan "Australia is Multicultural" is a claim that:

All of which is an attack upon truth, clear thinking and plain speaking.

Outright Assault Upon Tradition Naturally, as the restraints shrink, the rebellion grows ever more extreme in nature. When the author of Animal Farm wrote an article in 1946 about the pleasures of a rose garden, he was criticised for being bourgeois. George Orwell mentions this in his essay A Good Word For The Vicar Of Bray, published in the Tribune, 1946. The term bourgeois was then a popular slogan meaning having humdrum middle class ideas (The Oxford English Dictionary 3rd Edition, 1938), which is just a blatant attack upon tradition, the sanity of the community.

From Bourgeois To Racist Now, in the late 1990s, the result of being bourgeois (retaining traditional notions) is being labelled racist, sexist etc., and risking your job, your reputation, being jostled in the street, being subject to judicial penalty, and death threats. And it is this very extremity of reaction that has won media attention and the name Political Correctness, though the reaction will become even more unpleasant with the next generation.

Parental Values Always Attacked The inevitable scapegoat for people impatient of restraint must always be parents, because these are society’s agents for teaching private restraint. So the cherished notions of the parents are always subject to attack by their maturing offspring. This resentment of tradition was observed in his own civilization by Polybius (c. 200-118 BC), the Greek historian, who said:

Tyranny Grows Once a community embraces tyranny the penalties can only grow in severity. This gradual increase is easily seen by the example of Toastmasters. As the members of the club became more concerned about the delights of socializing and less concerned about the disciplines of public speaking, they became more intolerant of citizens who were earnest about learning the art of rhetoric. At first, members who did their duty by truthfully pointing out the shortcomings in another member's performance were merely labelled negative or discouraging; later they risked being socially ostracized. Now (since 1998) unpopularity can result in being permanently ejected from the club by a majority vote.

Australian Experience Of PC Tyranny In my country the tyranny erupted with the persecution of public figures such as Arthur Tunstall for uttering truths that had become unpopular, either directly in a speech, or indirectly by telling jokes. The maiden speech of the Federal Member of Parliament for Ipswich contained so many disliked truths that the rabble escalated the ferocity of their attack and extended them to her supporters, introducing terror into Australian politics. Anyone who watched the TV coverage (1997/8) of Pauline Hanson’s political campaign will have seen the nature of her opponents; a throng who looked and behaved more like barbarians than citizens of a civilized community. And any mob that chants “Burn the witch” (when she spoke outside an Ipswich hall after she had been refused entry) leaves no doubt as to their intent or character.

Widespread Throughout The Community Revealing the extent of the mob's support, their sentiments (suitably refined) were enthusiastically echoed by the media and the administration. And in an unprecedented act of cooperation, all the political parties conspired to eject Ms Hanson from the federal parliament in the election of October 3rd 1998. This was revealed by the how-to-vote cards of the parties contesting the seat of Blair, which all placed Ms Hanson last. This was a public admission by both the major parties that they would rather risk losing the election than allow this forthright woman to keep her seat in parliament.

International Experience Of PC Tyranny And it is not just in Australia: in every western democratic country popular demands have been made for restrictions on expression. Bowing to the clamour of the electorate, politicians in these countries have enacted absurd laws. The Australian community-wide declaration of irrational hatred displayed by the persecution of Pauline Hanson paralleled the Canadian experience of Paul Fromm, director of the Canadian Association for Free Expression Inc., and the examples of the national soccer coach of England and a prominent public servant in Washington, USA, confirm that the hysteria is everywhere.

The Inevitable Result Of Political Correctness By using the excuse of not upsetting anyone, the politically correct are demanding that people behave like the fool who would please everyone; that everyone must become such a fool! All must accept the notions of the Politically Correct as truth, or else! This is the same mentality that inspired the Inquisition and forced Galileo to recant; the same mentality that inspired the Nazis and obtained the Holocaust. Once expression gets placed in a straitjacket of official truth, then the madness that occurs in all totalitarian states is obtained. Life, in private and public, becomes a meaningless charade where delusion thrives and terror rules.

Examples Of Denying Freedom Of Speech Evidence of this effect is amply demonstrated by the Soviets, who embraced Political Correctness with the Communist Revolution. The lumbering, pompous, impoverished, humourless monster this Nation became is now History. And it should be remembered that in 1914 Tsarist Russia was considered by Edmund Cars, a French economist who then published a book about the subject, to be an economic giant set to overshadow Europe. The SBS television program “What Ever Happened To Russia”, which was broadcast at 8.30 pm on 25th August 1994, detailed the terrible effect the Bolshevik’s oppression had on their empire. And SBS further detailed the terrible crimes inflicted upon the Russians by their leader Stalin, in the series “Blood On The Snow” broadcast in March 1999. (Also see “Stalin’s Secret War” by Nikolai Tolstoy)

An Old Witness Helen, a member of Parramatta writers club in 1992, was a citizen of Kiev during the Red Terror, and described living with official truth and the constant threat of arrest. Knowing the content of the latest party newspaper was critical to avoiding internment, as public contradiction, either directly or indirectly, meant denouncement to the KGB. If you complained about being hungry when food shortages were not officially recognized, then you became an enemy of the state. If you failed to praise a Soviet hero, or praised an ex-hero, then again your fate was sealed. The need to be politically correct dominated all conversation and behaviour, as failure meant drastic penalty. Uncertainty and fear pervaded everything, nobody could be sure that an official request to visit Party headquarters meant imprisonment, torture, death, public reward or nothing important.

Living with such a terrible handicap naturally destroyed all spontaneity of thought or action, rendering the whole community mad. The awful effect this had upon Helen’s sanity was made clear when she escaped to Australia. Here she encountered the free press, which had an unpleasant impact upon her. One day she read The Australian newspaper which happened to carry two separate articles about Patrick White, one praising, the other denigrating, this well known writer. Poor Helen found herself turning from one to the other, which was she to repeat as correct? She nearly had a nervous breakdown.

Political Correctness Is Social Dementia Unless plain speaking is allowed, clear thinking is denied. There can be no good reason for denying freedom of expression, there is no case to rebut, only the empty slogans of people inspired by selfishness and unrestrained by morality. The proponents of this nonsense neither understand the implications of what they say, nor why they are saying it: they are insane; which must mean that any community that embraces Political Correctness has discarded sanity.

Social Decline Grows Worse With Each Generation Political Correctness is part of the social decline that generation by generation makes public behaviour less restrained and less rational.

See the original post here:

The origin and nature of political correctness (26/11/2015)

Posted in Political Correctness | Comments Off on The origin and nature of political correctness (26/11/2015)

Bob Black – Wikipedia, the free encyclopedia

Posted: June 12, 2016 at 8:19 pm

Bob Black

Born: Robert Charles Black, Jr., January 4, 1951 (age 65), Detroit, Michigan
Alma mater: University of Michigan
Era: 20th-century philosophy
Region: Western Philosophy
School: Post-left anarchy

Robert Charles “Bob” Black, Jr. (born January 4, 1951) is an American anarchist. He is the author of the books The Abolition of Work and Other Essays, Beneath the Underground, Friendly Fire, Anarchy After Leftism, Defacing the Currency, and numerous political essays.

Black graduated from the University of Michigan and Georgetown Law School. He later took M.A. degrees in jurisprudence and social policy from the University of California (Berkeley), criminal justice from the State University of New York (SUNY) at Albany, and an LL.M in criminal law from the SUNY Buffalo School of Law. During his college days (1969-1973) he became disillusioned with the New Left of the 1970s and undertook extensive readings in anarchism, utopian socialism, council communism, and other left tendencies critical of both Marxism-Leninism and social democracy. He found some of these sources at the Labadie Collection at the University of Michigan, a major collection of radical, labor, socialist, and anarchist materials which is now the repository for Black's papers and correspondence. He was soon drawn to Situationist thought, egoist communism, and the anti-authoritarian analyses of John Zerzan and the Detroit magazine Fifth Estate. He produced a series of ironic political posters signed "The Last International", first in Ann Arbor, Michigan, then in San Francisco where he moved in 1978. In the Bay Area he became involved with the publishing and cultural underground, writing reviews and critiques of what he called the "marginals milieu." Since 1988 he has lived in upstate New York.[1]

Black is best known for a 1985 essay, “The Abolition of Work,” which has been widely reprinted and translated into at least thirteen languages (most recently, Urdu). In it he argued that work is a fundamental source of domination, comparable to capitalism and the state, which should be transformed into voluntary “productive play.” Black acknowledged among his inspirations the French utopian socialist Charles Fourier, the British utopian socialist William Morris, the Russian anarcho-communist Peter Kropotkin, and the Situationists. The Abolition of Work and Other Essays, published by Loompanics in 1986, included, along with the title essay, some of his short Last International texts, and some essays and reviews reprinted from his column in “San Francisco’s Appeal to Reason,” a leftist and counter-cultural tabloid published from 1980 to 1984.

Two more essay collections were later published as books, Friendly Fire (Autonomedia, 1992) and Beneath the Underground (Feral House, 1994), the latter devoted to the do-it-yourself/fanzine subculture of the '80s and '90s which he called "the marginals milieu" and in which he had been heavily involved. Anarchy after Leftism (C.A.L. Press, 1996) is a more or less point-by-point rebuttal of Murray Bookchin's Social Anarchism or Lifestyle Anarchism: An Unbridgeable Chasm (A.K. Press, 1996), which had criticized as "lifestyle anarchism" various nontraditional tendencies in contemporary anarchism. Black's short book ("about an even shorter book," as he put it) was succeeded by Nightmares of Reason, a longer and more wide-ranging critique of Bookchin's anthropological and historical arguments, published as an e-book in 2011 at the online Anarchist Library; it especially targets Bookchin's espousal of "libertarian municipalism," which Black ridiculed as "mini-statism."

In 1996 Black cooperated with the Seattle police Narcotics Division against Seattle author Jim Hogshire, leading to a police raid on Hogshire’s home and the subsequent arrest of Hogshire and his wife.[2][3][4]

Since 2000, Black has focused on topics reflecting his education and reading in the sociology and the ethnography of law, resulting in writings often published in Anarchy: A Journal of Desire Armed. His recent interests have included the anarchist implications of dispute resolution institutions in stateless primitive societies (arguing that mediation, arbitration, etc., cannot feasibly be annexed to the U.S. criminal justice system, because they presuppose anarchism and a relative social equality not found in state/class societies). At the 2011 annual B.A.S.T.A.R.D. anarchist conference in Berkeley, California, Black presented a workshop where he argued that, in society as it is, crime can be an anarchist method of social control, especially for people systematically neglected by the legal system. An article based on this presentation appeared in Anarchy magazine and in his 2013 book, Defacing the Currency: Selected Writings, 1992-2012.

Black has expressed an interest, which grew out of his polemics with Bookchin, in the relation of democracy to anarchism. For Bookchin, democracy (the "direct democracy" of face-to-face assemblies of citizens) is anarchism. Some contemporary anarchists agree, including the academics Cindy Milstein, David Graeber, and Peter Staudenmeier. Black, however, has always rejected the idea that democracy (direct or representative) is anarchist. He made this argument at a presentation at the Long Haul Bookshop (in Berkeley) in 2008. In 2011, C.A.L. Press published as a pamphlet Debunking Democracy, elaborating on the speech and providing citation support. This too is reprinted in Defacing the Currency.

Some of his work from the early 1980s (anthologized in The Abolition of Work and Other Essays) highlights his critiques of the nuclear freeze movement ("Anti-Nuclear Terror"), the editors of Processed World ("Circle A Deceit: A Review of Processed World"), radical feminists ("Feminism as Fascism"), and right-wing libertarians ("The Libertarian As Conservative"). Some of these essays previously appeared in "San Francisco's Appeal to Reason" (1981-1984), a leftist and counter-cultural tabloid newspaper for which Black wrote a column.

“To demonize state authoritarianism while ignoring identical albeit contract-consecrated subservient arrangements in the large-scale corporations which control the world economy is fetishism at its worst … Your foreman or supervisor gives you more or-else orders in a week than the police do in a decade.”

The Abolition of Work and Other Essays (1986) draws upon some ideas of the Situationist International, the utopian socialists Charles Fourier and William Morris, anarchists such as Paul Goodman, and anthropologists such as Richard Borshay Lee and Marshall Sahlins. Black criticizes work for its compulsion, and, in industrial society, for taking the form of "jobs": the restriction of the worker to a single limited task, usually one which involves no creativity and often no skill. Black's alternative is the elimination of what William Morris called "useless toil" and the transformation of useful work into "productive play," with opportunities to participate in a variety of useful yet intrinsically enjoyable activities, as proposed by Charles Fourier. Beneath the Underground (1992) is a collection of texts relating to what Black calls the "marginals milieu": the do-it-yourself zine subculture which flourished in the '80s and early '90s. Friendly Fire (1992) is, like Black's first book, an eclectic collection touching on many topics including the Art Strike, Nietzsche, the first Gulf War and the Dial-a-Rumor telephone project he conducted with Zack Replica (1981-1983).

Defacing the Currency: Selected Writings, 1992-2012[6] was published by Little Black Cart Press in 2013. It includes a lengthy (113 pages), previously unpublished critique of Noam Chomsky, “Chomsky on the Nod.” A similar collection has been published, in Russian translation, by Hylaea Books in Moscow. Black’s most recent book, also from LBC Books, is Instead of Work, which collects “The Abolition of Work” and seven other previously published texts, with a lengthy new update, “Afterthoughts on the Abolition of Work.” The introduction is by science fiction writer Bruce Sterling.

Follow this link:

Bob Black – Wikipedia, the free encyclopedia

Posted in Abolition Of Work | Comments Off on Bob Black – Wikipedia, the free encyclopedia

FSTV Store | Free Speech TV

Posted: May 30, 2016 at 2:43 am

Free Speech TV (FSTV) is a tax-exempt 501(c)(3) nonprofit organization funded entirely through individual donations and foundation grants. We pride ourselves on being independent of billionaires, corporations, and governments: we receive no corporate underwriting or government support and thus are not subject to their influence. Our tax ID number is 51-0173482. To make a donation by mail, please send your check or money order to: Free Speech TV, P.O. Box 44099, Denver, CO 80201. If you have any questions, check out our donation help page or contact Heather by calling 303-542-4813 or emailing heather(at)freespeech.org.

To leave a comment, compliment, or any input, please contact the comment line at 888-378-8855 and leave a message.

$ 5 to The Jason Mckain “Giant Slayer” Fellowship

On June 14, 2015, Free Speech TV tragically lost our beloved development director and friend Jason O McKain. He was affectionately nicknamed “Giant Slayer,” because he was fearless in standing up to and calling out giant corporations. Please help FSTV raise $50,000 to fund the next 5 years of “Giant Slayers.”

$10 to the Jason McKain “Giant Slayer” Fellowship


$25 to the Jason McKain “Giant Slayer” Fellowship


Original post:
FSTV Store | Free Speech TV

Posted in Free Speech | Comments Off on FSTV Store | Free Speech TV