
Cyprus Space Exploration Organisation (CSEO)

Posted: November 21, 2016 at 11:11 am

Posted: 16 May 2015, Nicosia

Cyprus’ project “Arachnobeea” is the winner of the International Space Apps Challenge!

A resounding success for the Cypriot team and recognition by NASA!

NASA announced the winners of the International Space Apps Challenge today, and “Arachnobeea”, the runner-up team of the Space Apps Challenge Limassol 2015, was the global winner in the “Best Mission Concept” category!

Arachnobeea was selected by a NASA judging committee, from among over 950 projects submitted at 135 locations worldwide, as one of the six global winners!

The team did an incredible job designing an innovative quad-copter drone intended for use aboard space vehicles, and they excited everyone with their presentation at the local competition in Limassol in early April. The NASA experts recognised the uniqueness of the team’s design and named the Cypriot team the international winner for the “Best Mission Concept” of the 2015 International Space Apps Challenge.

Team “Arachnobeea” truly makes us proud with its success!

The announcement of the winners by NASA

During the official opening gala of the CSEO Space Week 2015, at the Russian Cultural Centre, Cosmonauts on-board the ISS sent greetings to the guests of the opening ceremony and to the island of Cyprus.

The moment the Space Week was declared open

From left: Mr Rogalev – Director of Russian Cultural Centre, Mr Thrassou – President of Cypro-Russian Friendship Association, Mr Danos – President of CSEO, Cosmonaut Aleksandr Volkov, Russian Ambassador Mr Osadchiy, Honorary Russian Consul Mr Prodromou

More on CSEO Space Week 2015:

Our aim is to promote space exploration with various events and activities.

In cooperation with the Municipality of Nicosia, and with the support of the Russian Cultural Centre, ROSCOSMOS, the Confucius Institute, the Cypro-Russian Friendship Association, the China Society of Astronautics, the University of Cyprus, and the Ministry of Communications and Works of the Republic of Cyprus, we are organising “CSEO Space Week 2015” in Nicosia, the capital of Cyprus, from 20 to 26 April 2015, promoting space exploration with various events and activities.

Part of the programme for the “CSEO Space Week 2015” includes:

Special opening highlight event – Monday 21st July, 19:15 – 21:00, City Plaza, Nicosia

We are connecting live with the USA, for a special live talk with famous author and science journalist Andrew Chaikin, organised just for Cyprus, all thanks to the kind effort and assistance of the American Embassy.

Andrew Chaikin is the author of the book “A Man on the Moon”, a detailed account of the Apollo missions to the Moon, which was adapted into the world-famous 12-part HBO miniseries “From the Earth to the Moon”.

Event Details

Our team MarsSense was shortlisted in the top 4 finalists for the “Best Student Paper” Award at SpaceOps 2014, organised by NASA’s JPL in Pasadena, California, last May.

They presented at the premier event on space operations, organised by NASA, to leading members of space agencies and the space community. Their research received very positive feedback from respected leaders of the space community and was shortlisted among the top 4 student research papers of the last 2 years at SpaceOps 2014!

Congratulations to MarsSense!!!

During our mission to the USA, the Cyprus Space Exploration Organisation (CSEO) promoted collaboration with many international organisations and national space agencies, paving the way to a number of exciting agreements.

Press Conference, at the Ministry of Communications and Works, Friday 20th June 2014:

CSEO’s President explained that the involvement of Cyprus in the space industry and full membership of ESA can bring significant economic benefits to the island.

CSEO extended a hand of cooperation to the Cypriot government.

The Minister of Communications and Works, Mr Marios Demetriades, said in his speech (in translation): “I would like to publicly congratulate the Cypriot delegation to the USA, and specifically the finalist team, as well as the Cyprus Space Exploration Organisation, for its support and participation in the entire effort of the mission”.

“The Ministry of Communications and Works, as well as I personally, will support every effort to ensure that this breakthrough has continuity and perspective. The geographical position of Cyprus and its status as an EU member state create unprecedented opportunities that we must not allow to be lost”.

The press conference was covered by all the main local TV channels and other media.

CSEO’s promotional video as first seen at the SpaceWeek Gala on 10th of April 2014.

Our aim is to promote space exploration with various events and activities, leading up to the NASA Space Apps challenge and the visit by Cosmonaut Aleksandr Volkov, who holds the record for the longest stay in space.

NASA designated CSEO’s Marios Isaakides to organise NASA Space Apps Nicosia 2014, for the weekend of 12-13 April 2014.

More on the Space Week:

Part of the programme for the “Space Week” includes:

Join in on the Fun!

Posted: January 15, 2014

“Launching Cyprus Into the Space Era – Event 2: Building the Future”, 20th January 2014, 19:00, ARTos Foundation, Nicosia


Political correctness – Wikipedia

Posted: at 11:08 am

The term political correctness (adjectivally: politically correct, commonly abbreviated to PC;[1] also abbreviated as P.C. and p.c.) in modern usage, is used to describe language, policies, or measures that are intended primarily not to offend or disadvantage any particular group of people in society. In the media, the term is generally used as a pejorative, implying that these policies are excessive.[2][3][4][5][6][7][8]

The term had only scattered usage before the early 1990s, usually as an ironic self-description, but entered more mainstream usage in the United States when it was the subject of a series of articles in The New York Times.[9][10][11][12][13][14] The phrase was widely used in the debate about Allan Bloom’s 1987 book The Closing of the American Mind,[4][6][15][16] and gained further currency in response to Roger Kimball’s Tenured Radicals (1990),[4][6][17][18] and conservative author Dinesh D’Souza’s 1991 book Illiberal Education, in which he condemned what he saw as liberal efforts to advance self-victimization, multiculturalism through language, affirmative action, and changes to the content of school and university curricula.[4][5][17][19]

Commentators on the left have said that conservatives pushed the term in order to divert attention from more substantive matters of discrimination, and as part of a broader culture war against liberalism.[17][20][21] They also argue that conservatives have their own forms of political correctness, which are generally ignored by conservative commentators.[22][23][24]

The term “politically correct” was used infrequently until the latter part of the 20th century. This earlier use did not communicate the social disapproval usually implied in more recent usage. In 1793, the term “politically correct” appeared in a U.S. Supreme Court judgment of a political lawsuit.[25] The term also had occasional use in other English-speaking countries.[26][27] William Safire states that the first recorded use of the term in the typical modern sense is by Toni Cade Bambara in the 1970 anthology The Black Woman.[28] The term probably entered use in the United Kingdom around 1975.[8]

In the early-to-mid 20th century, the phrase “politically correct” was associated with the dogmatic application of Stalinist doctrine, debated between Communist Party members and American Socialists. This usage referred to the Communist party line, which provided “correct” positions on many political matters. According to American educator Herbert Kohl, writing about debates in New York in the late 1940s and early 1950s,

The term “politically correct” was used disparagingly, to refer to someone whose loyalty to the CP line overrode compassion, and led to bad politics. It was used by Socialists against Communists, and was meant to separate out Socialists who believed in egalitarian moral ideas from dogmatic Communists who would advocate and defend party positions regardless of their moral substance.

In March 1968, the French philosopher Michel Foucault is quoted as saying: “a political thought can be politically correct (‘politiquement correcte’) only if it is scientifically painstaking”, referring to leftist intellectuals attempting to make Marxism scientifically rigorous rather than relying on orthodoxy.[29]

In the 1970s, the American New Left began using the term “politically correct”.[30] In the 1970 anthology The Black Woman, Toni Cade Bambara said that “a man cannot be politically correct and a [male] chauvinist, too.” Thereafter, the term was often used as self-critical satire. Debra L. Shultz said that “throughout the 1970s and 1980s, the New Left, feminists, and progressives… used their term ‘politically correct’ ironically, as a guard against their own orthodoxy in social change efforts.”[4][30][31] The term appeared in Bobby London’s comic book Merton of the Movement, and was followed by the term “ideologically sound” in the comic strips of Bart Dickon.[30][32] In her essay “Toward a Feminist Revolution” (1992), Ellen Willis said: “In the early eighties, when feminists used the term ‘political correctness’, it was used to refer sarcastically to the anti-pornography movement’s efforts to define a ‘feminist sexuality’.”[33]

Stuart Hall suggests one way in which the original use of the term may have developed into the modern one:

According to one version, political correctness actually began as an in-joke on the left: radical students on American campuses acting out an ironic replay of the Bad Old Days B.S. (Before the Sixties), when every revolutionary groupuscule had a party line about everything. They would address some glaring examples of sexist or racist behaviour by their fellow students in imitation of the tone of voice of a Red Guard or Cultural Revolution Commissar: “Not very ‘politically correct’, Comrade!”[34]

Critics, including Camille Paglia[35] and James Atlas,[36][37] have pointed to Allan Bloom’s 1987 book The Closing of the American Mind[15] as the likely beginning of the modern debate about what was soon named “political correctness” in American higher education.[4][6][16][38] Jeffrey J. Williams, professor of English literary and cultural studies at CMU, wrote that the “assault on…political correctness that simmered through the Reagan years, gained bestsellerdom with Bloom’s Closing of the American Mind.”[39] According to Z.F. Gamson, “Bloom’s Closing of the American Mind…attacked the faculty for ‘political correctness’.”[40] Tony Platt, professor of social work at CSU, goes further and says the “campaign against ‘political correctness'” was launched by the book in 1987.[41]

A word search of six “regionally representative Canadian metropolitan newspapers” found only 153 articles in which the terms “politically correct” or “political correctness” appeared between 1 January 1987 and 27 October 1990.[12]

An October 1990 New York Times article by Richard Bernstein is credited with popularizing the term.[11][13][14][42][43] At this time, the term was mainly being used within academia: “Across the country the term p.c., as it is commonly abbreviated, is being heard more and more in debates over what should be taught at the universities”.[9] Nexis citations in “arcnews/curnews” reveal only seventy citations of “political correctness” in articles for 1990; one year later, Nexis records 1,532 citations, with a steady increase to more than 7,000 citations by 1994.[42][44] In May 1991, The New York Times ran a follow-up article, according to which the term was increasingly being used in a wider public arena:

What has come to be called “political correctness,” a term that began to gain currency at the start of the academic year last fall, has spread in recent months and has become the focus of an angry national debate, mainly on campuses, but also in the larger arenas of American life.

The previously obscure far-left term became common currency in the lexicon of the conservative social and political challenges against progressive teaching methods and curriculum changes in the secondary schools and universities of the U.S.[5][45] Policies, behavior, and speech codes that the speaker or the writer regarded as being the imposition of a liberal orthodoxy, were described and criticized as “politically correct”.[17] In May 1991, at a commencement ceremony for a graduating class of the University of Michigan, then U.S. President George H.W. Bush used the term in his speech: “The notion of political correctness has ignited controversy across the land. And although the movement arises from the laudable desire to sweep away the debris of racism and sexism and hatred, it replaces old prejudice with new ones. It declares certain topics off-limits, certain expression off-limits, even certain gestures off-limits.”[46][47][48]

After 1991, its use as a pejorative phrase became widespread amongst conservatives in the US.[5] It became a key term encapsulating conservative concerns about the left in culture and political debate more broadly, as well as in academia. Two articles on the topic in late 1990, in Forbes and Newsweek, both used the term “thought police” in their headlines, exemplifying the tone of the new usage, but it was Dinesh D’Souza’s Illiberal Education: The Politics of Race and Sex on Campus (1991) which “captured the press’s imagination.”[5] D’Souza used similar critical terminology for a range of policies in academia around victimization, supporting multiculturalism through affirmative action, sanctions against anti-minority hate speech, and revising curricula (sometimes referred to as “canon busting”).[5][49] These trends were at least in part a response to multiculturalism and the rise of identity politics, with movements such as feminism, gay rights movements and ethnic minority movements. That response received funding from conservative foundations and think tanks such as the John M. Olin Foundation, which funded several books such as D’Souza’s.[4][17]

Herbert Kohl, in 1992, commented that a number of neoconservatives who promoted the use of the term “politically correct” in the early 1990s were former Communist Party members, and, as a result, familiar with the Marxist use of the phrase. He argued that in doing so, they intended “to insinuate that egalitarian democratic ideas are actually authoritarian, orthodox and Communist-influenced, when they oppose the right of people to be racist, sexist, and homophobic.”[3]

During the 1990s, conservative and right-wing politicians, think-tanks, and speakers adopted the phrase as a pejorative descriptor of their ideological enemies, especially in the context of the culture wars about language and the content of public-school curricula. Roger Kimball, in Tenured Radicals, endorsed Frederick Crews’s view that PC is best described as “Left Eclecticism”, a term defined by Kimball as “any of a wide variety of anti-establishment modes of thought from structuralism and poststructuralism, deconstruction, and Lacanian analysis to feminist, homosexual, black, and other patently political forms of criticism.”[18][39] Jan Narveson wrote that “that phrase was born to live between scare-quotes: it suggests that the operative considerations in the area so called are merely political, steamrolling the genuine reasons of principle for which we ought to be acting…”[2]

In the American Speech journal article “Cultural Sensitivity and Political Correctness: The Linguistic Problem of Naming” (1996), Edna Andrews said that the usage of culturally inclusive and gender-neutral language is based upon the concept that “language represents thought, and may even control thought”.[50] Andrews’ proposition is conceptually derived from the Sapir-Whorf hypothesis, which proposes that the grammatical categories of a language shape the ideas, thoughts, and actions of the speaker. Moreover, Andrews said that politically moderate conceptions of the language-thought relationship suffice to support the “reasonable deduction … [of] cultural change via linguistic change” reported in the Sex Roles journal article “Development and Validation of an Instrument to Measure Attitudes Toward Sexist/Nonsexist Language” (2000), by Janet B. Parks and Mary Ann Robinson.

Liberal commentators have argued that the conservatives and reactionaries who used the term did so in an effort to divert political discussion away from the substantive matters of resolving societal discrimination, such as racial, social class, gender, and legal inequality, against people whom the right wing does not consider part of the social mainstream.[4][20][51][52][53][54][55] Commenting in 2001, one such British journalist,[56][57] Polly Toynbee, said “the phrase is an empty, right-wing smear, designed only to elevate its user”, and, in 2010, “…the phrase ‘political correctness’ was born as a coded cover for all who still want to say Paki, spastic, or queer…”[56][57][58][59] Another British journalist, Will Hutton,[60][61][62][63] wrote in 2001:

Political correctness is one of the brilliant tools that the American Right developed in the mid-1980s, as part of its demolition of American liberalism…. What the sharpest thinkers on the American Right saw quickly was that by declaring war on the cultural manifestations of liberalism, by levelling the charge of “political correctness” against its exponents, they could discredit the whole political project.

Glenn Loury described the situation in 1994 as such:

To address the subject of “political correctness,” when power and authority within the academic community is being contested by parties on either side of that issue, is to invite scrutiny of one’s arguments by would-be “friends” and “enemies.” Combatants from the left and the right will try to assess whether a writer is “for them” or “against them.”

In the US, the term has been widely used in the intellectual media, but in Britain, usage has been confined mainly to the popular press.[65] Many such authors and popular-media figures, particularly on the right, have used the term to criticize what they see as bias in the media.[2][17] William McGowan argues that journalists get stories wrong or ignore stories worthy of coverage, because of what McGowan perceives to be their liberal ideologies and their fear of offending minority groups.[66] Robert Novak, in his essay “Political Correctness Has No Place in the Newsroom”, used the term to blame newspapers for adopting language use policies that he thinks tend to excessively avoid the appearance of bias. He argued that political correctness in language not only destroys meaning but also demeans the people who are meant to be protected.[67][68][69] Authors David Sloan and Emily Hoff claim that in the US, journalists shrug off concerns about political correctness in the newsroom, equating the political correctness criticisms with the old “liberal media bias” label.[70]

Jessica Pinta and Joy Yakubu caution against political incorrectness in the media and elsewhere, writing in the Journal of Educational and Social Research that “…linguistic constructs influence our way of thinking negatively, peaceful coexistence is threatened and social stability is jeopardized.” As an example of “the effect of politically incorrect use of language”, they cite some historical occurrences:

Conflicts were recorded in Northern Nigeria as a result of insensitive use of language. In Kaduna, for instance, violence broke out on 16th November 2002 following an article credited to one Daniel Isioma which was published in This Day Newspaper, where the writer carelessly made a remark about the Prophet Mohammed and the beauty queens of the Miss World Beauty Pageant that was to be hosted in the country that year (Terwase n.d). In this crisis, he reported that over 250 people were killed and churches destroyed. In the same vein, crisis erupted on 18th February 2006 in Borno because of a cartoon of the Prophet Mohammed in the Jyllands-Posten newspaper (Terwase n.d). Here over 50 people were killed and 30 churches burnt.

Much of the modern debate on the term was sparked by conservative critiques of liberal bias in academia and education,[4] and conservatives have used it as a major line of attack since.[5] University of Pennsylvania professor Alan Charles Kors and lawyer Harvey A. Silverglate connect speech codes in US universities to philosopher Herbert Marcuse. They claim that speech codes create a “climate of repression”, arguing that they are based on “Marcusean logic”. The speech codes “mandate a redefined notion of ‘freedom’, based on the belief that the imposition of a moral agenda on a community is justified”, a view which “requires less emphasis on individual rights and more on assuring ‘historically oppressed’ persons the means of achieving equal rights.” They claim:

Our colleges and universities do not offer the protection of fair rules, equal justice, and consistent standards to the generation that finds itself on our campuses. They encourage students to bring charges of harassment against those whose opinions or expressions “offend” them. At almost every college and university, students deemed members of “historically oppressed groups” (above all, women, blacks, gays, and Hispanics) are informed during orientation that their campuses are teeming with illegal or intolerable violations of their “right” not to be offended. Judging from these warnings, there is a racial or sexual bigot, to borrow the mocking phrase of McCarthy’s critics, “under every bed.”[72]

Kors and Silverglate later established the Foundation for Individual Rights in Education (FIRE), which campaigns against infringement of rights of due process, rights of religion and speech, in particular “speech codes”.[73] Similarly, a common conservative criticism of higher education in the United States is that the political views of the faculty are much more liberal than the general population, and that this situation contributes to an atmosphere of political correctness.[74]

Jessica Pinta and Joy Yakubu write that political correctness is useful in education, in the Journal of Educational and Social Research:

Political correctness is a useful area of consideration when using English language particularly in second language situations. This is because both social and cultural contexts of language are taken into consideration. Zabotkina (1989) says political correctness is not only an essential, but an interesting area of study in English as a Second Language (ESL) or English as Foreign Language (EFL) classrooms. This is because it presents language as used in carrying out different speech acts which provoke reactions as it can persuade, incite, complain, condemn, and disapprove. Language is used for communication and creating social linkages, as such must be used communicatively. Using language communicatively involves the ability to use language at the grammatical level, sociolinguistic level, discourse and strategic levels (Canale & Swain 1980). Understanding language use at these levels center around the fact that differences exist among people, who must communicate with one another, and the differences could be religious, cultural, social, racial, gender or even ideological. Therefore, using language to suit the appropriate culture and context is of great significance.

Groups who oppose certain generally accepted scientific views about evolution, second-hand tobacco smoke, AIDS, global warming, race, and other politically contentious scientific matters have said that PC liberal orthodoxy of academia is the reason why their perspectives of those matters have been rejected by the scientific community.[75] For example, in Lamarck’s Signature: How Retrogenes are Changing Darwin’s Natural Selection Paradigm (1999), Prof. Edward J. Steele said:

We now stand on the threshold of what could be an exciting new era of genetic research…. However, the ‘politically correct’ thought agendas of the neo-Darwinists of the 1990s are ideologically opposed to the idea of ‘Lamarckian Feedback’, just as the Church was opposed to the idea of evolution based on natural selection in the 1850s![76]

Zoologists Robert Pitman and Susan Chivers complained about popular and media negativity towards their discovery of two different types of killer whales, a “docile” type and a “wilder” type that preys on sperm whales by hunting in packs: “The forces of political correctness and media marketing seem bent on projecting an image of a more benign form (the Free Willy or Shamu model), and some people urge exclusive use of the name ‘orca’ for the species, instead of what is perceived as the more sinister label of ‘killer whale’.”[77]

Stephen Morris, an economist and game theorist, built a game model on the concept of political correctness, in which “a speaker (advisor) communicates with the objective of conveying information, but the listener (decision maker) is initially unsure if the speaker is biased. There were three main insights from that model. First, in any informative equilibrium, certain statements will lower the reputation of the speaker, independent of whether they turn out to be true. Second, if reputational concerns are sufficiently important, no information is conveyed in equilibrium. Third, while instrumental reputational concerns might arise for many reasons, a sufficient reason is that speakers wish to be listened to.”[78][79][80][81] The Economist writes that “Mr Morris’s model suggests that the incentive to be politically correct fades as society’s population of racists, to take his example, falls.”[79] He credits Glenn Loury with the basis of his work.[78]
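The reputational logic behind Morris’s first insight, and The Economist’s gloss on the third, can be illustrated with a toy Bayesian calculation. This is a minimal sketch under illustrative assumptions, not Morris’s actual repeated-game model: the all-or-nothing “biased speaker who always sends message 1”, the prior of 0.3, and the 50/50 distribution over states are all assumed here for simplicity.

```python
def posterior_biased(prior_biased, message, p_state_one=0.5):
    """Listener's posterior P(biased | message) via Bayes' rule.

    Assumed toy types: a biased speaker always sends message 1;
    an unbiased speaker truthfully reports the state (1 or 0).
    """
    # Likelihood of the message under each speaker type
    p_msg_biased = 1.0 if message == 1 else 0.0
    p_msg_unbiased = p_state_one if message == 1 else 1.0 - p_state_one
    numer = p_msg_biased * prior_biased
    denom = numer + p_msg_unbiased * (1.0 - prior_biased)
    return numer / denom

# First insight: sending message 1 raises the listener's suspicion of
# bias regardless of whether the state really was 1, so that statement
# lowers the speaker's reputation even when it turns out to be true.
print(posterior_biased(0.3, 1))  # ≈ 0.4615, up from the 0.3 prior
print(posterior_biased(0.3, 0))  # 0.0: only unbiased speakers say 0

# The Economist's point: as the share of biased speakers in the
# population shrinks, the reputational penalty for saying 1 fades.
for prior in (0.3, 0.1, 0.01):
    penalty = posterior_biased(prior, 1) - prior
    print(f"prior {prior}: reputational penalty {penalty:.4f}")
```

With these assumptions the penalty falls from about 0.16 at a prior of 0.3 to about 0.01 at a prior of 0.01, mirroring the claim that the incentive to self-censor weakens as the suspect population falls.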

“Political correctness” is a label typically used for left-wing terms and actions, but not for equivalent attempts to mold language and behavior on the right. However, the term “right-wing political correctness” is sometimes applied by commentators drawing parallels: in 1995, one author used the term “conservative correctness” arguing, in relation to higher education, that “critics of political correctness show a curious blindness when it comes to examples of conservative correctness. Most often, the case is entirely ignored or censorship of the Left is justified as a positive virtue. […] A balanced perspective was lost, and everyone missed the fact that people on all sides were sometimes censored.”[22][82][83][84]

In 2003, the Dixie Chicks, a U.S. country music group, criticized the then U.S. President George W. Bush for launching the war against Iraq.[85] They were criticized[86] and labeled “treasonous” by some U.S. right-wing commentators (including Ann Coulter and Bill O’Reilly).[23] Three years later, claiming that at the time “a virulent strain of right wing political correctness [had] all but shut down debate about the war in Iraq,” journalist Don Williams wrote that “[the ongoing] campaign against the Chicks represents political correctness run amok” and observed, “the ugliest form of political correctness occurs whenever there’s a war on.”[23]

In 2003, French fries and French toast were renamed “Freedom fries” and “Freedom toast”[87] in three U.S. House of Representatives cafeterias in response to France’s opposition to the proposed invasion of Iraq. This was described as “polluting the already confused concept of political correctness.”[88] In 2004, then Australian Labor leader Mark Latham described conservative calls for “civility” in politics as “the new political correctness.”[89]

In 2012, Paul Krugman wrote that “the big threat to our discourse is right-wing political correctness, which unlike the liberal version has lots of power and money behind it. And the goal is very much the kind of thing Orwell tried to convey with his notion of Newspeak: to make it impossible to talk, and possibly even think, about ideas that challenge the established order.”[24]

A 2015 Harris poll found that “Republicans are almost twice as likely (42 percent vs. 23 percent) as Democrats to say that there are any books that should be banned completely…. Republicans were also more likely to say that some video games, movies and television programs should be banned.”[90][91]

In 2015 and 2016, leading up to the 2016 United States presidential election, Republican candidate Donald Trump used political correctness as a common target in his rhetoric.[90][92][93][94] Eric Mink, in a column for the Huffington Post, disputed Trump’s concept of “political correctness”:

Political correctness is a controversial social force in a nation with a constitutional guarantee of freedom of expression, and it raises legitimate issues well worth discussing and debating.

But that’s not what Trump is doing. He’s not a rebel speaking unpopular truths to power. He’s not standing up for honest discussions of deeply contentious issues. He’s not out there defying rules handed down by elites to control what we say.

All Trump’s defying is common decency.[93]

Columnists Blatt and Young of The Federalist agree, with Blatt stating that “Trump is being rude, not politically incorrect” and that “PC is about preventing debate, not protecting rudeness”.[95][96]

In light of the sexual assault scandals and the criticism the victims faced from Trump supporters, Vox noted that, after railing so much against political correctness, Trump supporters simply practice a different kind of repression and shaming: “If the pre-political correctness era was really so open, why is it only now that these women are speaking out?”[94]

Some right-wing commentators in the West argue that “political correctness” and multiculturalism are part of a conspiracy with the ultimate goal of undermining Judeo-Christian values. This theory, which holds that political correctness originates from the critical theory of the Frankfurt School as part of a conspiracy that its proponents call “Cultural Marxism”, is generally known as the Frankfurt School conspiracy theory by academics.[97][98] The theory originated with Michael Minnicino’s 1992 essay “New Dark Age: Frankfurt School and ‘Political Correctness'”, published in a Lyndon LaRouche movement journal.[99] In 2001, conservative commentator Patrick Buchanan wrote in The Death of the West that “political correctness is cultural Marxism”, and that “its trademark is intolerance”.[100]

In the United States, left forces of “political correctness” have been blamed for censorship, with Time citing campaigns against violence on network television as contributing to a “mainstream culture [which] has become cautious, sanitized, scared of its own shadow” because of “the watchful eye of the p.c. police”, even though in John Wilson’s view protests and advertiser boycotts targeting TV shows are generally organized by right-wing religious groups campaigning against violence, sex, and depictions of homosexuality on television.[101]

In the United Kingdom, some newspapers reported that a nursery school had altered the nursery rhyme “Baa Baa Black Sheep” to read “Baa Baa Rainbow Sheep” and had banned the original.[102] But it was later reported that in fact the Parents and Children Together (PACT) nursery had the children “turn the song into an action rhyme…. They sing happy, sad, bouncing, hopping, pink, blue, black and white sheep etc.”[103] This story was widely circulated and later extended to suggest that other language bans applied to the terms “black coffee” and “blackboard”.[104] Private Eye magazine reported that similar stories had been published in the British press since The Sun first ran them in 1986.[105]

Political correctness is often satirized, for example in The PC Manifesto (1992) by Saul Jerushalmy and Rens Zbignieuw X,[106] and Politically Correct Bedtime Stories (1994) by James Finn Garner, which presents fairy tales re-written from an exaggerated politically correct perspective. In 1994, the comedy film PCU took a look at political correctness on a college campus.

Other examples include the television program Politically Incorrect, George Carlin’s “Euphemisms” routine, and The Politically Correct Scrapbook.[107] The popularity of the South Park cartoon program led to the creation of the term “South Park Republican” by Andrew Sullivan, and later the book South Park Conservatives by Brian C. Anderson.[108] Throughout its nineteenth season, South Park repeatedly poked fun at the principle of political correctness, embodied in the show’s new character, PC Principal.[109][110][111]

The Colbert Report’s host Stephen Colbert often talked, satirically, about the “PC Police”.[112][113]

Graham Good, an academic at the University of British Columbia, wrote that the term was widely used in debates on university education in Canada. Writing about a 1995 report on the Political Science department at his university, he concluded: “‘Political correctness’ has become a popular phrase because it catches a certain kind of self-righteous and judgmental tone in some and a pervasive anxiety in others who, fearing that they may do something wrong, adjust their facial expressions, and pause in their speech to make sure they are not doing or saying anything inappropriate. The climate this has created on campuses is at least as bad in Canada as in the United States.”[114]

In Hong Kong, as the 1997 handover drew nearer, greater control over the press was exercised by both owners and the Chinese state. This had a direct impact on news coverage of relatively sensitive political issues. The Chinese authorities exerted pressure on individual newspapers to take pro-Beijing stances on controversial issues.[115][116][117] Tung Chee-hwa’s policy advisers and senior bureaucrats increasingly linked their actions and remarks to “political correctness.” Zhaojia Liu and Siu-kai Lau, writing in The First Tung Chee-hwa Administration: The First Five Years of the Hong Kong Special Administrative Region, said that “Hong Kong has traditionally been characterized as having freedom of speech and freedom of press, but that an unintended consequence of emphasizing political ‘correctness’ is to limit the space for such freedom of expression.”[118]

In New Zealand, controversies over PC surfaced during the 1990s regarding the social studies school curriculum.[119][120]

According to ThinkProgress, the “ongoing conversation about P.C. often relies on anecdotal evidence rather than data”.[121] In 2014, researchers at Cornell University reported that political correctness increased creativity in mixed-sex work teams,[122] saying “the effort to be P.C. can be justified not merely on moral grounds but also by the practical and potentially profitable consequences.”[121]

The term “politically correct”, with its suggestion of Stalinist orthodoxy, is spoken more with irony and disapproval than with reverence. But, across the country the term “P.C.”, as it is commonly abbreviated, is being heard more and more in debates over what should be taught at the universities.

More:

Political correctness – Wikipedia

Posted in Political Correctness | Comments Off on Political correctness – Wikipedia

Free Speech: Ten Principles for a Connected World …

Posted: October 27, 2016 at 11:59 am

Admirably clear, . . . wise, up-to-the-minute and wide-ranging. . . . Free Speech encourages us to take a breath, look hard at the facts, and see how well-tried liberal principles can be applied and defended in daunting new circumstances.–Edmund Fawcett, New York Times Book Review

A major piece of cultural analysis, sane, witty and urgently important. Timothy Garton Ash exemplifies the robust civility he recommends as an antidote to the pervasive unhappiness, nervousness and incoherence around freedom of speech, rightly seeing the basic challenge as how we create a cultural and moral climate in which proper public argument is possible and human dignity affirmed.–Rowan Williams, Master of Magdalene College, Cambridge, and former Archbishop of Canterbury

Timothy Garton Ash aspires to articulate norms that should govern freedom of communication in a transnational world. His work is original and inspiring. Free Speech is an unfailingly eloquent and learned book that delights as well as instructs.–Robert Post, Dean and Sol & Lillian Goldman Professor of Law, Yale Law School

“A thorough and well-argued contribution to the quest for global free speech norms.”–Kirkus Reviews

“There are still countless people risking their lives to defend free speech and struggling to make lonely voices heard in corners around the world where voices are hard to hear. Let us hope that this book will bring confidence and hope to this world-as-city. I believe it will exert great influence.”–Murong Xuecun, author of Leave Me Alone: A Novel of Chengdu

“Garton Ash impresses with fact-filled, ideas-rich discussion that is routinely absorbing and illuminating.”–Malcolm Forbes, The American Interest

“Particularly timely. . . . Garton Ash argues forcefully that . . . there is an increasing need for freer speech . . . A powerful, comprehensive book.”–Economist

Timothy Garton Ash rises to the task of directing us how to live civilly in our connected diversity.–John Lloyd, Financial Times

Free Speech is a resource, a weapon, an encyclopedia of anecdote, example and exemplum that reaches toward battling restrictions on expression with mountains of data, new ideas, liberating ideas.–Diane Roberts, Prospect

Illuminating and thought-provoking. . . . [Garton Ash’s] larger project is not merely to defend freedom of expression, but to promote civil, dispassionate discourse, within and across cultures, even about the most divisive and emotive subjects.–Faramerz Dabhoiwala, The Guardian

“Timothy Garton Ash’s new book Free Speech: Ten Principles for a Connected World is a rare thing: a worthwhile contribution to a debate without two developed sides. Ash does an excellent job laying out the theoretical and practical bases for the western liberal positions on free speech.”–Malcolm Harris, New Republic

“An informative and bracing defense of free speech liberalism in the Internet age . . . In a world where free speech can never be taken for granted, Garton Ash’s free speech liberalism is a good place to start any discussion.”–David Luban, New York Review of Books

See the article here:
Free Speech: Ten Principles for a Connected World …

Posted in Free Speech | Comments Off on Free Speech: Ten Principles for a Connected World …

Freedom of Speech Essay – 2160 Words – StudyMode

Posted: October 15, 2016 at 5:23 am

Freedom of Speech

With varying opinions and beliefs, our society needs to have unlimited freedom to speak about anything and everything that concerns us in order to continually improve our society. Those free speech variables would be speech that creates a positive, and not negative, scenario in both the long term and the short term. Dictionary.com defines freedom of speech as “the right of people to express their opinions publicly without governmental interference, subject to the laws against libel, incitement to violence or rebellion, etc.” Freedom of speech is also known as free speech or freedom of expression. It is called freedom of expression because a person’s beliefs and thoughts can be expressed in ways other than speech, such as art, writing, songs, and other forms of expression. If speaking freely and expressing ourselves freely is supposed to be without any consequence, then why are there constant lawsuits and consequences for people who do? Freedom of speech and freedom of expression should be exactly what they mean. Although most people believe that they can speak about anything without there being consequences, this is very untrue. One of those things that has consequences is speaking about the president in such a negative way that it raises red flags about your intentions. Because of the high terrorist alerts, people have to limit what they say about bombs, 9/11, and anything they may say out of anger about our government or country. In the documentary Fahrenheit 9/11, Michael Moore spoke of a man who went to his gym and had a conversation with some of his gym buddies in a joking way. He made a joke about George W. Bush bombing us in oil profits. The next morning the FBI was at his front door because someone had reported what he freely spoke. Although the statements might have been derogatory, they were still his opinion, and he had a right to say whatever he wanted to about the president.
In the past seven years there have been laws made that have obstructed our freedom of speech and our right to privacy. Many of us have paused in recent years when having a conversation because we are afraid of being eavesdropped on. Even the eavesdropping would not be a problem if it were not for fear that there would be some legal action taken because of what you say. As mentioned in TalkLeft about the awkwardness in our current-day conversations: “We stop suddenly, momentarily afraid that our words might be taken out of context, then we laugh at our paranoia and go on. But our demeanor has changed, and our words are subtly altered. This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein’s Iraq. And it’s our future as we allow an ever-intrusive eye into our personal, private lives.” Because of tighter security and defense by the United States, there have been visible and invisible changes to the meaning of freedom of speech and expression. One wrong word could lead to a disastrous consequence.

Another topic that has been limited for a long period of time is religion. Speaking about religion in certain places is severely frowned upon. One of those places is school. For as long as I can remember, schools have had a rule that certain things related to religion could not be spoken of, and anyone who did could face consequences. As a young child I could never understand why students and staff members could not openly express their love for God. I also thought that prayer was not permitted in schools, when in fact it is: prayers are permitted in school, but not in classrooms during class time. Also, wearing religious symbols or clothing is banned in schools. If we are free to speak our thoughts and feelings, then how are we banned from doing these things? It is like saying that we are free to speak whatever we want, but we may not say anything. In the article A…


See more here:
Freedom of Speech Essay – 2160 Words – StudyMode

Posted in Freedom of Speech | Comments Off on Freedom of Speech Essay – 2160 Words – StudyMode

Debate: Freedom of Speech | Debate.org

Posted: at 5:23 am

To begin, I am very glad that you, Mdal, joined my debate. It appears that your argument appeals to logic, which is, in my opinion, the most persuasive type of argument. I will primarily be appealing to logic as well, though I will also touch on the ideal of value, as it is one of the main moral reasons I support this idea. I have also adapted the format of my arguments to suit your style.

Voltaire was an Enlightenment thinker regarded as having as intuitive and influential a mind as Montesquieu, Rousseau, and Locke. All were influential people who held beliefs that influenced the framers of the Constitution, and all of them created ideals that support and inform my own belief in restricting the first amendment rights of hate groups gathering in public areas.

I agree with your definition of what the constitution is advancing us towards, “a stable, liberty driven, peaceful, prosperous state”, and would in turn like to define hate groups as any groups that gather with the intention of breeding fear, terror, hate, or violence towards any particular group of people (defined as a group of similar race, religion, or belief [such as sexual orientation]). More specifically, I will be focusing on, and discussing, the two groups you mentioned, the Ku Klux Klan and the Aryan Brotherhood.

Now, before I begin my own arguments, I will answer your question: “who gets to say what is ok and what isn’t?”

I have long meditated in search of a proper way for our nation to adapt to such a monumental change as I have proposed. The only way that I could think of was to add a fourth branch to our current system of checks and balances. This branch would be in charge of adapting the constitution to better suit the nation as it evolves (including any exceptions the members of this branch deem necessary to create). It would have power equal to the executive, legislative and judicial branches, and its adjustments would be checked by both the legislative branch (requiring a majority vote, as opposed to the current two-thirds vote necessary to create an amendment) and the judicial branch, to make sure that any and all changes and exceptions created by this new branch follow the main ideals that are upheld within our nation and do not violate the main intentions of the framers’ ideals. I realize that this is also a very controversial topic, and would love to hear any and all concerns you have regarding this issue; however, I do not want this to distract us from the main topic of our debate.

Rebuttal #1: In response to the “slippery-slope” argument Logic: The system of checks and balances was created in order to stop one particular group from gaining power. Adapting this system by creating another branch should quiet any worries you had about the “slippery-slope” that may occur, as the extent of the branch’s power will be moderated by two other branches, the Legislative and the Judicial. Therefore, the new branch will not be able to abuse this power, and, because of these restrictions, it would not be able to quiet the entire “market place of ideas.”

Rebuttal #2: In response to the argument that this will limit the market place of ideas Logic: You brought up the argument that if we allow bad ideas to mix with good ideas, then the good ideas will “rise to the top.” In response to this, I would like to bring up the case of Osama Bin Laden, a terrorist who has what are commonly assumed to be “bad ideas.” Because of Bin Laden’s influential abilities, his bad ideas were able to rise above the good ideas, drew a great influx of new recruits to terrorist causes, and further led to the tragic destruction of the World Trade Center in 2001.

I am in no way saying that the KKK or the Aryan Brotherhood has power equal to terrorists, but I am instead proposing that they have similar bad ideas focused on fear and hatred towards a group of people. If the KKK were to gain a leader as influential (horrendous, but influential nonetheless) as Osama Bin Laden, who’s to say whether or not our current small national terrorist group, the KKK, would turn into a world-wide terrorist organization such as that created by Osama Bin Laden?

It is better to regulate the public meetings of these organizations now, as opposed to later when their power may exceed that of the government they are encompassed by.

Rebuttal #3: In response to the argument that Free speech keeps our government accountable. Logic: As the government is not a group of people regulated by race, religion, or belief (refer to definition of groups of people). And the branch will only have the power to regulate hate groups from publicly discussing (note I am not restricting their right to gather in privacy, purely in public) their ideas, the proposition will have no effect on those who wish to speak out against the government.

Now onto my main argument:

Argument: We are currently not fully acknowledging people’s natural rights Logic: According to the natural rights originally proposed, and supported by enlightenment thinkers such as Locke, Montesquieu, and Rousseau all people are born with the right to live his/her life any way he/she likes without causing physical harm to another individual, directly or indirectly.

What I question within this right is the restriction, “without causing physical harm to another individual, directly or indirectly.” I concede that I am working under the assumption that hate groups gather with a common goal to assert their superiority (through violence or terror) over a different group of people. I also concede that I work under the assumption that mental harm can become so intense that it can eventually harm a person physically (I only state this because this was not common knowledge around the time of the enlightenment, and therefore was not included in their right.) I believe that these are fairly common assumptions, and therefore will continue with my argument. If we allow groups that have a goal of asserting superiority over a specific group of people, whether they currently act upon this goal, or whether they plan on accomplishing this goal in the future, they either directly or indirectly threaten the safety of others.

I also could go on, however do not wish to state all of my arguments in the first round of our five round discussion.

Thank you again for accepting this debate, so far it proves to be quite promising.

I will first respond to Tsmart’s rebuttals to my 3 opening arguments; from there I will counter Tsmart’s single argument; finally, I must respond to the possible creation of a 4th branch of government as the actor created by Tsmart in this case. I too do not want this debate dramatically sidetracked by a debate about the actor who will create the proposed new laws set forth by Tsmart. However, as he uses this new 4th branch as an answer to my 3rd argument, it has become very important to the core of this debate and will thus be discussed when answering Tsmart’s first rebuttal.

With this signposting finished, let’s get to some arguments.

Rebuttal #1: Tsmart’s rebuttal assures us that a 4th branch of government, whose sole job is to interpret freedom of speech, will decide what is and what is not allowable under our new laws which limit certain types of speech. Tsmart’s exact description of what the 4th branch of government would be is: “This branch would be in charge of adapting the constitution to better suit the nation as it evolves (including any exceptions the members of this branch deem necessary to create.) They would have equal power to the executive, legislative and judicial branches, and would their adjustments would be checked by both the legislative branch (requiring a majority vote as opposed to the current two thirds vote necessary to create an amendment) and the judicial branch to make sure that any and all changes and exceptions created by this new branch follow the main ideals that are upheld within our nation, and do not violate the main intentions of the framers ideals.”

My response: Whooooooo eeee! Where to start on this one?

To begin with, it seems at first blush that the 4th branch is going to usurp what has been the power of the Supreme Court, namely interpreting the constitution. Upon closer examination, however, it seems that Tsmart has actually created a body whose job is much more than merely interpreting the constitution: it is a body whose job is to CHANGE the constitution. So basically this new body is invented to abridge and thus destroy the power of the 1st amendment (one of the most important amendments in our constitution, one that has been upheld through countless court cases), take the power of the states and Congress (the governmental structures that usually keep all of the checks and balances on the creation of new amendments), and give it all to this new 4th branch. Basically we have reorganized the very makeup of American government for the express reason of censoring people. In a cost-benefit analysis, the cost of destabilizing the government by shifting the powers set by our founding fathers to a new, strange, and untested power structure, for the possibly non-existent benefit of censoring hate groups, seems dramatically unbalanced. Under this cost-benefit analysis, any marginal benefits we might get from censorship are DRAMATICALLY outweighed by the dangers of radically upsetting our governmental structure, which shows that the CON’s proposed solutions just aren’t worth the trouble.

Rebuttal #2: In response to my argument for an open Market Place of Ideas (something we have now but will lose if we lose Freedom of Speech), Tsmart brings up the example of Osama Bin Laden and how his ideas have risen to the top in some places and beat out better ideas, so we should instead keep these sorts of ideas out of the public’s purview.

My Response: Tsmart actually just proved my point by using the example of Osama Bin Laden. Tell me, readers (and Tsmart), have you been convinced by listening to Bin Laden on our television? It wasn’t hidden from us. Everyone in the US is allowed to listen to what Bin Laden has to say, yet HERE in the US, where the market place of ideas flourishes, Bin Laden’s brand of extremism hasn’t gained a foothold. The places where he is much more popular don’t have the myriad viewpoints we have access to here in the States; instead, in places like Iran, Saudi Arabia, Afghanistan, Pakistan and other nations in the Middle East, we find a correlation: the freer the speech, the less extremist the views in the country. This is because when the market place of ideas is allowed to work, people are able to make well-informed decisions, and that usually leads them away from extremist views and towards the center ground when considering an issue. Thus we can see how Tsmart’s example proves exactly how important the market place of ideas really is and how important it is to keep from abridging the first amendment, which is SO key to keeping the market place of ideas viable.

Rebuttal #3: I stated that freedom of speech is a huge check on the government. Tsmart says: “…the branch will only have the power to regulate hate groups from publicly discussing (note I am not restricting their right to gather in privacy, purely in public) their ideas, the proposition will have no effect on those who wish to speak out against the government.” My Response: What about the hate groups, Tsmart? What happens if an incredibly racist, cruel, mean, hate-filled Neo Nazi has a well-conceived critique of the government, but wants to express this brilliant critique in hate-filled language? His speech, though offensive to you and me, will also give a benefit to society, because he will point out something about the government which needs to be looked at. Re-reading your quote, you say that the hate group will be unable to discuss their ideas in public; what if their ideas have to do with the government? Is this a new exception? Are hate groups allowed to talk about the government? You see how restricting even a small part of freedom of speech has huge ramifications for everyone in our society? Rather than risk losing one of the best checks on our government (freedom of speech), we should play it safe and not try to silence people we don’t agree with.

On to Tsmart’s argument of expanded natural rights. His claim is that if people are railed against in public by hate groups, they may be harmed mentally, and that may eventually lead to physical harm. Thus we should protect these minorities and targeted groups from the hate groups.

Response to Tsmart’s Argument: Tsmart, it seems as though you have come to an overreaching understanding of what the government is supposed to do in situations like this. Your solution is to take preemptive action by taking away freedoms from people who might threaten others. However it seems as though the goal you are trying to accomplish is to make certain that the targeted minority groups ARE safe as well as help them FEEL safe. This goal can be met much better by an investment in anti-hate laws which will increase the punishment for hate crimes, or better yet you could increase the capabilities of the police and thus keep extremist groups like the hate organizations in line. However abridging freedom of speech is not the best, or even a decent, way of defending targeted minority groups.

Read more:
Debate: Freedom of Speech | Debate.org

Posted in Freedom of Speech | Comments Off on Debate: Freedom of Speech | Debate.org

History of artificial intelligence – Wikipedia, the free …

Posted: August 30, 2016 at 11:03 pm

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with “an ancient wish to forge the gods.”

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The Turing test was proposed by British mathematician Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: “I propose to consider the question, ‘Can machines think?'” The term ‘Artificial Intelligence’ was created at a conference held at Dartmouth College in 1956.[2] Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created artificial intelligence field with the Logic Theory Machine (1956), and the General Problem Solver in 1957.[3] In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000.[4] John McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.[5]

In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 80s the investors became disillusioned and withdrew funding again.

McCorduck (2004) writes “artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized,” expressed in humanity’s myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion’s Galatea.[7] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān’s Takwin, Paracelsus’ homunculus and Rabbi Judah Loew’s Golem.[8] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots), and speculation, such as Samuel Butler’s “Darwin among the Machines.” AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[11] Hero of Alexandria,[12] Al-Jazari and Wolfgang von Kempelen.[14] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion. Hermes Trismegistus wrote that “by discovering the true nature of the gods, man has been able to reproduce it.”[15][16]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or “formal”, reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), the Muslim mathematician al-Khwārizmī (who developed algebra and gave his name to “algorithm”) and European scholastic philosophers such as William of Ockham and Duns Scotus.[17]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[18] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all the possible knowledge.[19] Llull’s work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[20]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[21] Hobbes famously wrote in Leviathan: “reason is nothing but reckoning”.[22] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that “there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate.”[23] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole’s The Laws of Thought and Frege’s Begriffsschrift. Building on Frege’s system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica in 1913. Inspired by Russell’s success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: “can all of mathematical reasoning be formalized?”[17] His question was answered by Gödel’s incompleteness proof, Turing’s machine and Church’s lambda calculus.[17][24] Their answer was surprising in two ways.

First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[17][26]
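The machine described above can be sketched in a few lines of code. The rule-table format and function names here are illustrative, not any historical notation; this toy machine simply flips every bit on its tape and halts at the first blank.

```python
# A minimal Turing machine simulator: (state, symbol) -> (new state, write, move).
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if head == len(tape):          # extend the tape with a blank cell
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Rules for bit inversion: rewrite 0 as 1 and vice versa, moving right.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(invert, "0110"))  # prints "1001"
```

Despite its simplicity, adding more states and symbols to the rule table is enough, in principle, to express any mechanical process of deduction.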

Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”.[27] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)

The first modern computers were the massive code breaking machines of the Second World War (such as Z3, ENIAC and Colossus). The latter two of these machines were based on the theoretical foundation laid by Alan Turing[28] and developed by John von Neumann.[29]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 30s, 40s and early 50s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener’s cybernetics described control and stability in electrical networks. Claude Shannon’s information theory described digital signals (i.e., all-or-nothing signals). Alan Turing’s theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[30]

Examples of work in this vein include robots such as W. Grey Walter’s turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[31]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[32] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[33] Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[34] He noted that “thinking” is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible and the paper answered all the most common objections to the proposition.[35] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[36] Arthur Samuel’s checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[37] Game AI would continue to be used as a measure of progress in AI throughout its history.

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[38]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the “Logic Theorist” (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica, and find new and more elegant proofs for some.[39] Simon said that they had “solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind.”[40] (This was an early statement of the philosophical position John Searle would later call “Strong AI”: that machines can contain minds just as human bodies do.)[41]

The Dartmouth Conference of 1956[42] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”.[43] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[44] At the conference Newell and Simon debuted the “Logic Theorist” and McCarthy persuaded the attendees to accept “Artificial Intelligence” as the name of the field.[45] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[46]

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply “astonishing”:[47] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such “intelligent” behavior by machines was possible at all.[48] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[49] Government agencies like ARPA poured money into the new field.[50]

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called “reasoning as search”.[51]

The principal difficulty was that, for many problems, the number of possible paths through the “maze” was simply astronomical (a situation known as a “combinatorial explosion”). Researchers would reduce the search space by using heuristics or “rules of thumb” that would eliminate those paths that were unlikely to lead to a solution.[52]
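The “reasoning as search” paradigm can be illustrated with a short sketch (the maze, helper names and heuristic here are invented for the example): a depth-first search that backtracks at dead ends, with candidate moves ordered by a Manhattan-distance rule of thumb so that promising paths are tried first.

```python
# Depth-first search through a grid maze (0 = open, 1 = wall) with backtracking.
# Moves are ordered by a Manhattan-distance heuristic toward the goal.
def solve_maze(maze, start, goal):
    rows, cols = len(maze), len(maze[0])

    def heuristic(cell):  # rule of thumb: prefer cells closer to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    def search(cell, visited):
        if cell == goal:
            return [cell]
        visited.add(cell)
        r, c = cell
        for nr, nc in sorted([(r+1, c), (r-1, c), (r, c+1), (r, c-1)], key=heuristic):
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if maze[nr][nc] == 1 or (nr, nc) in visited:
                continue
            path = search((nr, nc), visited)
            if path:                  # success: extend the path back to the root
                return [cell] + path
        return None                   # dead end: backtrack

    return search(start, set())

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(solve_maze(maze, (0, 0), (0, 2)))
```

The heuristic does not change what can be found, only the order in which paths are explored; on large problems that ordering is what keeps the combinatorial explosion at bay.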

Newell and Simon tried to capture a general version of this algorithm in a program called the “General Problem Solver”.[53] Other “searching” programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter’s Geometry Theorem Prover (1958) and SAINT, written by Minsky’s student James Slagle (1961).[54] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[55]

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow’s program STUDENT, which could solve high school algebra word problems.[56]

A semantic net represents concepts (e.g. “house”, “door”) as nodes and relations among concepts (e.g. “has-a”) as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[57] and the most successful (and controversial) version was Roger Schank’s Conceptual dependency theory.[58]
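The data structure itself is simple; a minimal sketch (class and relation names invented for the example) stores labeled links between concept nodes and answers queries by filtering them:

```python
# A toy semantic net: concepts as nodes, labeled relations as directed links.
from collections import defaultdict

class SemanticNet:
    def __init__(self):
        self.links = defaultdict(list)  # node -> [(relation, node), ...]

    def add(self, source, relation, target):
        self.links[source].append((relation, target))

    def related(self, source, relation):
        return [t for r, t in self.links[source] if r == relation]

net = SemanticNet()
net.add("house", "has-a", "door")
net.add("house", "has-a", "roof")
net.add("house", "is-a", "building")
print(net.related("house", "has-a"))  # prints "['door', 'roof']"
```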

Joseph Weizenbaum’s ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[59]
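The mechanism ELIZA used can be caricatured in a few lines. This is not Weizenbaum’s script language, only a sketch of the idea (the patterns and the “reflection” table are invented): match the input against templates, swap pronouns, and echo the user’s own words back as a question.

```python
import re

# ELIZA-style pattern matching: a few regex rules plus pronoun "reflection".
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r".*"), "Please tell me more."),   # canned fallback
]

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am sad about my job"))  # prints "Why do you say you are sad about your job?"
```

Nothing here models meaning; the apparent understanding is entirely an artifact of the user reading intent into the echoed text.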

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a “blocks world,” which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[60]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented “constraint propagation”), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd’s SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[61]

The first generation of AI researchers made these predictions about their work:

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the “AI Group” founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[66] DARPA made similar grants to Newell and Simon’s program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[67] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[68] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[69]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should “fund people, not projects!” and allowed researchers to pursue whatever directions might interest them.[70] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[71] but this “hands off” approach would not last.

In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[72] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky’s devastating criticism of perceptrons.[73] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[74]

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, “toys”.[75] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[76]

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[84] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its “grandiose objectives” and led to the dismantling of AI research in that country.[85] (The report specifically mentioned the combinatorial explosion problem as a reason for AI’s failings.)[86] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[87] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. “Many researchers were caught up in a web of increasing exaggeration.”[88] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund “mission-oriented direct research, rather than basic undirected research”. Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[89]

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel’s incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[90] Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little “symbol processing” and a great deal of embodied, instinctive, unconscious “know how”.[91][92] John Searle’s Chinese Room argument, presented in 1980, attempted to show that a program could not be said to “understand” the symbols that it uses (a quality called “intentionality”). If the symbols have no meaning for the machine, Searle argued, then the machine cannot be described as “thinking”.[93]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference “know how” or “intentionality” made to an actual computer program. Minsky said of Dreyfus and Searle “they misunderstand, and should be ignored.”[94] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers “dared not be seen having lunch with me.”[95] Joseph Weizenbaum, the author of ELIZA, felt his colleagues’ treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus’ positions, he “deliberately made it plain that theirs was not the way to treat a human being.”[96]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[97]

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that “the perceptron may eventually be able to learn, make decisions, and translate languages.” An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert’s 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt’s predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[73]
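Rosenblatt’s learning rule is short enough to sketch directly (the function names and learning-rate value are illustrative): nudge each weight toward every misclassified example. The rule reliably learns linearly separable functions like AND, but, as Minsky and Papert showed, no choice of weights can ever make a single perceptron compute XOR.

```python
# The perceptron learning rule on two inputs: adjust weights by the error
# on each misclassified sample. Learns AND, which is linearly separable.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            error = target - predicted
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in AND])  # prints "[0, 0, 0, 1]"
```

Running the same loop on the XOR truth table never converges, which is precisely the limitation the book made famous.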

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[98] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[99] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and this soon led to a collaboration with the French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[100] Prolog uses a subset of logic (Horn clauses, closely related to “rules” and “production rules”) that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum’s expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[101]
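A Horn clause has at most one conclusion, which is what makes deduction over them tractable. Prolog itself answers queries by backward chaining with unification; as a rough illustration of the same fragment of logic, here is a propositional forward-chaining sketch (the rule format and fact names are invented for the example):

```python
# Horn clauses as (body, head) pairs: forward chaining repeatedly fires any
# rule whose body is satisfied, until no new facts can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in facts and all(f in facts for f in body):
                facts.add(head)
                changed = True
    return facts

rules = [
    (("socrates_is_a_man",), "socrates_is_mortal"),
    (("socrates_is_mortal", "socrates_is_a_man"), "socrates_will_die"),
]
derived = forward_chain({"socrates_is_a_man"}, rules)
print("socrates_will_die" in derived)  # prints "True"
```

Each pass adds at most one new fact per rule, so the loop terminates after a number of passes bounded by the number of rules, in contrast to the astronomical search of unrestricted resolution.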

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[102] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems, not machines that think as people do.[103]

Among the critics of McCarthy’s approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like “story understanding” and “object recognition” that required a machine to think like a person. In order to use ordinary concepts like “chair” or “restaurant” they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that “using precise language to describe essentially imprecise concepts doesn’t make them any more precise.”[104] Schank described their “anti-logic” approaches as “scruffy”, as opposed to the “neat” paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[105]

In 1975, in a seminal paper, Minsky noted that many of his fellow “scruffy” researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be “logical”, but these structured sets of assumptions are part of the context of everything we say and think. He called these structures “frames”. Schank used a version of frames he called “scripts” to successfully answer questions about short stories in English.[106] Many years later object-oriented programming would adopt the essential idea of “inheritance” from AI research on frames.
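The bird example above, including the inheritance idea that object-oriented programming later adopted, can be sketched with a tiny frame structure (the class and slot names are invented for the illustration): each frame holds default slot values and falls back to its parent frame when a slot is missing.

```python
# A frame: a bundle of named slots with default values, plus inheritance
# from a parent frame. A lookup falls back to the parent, and a child
# frame can override a default, which is why deductions are not "logical".
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

bird = Frame("bird", flies=True, eats="worms")
penguin = Frame("penguin", parent=bird, flies=False)  # override the default
print(penguin.get("flies"), penguin.get("eats"))  # prints "False worms"
```

The penguin frame inherits “eats worms” from the bird frame while overriding “flies”, exactly the kind of defeasible default that formal logic handled poorly.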

In the 1980s a form of AI program called “expert systems” was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[107]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[108]

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[109] Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[110]

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. “AI researchers were beginning to suspect, reluctantly (for it violated the scientific canon of parsimony), that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,”[111] writes Pamela McCorduck. “[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay”.[112] Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s.[113]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[114]

The chess-playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed at Carnegie Mellon University; Deep Thought’s development paved the way for Deep Blue.[115]

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[116] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[117]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or “MCC”) to fund large scale projects in AI and information technology.[118][119] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[120]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a “Hopfield net”) could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called “backpropagation” (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[119][121]
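A Hopfield net’s behavior as a content-addressable memory can be demonstrated in miniature (the pattern, network size and function names below are arbitrary choices for the sketch): Hebbian weights store a pattern of +1/-1 units, and repeated thresholded updates pull a corrupted input back toward the stored memory.

```python
# A tiny Hopfield net: Hebbian weights store a +/-1 pattern, and repeated
# thresholded updates recover the stored pattern from a corrupted input.
def train_hopfield(patterns, n):
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                    # no self-connections
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5):
    state = list(state)
    for _ in range(steps):
        for i in range(len(state)):           # asynchronous unit updates
            total = sum(w[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if total >= 0 else -1
    return state

pattern = [1, -1, 1, -1, 1, -1]
w = train_hopfield([pattern], 6)
noisy = [1, -1, -1, -1, 1, -1]                # one unit flipped
print(recall(w, noisy) == pattern)  # prints "True"
```

The network settles into the stored pattern rather than computing it, which is what made Hopfield’s 1982 analysis feel like a “completely new way” of processing information.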

The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[119][122]

The business community’s fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

The term “AI winter” was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[123] Their fears were well founded: in the late 80s and early 90s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[124]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were “brittle” (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[125]

In the late 80s, the Strategic Computing Initiative cut funding to AI “deeply and brutally.” New leadership at DARPA had decided that AI was not “the next wave” and directed funds towards projects that seemed more likely to produce immediate results.[126]

By 1991, the impressive list of goals penned in 1981 for Japan’s Fifth Generation Project had not been met. Indeed, some of them, like “carry on a casual conversation” had not been met by 2010.[127] As with other AI projects, expectations had run much higher than what was actually possible.[127]

In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[128] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec’s paradox). They advocated building intelligence “from the bottom up.”[129]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy’s logic and Minsky’s frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr’s work would be cut short by leukemia in 1980.)[130]

In a 1990 paper, “Elephants Don’t Play Chess,”[131] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since “the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.”[132] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[133]

The field of AI, now more than a half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI’s failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of “artificial intelligence”.[134] AI was both more cautious and more successful than it had ever been.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[135] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[136]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[137] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and obeying all traffic laws.[138] In February 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[139]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of modern computers.[140] In fact, Deep Blue’s computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[141] This dramatic increase is described by Moore’s law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of “raw computer power” was slowly being overcome.

A new paradigm called “intelligent agents” became widely accepted during the 90s.[142] Although earlier researchers had proposed modular “divide and conquer” approaches to AI,[143] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI.[144] When the economist’s definition of a rational agent was married to computer science’s definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are “intelligent agents”, as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as “the study of intelligent agents”. This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[145]
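The perceive-then-act loop described above can be sketched in code. The class and method names below are purely illustrative (they come from no particular textbook or library); the point is that even a trivial program fits the agent definition:

```python
# A minimal sketch of the intelligent-agent abstraction: a system that
# perceives its environment and acts to maximize its chances of success.
class ThermostatAgent:
    """A trivially simple agent that solves one specific problem."""

    def __init__(self, target: float):
        self.target = target

    def perceive(self, temperature: float) -> float:
        # The percept is just the current sensor reading.
        return temperature

    def act(self, percept: float) -> str:
        # Choose the action most likely to move the environment
        # toward the goal state (temperature near the target).
        if percept < self.target - 0.5:
            return "heat"
        if percept > self.target + 0.5:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=20.0)
print(agent.act(agent.perceive(17.0)))  # heat
```

Under the paradigm, this thermostat, a chess program, and a firm of human beings are all “intelligent agents”; they differ only in the richness of their percepts and actions.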

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell’s SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[144][146]

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[147] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous “scientific” discipline. Russell & Norvig (2003) describe this as nothing less than a “revolution” and “the victory of the neats”.[148][149]

Judea Pearl’s highly influential 1988 book[150] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for “computational intelligence” paradigms like neural networks and evolutionary algorithms.[148]
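The flavor of the probabilistic turn can be shown with the smallest possible Bayesian network: two nodes, Rain and WetGrass, with inference done by nothing more than Bayes’ rule. The probabilities are invented for illustration:

```python
# A two-node Bayesian network: Rain -> WetGrass.
# All numbers are hypothetical, chosen only to illustrate the inference.
p_rain = 0.2
p_wet_given_rain = 0.9
p_wet_given_dry = 0.1

# Marginalize to get P(wet), then apply Bayes' rule for P(rain | wet).
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet

print(round(p_rain_given_wet, 3))
```

Observing wet grass raises the probability of rain from 0.2 to roughly 0.69. Real Bayesian networks chain many such nodes together, but the update rule is exactly this.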

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems[151] and these solutions proved to be useful throughout the technology industry,[152] such as data mining, industrial robotics, logistics,[153] speech recognition,[154] banking software,[155] medical diagnosis[155] and Google’s search engine.[156]

The field of AI receives little or no credit for these successes. Many of AI’s greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[157] Nick Bostrom explains: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”[158]

Many AI researchers in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they considered their field to be fundamentally different from AI, but also because the new names helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continue to haunt AI research, as the New York Times reported in 2005: “Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”[159][160][161]

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[162]

Marvin Minsky asks, “So the question is why didn’t we get HAL in 2001?”[163] Minsky believes that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blames the qualification problem.[164] For Ray Kurzweil, the issue is computer power and, using Moore’s Law, he predicts that machines with human-level intelligence will appear by 2029.[165] Jeff Hawkins argues that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[166] There are many other explanations, and for each there is a corresponding research program underway.



History of artificial intelligence – Wikipedia, the free …

Posted in Ai | Comments Off on History of artificial intelligence – Wikipedia, the free …

First Amendment – Watchdog.org

Posted: August 25, 2016 at 4:20 pm

By M.D. Kittle / August 14, 2016 / First Amendment, Free Speech, News, Power Abuse, Wisconsin / No Comments

There is a vital need for citizens to have an effective remedy against government officials who investigate them principally because of their partisan affiliation and political speech.

By M.D. Kittle / August 8, 2016 / Commentary, First Amendment, Free Speech, National, Wisconsin / No Comments

That’s precisely what I expected from a party whose platform includes rewriting the First Amendment

By M.D. Kittle / August 3, 2016 / First Amendment, Free Speech, News, Power Abuse, Wisconsin / No Comments

The question that arises is do conservatives have civil rights before Judge Lynn Adelman?

By M.D. Kittle / August 2, 2016 / First Amendment, News, Power Abuse, Wisconsin / No Comments

Now, years after defendants unlawfully seized and catalogued millions of our sensitive documents, we ask the court to vindicate our rights under federal law.

By M.D. Kittle / July 25, 2016 / First Amendment, National, News, Politics & Elections, Wisconsin / No Comments

Moore has uttered some of the more inflammatory, ill-informed statements in Congress.

By M.D. Kittle / July 14, 2016 / First Amendment, Judiciary, News, Power Abuse, Wisconsin / No Comments

“The process continues to be the punishment for people who were found wholly innocent of any wrongdoing,” she said.

First Amendment – Watchdog.org

Posted in First Amendment | Comments Off on First Amendment – Watchdog.org

Trump: Maybe ‘2nd Amendment People’ Can Stop Clinton’s …

Posted: August 10, 2016 at 9:08 pm

Republican presidential nominee Donald Trump raised eyebrows Tuesday when he suggested there is “nothing” that can be done to stop Hillary Clinton’s Supreme Court picks, except “maybe” the “Second Amendment people.”

“Hillary wants to abolish, essentially abolish, the Second Amendment,” Trump said to the crowd of supporters gathered in Trask Coliseum at the University of North Carolina Wilmington. “If she gets to pick her judges, nothing you can do, folks.

“Although the Second Amendment people, maybe there is. I don’t know.”

After the speech, Clinton’s campaign seized on the remarks.

“This is simple: what Trump is saying is dangerous,” read a statement from campaign manager Robby Mook. “A person seeking to be president of the United States should not suggest violence in any way.”

ABC News reached out to the Secret Service for response to Trump’s comment, and the agency said it was aware of the remarks.

The Trump campaign insisted the candidate’s words referred to the power of “Second Amendment people” to unify.

“It’s called the power of unification. 2nd Amendment people have amazing spirit and are tremendously unified, which gives them great political power,” read a statement, titled “Trump Campaign Statement Against Dishonest Media,” from senior communications adviser Jason Miller.

In a tweet Tuesday night, Trump tried to explain his remarks.

And in an interview with Fox News Tuesday night, Trump told the network: “This is a strong, powerful movement, the Second Amendment” and called the NRA “terrific people.”

“There can be no other interpretation,” he said of his earlier remarks. “I mean, give me a break.”

Trump’s running mate Mike Pence rose to the candidate’s defense and said Trump was not insinuating that there should be violence against Clinton.

“What Donald Trump is clearly saying is that people who cherish that right, who believe that firearms in the hands of law-abiding citizens makes our communities more safe, not less safe, should be involved in the political process and let their voice be heard,” Pence said today in an interview with NBC10, a local Philadelphia TV station.

Clinton’s running mate, Virginia Sen. Tim Kaine, told reporters today that Trump’s comments “revealed this complete temperamental misfit with the character that’s required to do the job.”

“And in a nation, we gotta be pulling together, and countenancing violence is not something any leader should do,” Kaine said.

Connecticut Democratic Sen. Chris Murphy, who led a 15-hour filibuster in June to force a vote on gun control measures, took to Twitter to voice his displeasure with Trump’s comments.

“This isn’t play,” wrote Murphy. “Unstable people with powerful guns and an unhinged hatred for Hillary are listening to you, @realDonaldTrump.”

And Rep. Eric Swalwell, D-Calif., wrote in a tweet that because he believed Trump “suggested someone kill Sec. Clinton,” he was calling for a Secret Service investigation.

Trump: Maybe ‘2nd Amendment People’ Can Stop Clinton’s …

Posted in Second Amendment | Comments Off on Trump: Maybe ‘2nd Amendment People’ Can Stop Clinton’s …

Golden Rule – New World Encyclopedia

Posted: June 28, 2016 at 2:56 am

The Golden Rule is a cross-cultural ethical precept found in virtually all the religions of the world. Also known as the “Ethic of Reciprocity,” the Golden Rule can be rendered in either positive or negative formulations: most expressions take a passive form, as expressed by the Jewish sage Hillel: “What is hateful to you, do not to your fellow neighbor. This is the whole Law, all the rest is commentary” (Talmud, Shabbat 31a). In Christianity, however, the principle is expressed affirmatively by Jesus in the Sermon on the Mount: “Do unto others as you would have others do unto you” (Gospel of Matthew 7:12). This principle has for centuries been known in English as the Golden Rule in recognition of its high value and importance in both ethical living and reflection.

Arising as it does in nearly all cultures, the ethic of reciprocity is a principle that can readily be used in handling conflicts and promoting greater harmony and unity. Given the modern global trend of political, social, and economic integration and globalization, the Golden Rule of ethics may become even more relevant in the years ahead to foster inter-cultural and interreligious understanding.

Philosophers disagree about the nature of the Golden Rule: some have classified it as a form of deontological ethics (from the Greek deon, meaning “obligation”) whereby decisions are made primarily by considering one’s duties and the rights of others. Deontology posits the existence of a priori moral obligations, suggesting that people ought to live by a set of permanently defined principles that do not change merely as a result of a change in circumstances. However, other philosophers have argued that most religious understandings of the Golden Rule imply its use as a virtue toward greater mutual respect for one’s neighbor rather than as a deontological formulation. They argue that the Golden Rule depends on everyone’s ability to accept and respect differences because even religious teachings vary. Thus, many philosophers, such as Karl Popper, have suggested that the Golden Rule can be best understood in terms of what it is not (through the via negativa):

First, they note that the Golden Rule should not be confused with revenge, an eye for an eye, tit for tat, retributive justice or the law of retaliation. A key element of the ethic of reciprocity is that a person attempting to live by this rule treats all people, not just members of his or her in-group, with due consideration. The Golden Rule should also not be confused with another major ethical principle, often known as the Wiccan Rede, or liberty principle, which is an ethical prohibition against aggression. This rule is an ethical rule of “license” or “right”; that is, people can do anything they like as long as it does not harm others. This rule does not compel one to help the other in need. On the other hand, “the golden rule is a good standard which is further improved by doing unto others, wherever possible, as they want to be done by.”[1]

Lastly, the Golden Rule of ethics should not be confused with a “rule” in the semantic or logical sense. A logical loophole in the positive form of Golden “Rule” is that it would require a masochist to harm others, even without their consent, if that is what the masochist would wish for themselves. This loophole can be addressed by invoking a supplementary rule, which is sometimes called the Silver Rule. This states, “treat others in the way that they wish to be treated.” However, the Silver Rule may create another logical loophole. In a situation where an individual’s background or belief may offend the sentiment of the majority (such as homosexuality or blasphemy), the silver rule may imply ethical majority rule if the Golden Rule is enforced as if it were a law.

Under the ethic of reciprocity, a person of atheist persuasion may have a (legal) right to insult religion under the right of freedom of expression but, as a personal choice, may refrain from doing so in public out of respect for the sensitivity of the other. Conversely, a person of religious persuasion may refrain from taking action against such public display out of respect for the other’s right of freedom of speech. Conversely, the lack of mutual respect might mean that each side might deliberately violate the golden rule as a provocation (to assert one’s right) or as intimidation (to prevent the other from giving offense).

This understanding is crucial because it shows how to apply the golden rule. In 1963, John F. Kennedy ordered Alabama National Guardsmen to help admit two clearly qualified “Negro” students to the University of Alabama. In his speech that evening Kennedy appealed to every American:

Stop and examine his conscience about this and other related incidents throughout America…If an American, because his skin is dark, cannot eat lunch in a restaurant open to the public, if he cannot send his children to the best public school available, if he cannot vote for the public officials who will represent him, …. then who among us would be content to have the color of his skin changed and stand in his place? …. The heart of the question is …. whether we are going to treat our fellow Americans as we want to be treated.[2]

It could be argued that the ethics of reciprocity may replace all other moral principles, or at least that it is superior to them. Though this guiding rule may not explicitly tell one which actions or treatments are right or wrong, it can provide one with moral coherence: it is a consistency principle. One’s actions are to be consistent with mutual love and respect for one’s fellow humans.

A survey of the religious scriptures of the world reveals striking congruence among their respective articulations of the Golden Rule of ethics. Not only do the scriptures reveal that the Golden Rule is an ancient precept, but they also show that there is almost unanimous agreement among the religions that this principle ought to govern human affairs. Virtually all of the world’s religions offer formulations of the Golden Rule somewhere in their scriptures, and they speak in unison on this principle. Consequently, the Golden Rule has been one of the key operating ideas that has governed human ethics and interaction over thousands of years. Specific examples and formulations of the Golden Rule from the religious scriptures of the world are found below:

In Buddhism, the first of the Five Precepts (Panca-sila) of Buddhism is to abstain from destruction of life. The justification of the precept is given in chapter ten of the Dhammapada, which states:

Everyone fears punishment; everyone fears death, just as you do. Therefore do not kill or cause to kill. Everyone fears punishment; everyone loves life, as you do. Therefore do not kill or cause to kill.

According to the second of Four Noble Truths of Buddhism, egoism (desire, craving or attachment) is rooted in ignorance and is considered as the cause of all suffering. Consequently, kindness, compassion and equanimity are regarded as the untainted aspect of human nature.

Even though the Golden Rule is a widely accepted religious ethic, Martin Forward writes that the Golden Rule is itself not beyond criticism. His critique of the Golden Rule is worth repeating in full. He writes:

Two serious criticisms can be leveled against [the Golden Rule]. First of all, although the Golden Rule makes sense as an aspiration, it is much more problematic when it is used as a foundation for practical living or philosophical reflection. For example: should we unfailingly pardon murderers on the grounds that, if we stood in their shoes, we should ourselves wish to be pardoned? Many goodly and godly people would have problems with such a proposal, even though it is a logical application of the Golden Rule. At the very least, then, it would be helpful to specify what sort of a rule the Golden Rule actually is, rather than assuming that it is an unqualified asset to ethical living in a pluralistic world. Furthermore, it is not usually seen as the heart of religion by faithful people, but simply as the obvious starting point for a religious and humane vision of life. Take the famous story in Judaism recorded in the Talmud: Shabbat 31:

Forward’s argument continues:

Even assuming that the Golden Rule could be developed into a more nuanced pattern of behaving well in today’s world, there would still be issues for religious people to deal with. For whilst moral behavior is an important dimension of religion, it does not exhaust its meaning. There is a tendency for religious people in the West to play down or even despise doctrine, but this is surely a passing fancy. It is important for religious people in every culture to inquire after the nature of transcendence: its attitude towards humans and the created order; and the demands that it makes. People cannot sensibly describe what is demanded of them as important, without describing the source that wills it and enables it to be lived out. Besides, the world would be a safer place if people challenged paranoid and wicked visions of God (or however ultimate reality is defined) with truer and more generous ones, rather than if they abandoned the naming and defining of God to fearful and sociopathic persons (From the Inter-religious Dialogue article in The Encyclopedia of General Knowledge).

In other words, Forward warns religious adherents not to be satisfied with merely the Golden Rule of ethics that can be interpreted and used as a form of religious and ethical relativism, but to ponder the deeper religious impulses that lead to the conviction of the Golden Rule in the first place, such as the idea of love in Christianity.

Due to its widespread acceptance in the world’s cultures, it has been suggested that the Golden Rule may be related to innate aspects of human nature. In fact, game-theoretic analyses of the iterated Prisoner’s Dilemma have shown the principle of reciprocity to be a remarkably robust and mutually beneficial means of resolving conflict.[3] As it has touchstones in virtually all cultures, the ethic of reciprocity provides a universally comprehensible tool for handling conflictual situations. However, the logical and ethical objections presented above make the viability of this principle as a Kantian categorical imperative doubtful. In a world where sociopathy and religious zealotry exist, it is not always feasible to base one’s actions upon the perceived desires of others. Further, the Golden Rule, in modernity, has lost some of its persuasive power, after being diluted into a bland, secular precept through cloying e-mail forwards and newspaper cartoons. As Forward argues, perhaps the Golden Rule must be approached in its original religious context, as this context provides an ethical and metaphysical grounding for a belief in the ultimate power of human goodness.
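The game-theoretic point about reciprocity can be made concrete with a short simulation of the iterated Prisoner’s Dilemma, pitting a reciprocating strategy (“tit-for-tat”) against unconditional defection. This is a sketch using the standard textbook payoffs (T=5, R=3, P=1, S=0), not an implementation from any particular study:

```python
# Iterated Prisoner's Dilemma: reciprocity vs. unconditional defection.
# PAYOFF maps (my_move, their_move) -> (my_points, their_points).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each strategy sees the *opponent's* history
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): mutual cooperation
print(play(always_defect, always_defect))  # (10, 10): mutual defection
```

Two reciprocators earn 30 points each over ten rounds, while two defectors earn only 10 each, which is the mathematical core of the claim that reciprocity is mutually beneficial.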

Regardless of the above objections, modern trends of political, social, and economic globalization necessitate the development of understandable, codifiable and universally-accepted ethical guidelines. For this purpose, we (as a species) could certainly do worse than to rely upon the age-old, heuristic principle spelled out in the Golden Rule.

All links retrieved December 19, 2013.

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats. The history of earlier contributions by wikipedians is accessible to researchers here:

Note: Some restrictions may apply to use of individual images which are separately licensed.


Golden Rule – New World Encyclopedia

Posted in Golden Rule | Comments Off on Golden Rule – New World Encyclopedia

American Patriot Friends Network APFN

Posted: June 27, 2016 at 6:36 am

Then ‘MAKE SURE’ your vote is counted! http://www.votersunite.org/

Why did 65 US Senators break a solemn oath? Watch. Listen http://www.apfn.org/apfn/oath-of-office.htm

The Case for Impeachment C-Span2 Book TV 8/2/06 With Dave Lindorff and Barbara Oskansky Website: http://www.thiscantbehappening.net

HOW TO IMPEACH A PRESIDENT Includes 6 part videos: ‘The Case for Impeachment’ http://www.apfn.org/apfn/impeach_pres.htm

COINTELPRO, Provocateurs, Disinfo Agents.

Citizen’s Rule Book 44 pages Download here: http://www.apfn.org/pdf/citizen.pdf

Quality pocket-sized hardcopies of this booklet may be obtained from: Whitten Printers, (602) 258-6406, 1001 S 5th St., Phoenix, AZ 85004. Editorial work by Webster Adams, PAPER-HOUSE PUBLICATIONS, “Stronger than Steel,” 4th Revision

“Each time a person stands up for an ideal, or acts to improve the lot of others. . .they send forth a ripple of hope, and crossing each other from a million different centers of energy and daring, those ripples build a current that can sweep down the mightiest walls of oppression and resistance.” – Robert F. Kennedy

Philosophy Of Liberty (Flash) http://www.apfn.org/flash/PhilosophyOfLiberty-english.swf

March 29, 2000

Once a government is committed to the principle of silencing the voice of opposition, it has only one way to go, and that is down the path of increasingly repressive measures, until it becomes a source of terror to all its citizens and creates a country where everyone lives in fear. –Harry S. Truman

APFN Contents Page:Click Here

Message Board

APFN Home Page

“The American Dream” Fire ’em all!

Join the Blue Ribbon Online Free Speech Campaign!

American Patriot Friends Network a/k/a American Patriot Fax Network was founded Feb. 21, 1993. We started with faxing daily reports from the Weaver-Harris trials. Then on Feb. 28 1993, The BATF launched Operation Showtime – “The Siege on the Branch Davidians”. From this point, it’s been the Death of Vince Foster, the Oklahoma Bombing, TWA-800, The Train Deaths, Bio-War, on and on. We are not anti-government, we are anti-corrupt-government. A Patriot is one who loves God, Family and Country…..

We believe Patriots should rule America…. Please join in the fight with us in seeking TRUTH, JUSTICE AND FREEDOM FOR ALL AMERICANS….

Join our e-mail list and build your own e-mail/Fax networking contacts.

Without Justice, there is JUST_US

EXCELLENT!! Download & WATCH THIS! (Flash Player) http://www.apfn.org/apfn/pentagon121.swf

The Attack on America 9/11 http://www.apfn.org/apfn/WTC.htm

9/11 Philip Marshall and His Two Children Silenced for Telling the Truth http://www.apfn.org/apfn/bamboozle.htm

OBAMA’S DRONES WAR ON WOMEN AND CHILDREN http://www.apfn.org/apfn/drones.htm

SMART METERS and Agenda 21 http://www.apfn.org/apfn/smartmeters.htm

TWO SUPREME COURT DECISIONS THE ANTI-GUNNERS DON’T WANT YOU TO SEE http://www.apfn.org/apfn/Gun-law.htm

APFN Pogo Radio Your Way http://www.apfn.net/pogo.htm

APFN iPod Download Page http://www.apfn.org/iPod/index.htm

America Media Columnists (500) Listed By Names

“I believe in the United States of America as a Government of the people by the people, for the people, whose just powers are derived from the consent of the governed; a democracy in a Republic; a sovereign Nation of many sovereign States; a perfect Union, one and inseparable; established upon those principles of freedom, equality, justice, and humanity for which American patriots sacrificed their lives and fortunes.

I therefore believe it is my duty to my Country to love it; to support its Constitution; to obey its laws; to respect its flag, and to defend it against all enemies.”

http://www.icss.com/usflag/american.creed.html

Freedom is ANYTHING BUT FREE!

“…. a network of net-worker’s….”

Dedication:

I was born an American. I live as an American; I shall die an American; and I intend to perform the duties incumbent upon me in that character to the end of my career. I mean to do this with absolute disregard to personal consequences. What are the personal consequences?

What is the individual man with all the good or evil that may betide him, in comparison with the good and evil which may befall a great country, and in the midst of great transactions which concern that country’s fate? Let the consequences be what they will, I am careless. No man can suffer too much, and no man can fall too soon, if he suffer or if he fall in the defense of the liberties and Constitution of his country.

…Daniel Webster

APFN IS NOT A BUSINESS APFN IS SUPPORTED BY “FREE WILL” GIFT/DONATIONS Without Justice, there is JUST_US! http://www.apfn.org

If you would like to donate a contribution to APFN: Mail to: 7558 West Thunderbird Rd. Ste. 1-#115 Peoria, Arizona 85381

Message Board

APFN Sitemap

APFN Home Page

APFN Contents Page


E-Mail apfn@apfn.org


American Patriot Friends Network APFN

Posted in Government Oppression | Comments Off on American Patriot Friends Network APFN