Tag Archives: speech

Golden calf – Wikipedia

Posted: January 15, 2017 at 1:09 pm

According to the Bible, the golden calf (Hebrew: ʿēgel hazzāhāv) was an icon (a cult image) made by the Israelites during Moses’ absence, when he went up to Mount Sinai. In Hebrew, the incident is known as ḥēṭ’ hāʿēgel, or “The Sin of the Calf”. It is first mentioned in Exodus 32:4.

Bull worship was common in many cultures. In Egypt, whence according to the Exodus narrative the Hebrews had recently come, the Apis Bull was a comparable object of worship, which some believe the Hebrews were reviving in the wilderness;[1] alternatively, some believe the God of Israel was associated with or pictured as a calf/bull deity through the process of religious assimilation and syncretism. Among the Egyptians’ and Hebrews’ neighbors in the ancient Near East and in the Aegean, the Aurochs, the wild bull, was widely worshipped, often as the Lunar Bull and as the creature of El.

When Moses went up into Biblical Mount Sinai to receive the Ten Commandments (Exodus 24:12-18), he left the Israelites for forty days and forty nights. The Israelites feared that he would not return and demanded that Aaron make them “gods” to go before them (Exodus 32:1). Aaron gathered up the Israelites’ golden earrings and ornaments, constructed a “molten calf” and they declared: “These [be] thy gods, O Israel, which brought thee up out of the land of Egypt.” (Exodus 32:4)

Aaron built an altar before the calf and proclaimed the next day to be a feast to the LORD. So they rose up early the next day and “offered burnt-offerings, and brought peace-offerings; and the people sat down to eat and to drink, and rose up to play.” (Exodus 32:6) God told Moses what the Israelites were up to back in camp: that they had turned aside quickly out of the way which God had commanded them, and that He was going to destroy them and start a new people from Moses. Moses besought God and pleaded that they be spared (Exodus 32:11-14), and God “repented of the evil which He said He would do unto His people.”

Moses went down from the mountain, but upon seeing the calf, he became angry and threw down the two Tablets of Stone, breaking them. Moses burnt the golden calf in a fire, ground it to powder, scattered it on water, and forced the Israelites to drink it. When Moses asked him, Aaron admitted collecting the gold, and throwing it into the fire, and said it came out as a calf (Exodus 32:21-24).

The Bible records that the tribe of Levi did not worship the golden calf. When Moses stood in the gate of the camp, and said: ‘Whosoever is on the LORD’s side, let him come unto me.’ And all the sons of Levi gathered themselves together unto him. And he said unto them: ‘Thus saith the LORD, the God of Israel: Put ye every man his sword upon his thigh, and go to and fro from gate to gate throughout the camp, and slay every man his brother, and every man his companion, and every man his neighbour.’ And the sons of Levi did according to the word of Moses; and there fell of the people that day about three thousand men. (Exodus 32:26-28)

The golden calf is mentioned in Nehemiah 9:16–21.

“But they, our ancestors, became arrogant and stiff-necked, and they did not obey your commands. They refused to listen and failed to remember the miracles you performed among them. They became stiff-necked and in their rebellion appointed a leader in order to return to their slavery. But you are a forgiving God, gracious and compassionate, slow to anger and abounding in love. Therefore you did not desert them, even when they cast for themselves an image of a calf and said, ‘This is your god, who brought you up out of Egypt,’ or when they committed awful blasphemies. “Because of your great compassion you did not abandon them in the wilderness. By day the pillar of cloud did not fail to guide them on their path, nor the pillar of fire by night to shine on the way they were to take. You gave your good Spirit to instruct them. You did not withhold your manna from their mouths, and you gave them water for their thirst. For forty years you sustained them in the wilderness; they lacked nothing, their clothes did not wear out nor did their feet become swollen.”

The language suggests that there are some inconsistencies in the other accounts of the Israelites and their use of the calf. As the versions in Exodus and 1 Kings were written by Deuteronomistic historians based in the southern kingdom of Judah, there is a proclivity to expose the Israelites as unfaithful. The inconsistency is primarily located in Exodus 32:4, where “gods” is plural despite the construction of a single calf. When Ezra retells the story, he uses the singular, capitalized God.[2]

Conversely, a more biblically conservative view offers a tenable explanation for the discrepancy between “gods” in Exodus 32 and “God” in Nehemiah 9:18. In both instances, the Hebrew ‘elohim’ is used. Since ancient Hebrew did not distinguish the majestic-plural ‘elohim’ (God) from the plural ‘elohim’ (gods), Biblical translations are determined either by (a) context or (b) the associated verb(s). In the original account in Exodus 32, the associated verb is in the third person plural. In Nehemiah 9, the verb connected to ‘elohim’ is singular. For the JEDP (i.e. Deuteronomistic) theorist, this inconsistency is confirmatory, since the theory maintains a roughly equivalent date for the composition of Exodus and Nehemiah. More conservative scholarship would argue that these two texts were composed about 1000 years apart: Exodus (by Moses) circa 1500 BCE, and Nehemiah circa 500 BCE. The biblically conservative framework would therefore account for the verbal inconsistency from Exodus to Nehemiah as a philological evolution over the approximate millennium separating the two books.

According to 1 Kings 12:26–30, after Jeroboam establishes the northern Kingdom of Israel, he contemplates the sacrificial practices of the Israelites.

Jeroboam thought to himself, “The kingdom will now likely revert to the house of David. If these people go up to offer sacrifices at the temple of the LORD in Jerusalem, they will again give their allegiance to their lord, Rehoboam king of Judah. They will kill me and return to King Rehoboam.” After seeking advice, the king made two golden calves. He said to the people, “It is too much for you to go up to Jerusalem. Here are your gods, Israel, who brought you up out of Egypt.” One he set up in Bethel, and the other in Dan. And this thing became a sin; the people came to worship the one at Bethel and went as far as Dan to worship the other.

His concern was that the tendency to offer sacrifices in Jerusalem, which is in the southern Kingdom of Judah, would lead to a return to King Rehoboam. He makes two golden calves and places them in Bethel and Dan, erecting them as what some interpretations take to be substitutes for the cherubim built by King Solomon in Jerusalem.[3]

Richard Elliott Friedman says “at a minimum we can say that the writer of the golden calf account in Exodus seems to have taken the words that were traditionally ascribed to Jeroboam and placed them in the mouths of the people.” Friedman believes that the story was turned into a polemic, exaggerating the throne platform decoration into idolatry, by a family of priests sidelined by Jeroboam.[4]

The declarations of Aaron (Exodus 32:4) and Jeroboam (1 Kings 12:28) are almost identical: each presents the calf or calves as the gods “who brought you up out of the land of Egypt.”

After making the golden calf or golden calves, both Aaron and Jeroboam celebrate festivals. Aaron builds an altar and Jeroboam ascends an altar (Exod 32:5–6; 1 Kings 12:32–33).[5]

The incident of the worship of the Golden Calf is narrated in the Qur’an and other Islamic literature. The Qur’an narrates that after they refused to enter the promised land, God decreed that as punishment the Israelites would wander for forty years. Moses continued to lead the Israelites to Mount Sinai for Divine guidance. According to Islamic literature, God ordered Moses to fast for thirty days, and upon near completion of the thirty days, Moses ate a scented plant to improve the odour of his mouth. God then commanded Moses to fast for ten more days before receiving the guidance for the Israelites. When Moses completed the fasts, he approached God for guidance. During this time, Moses had instructed the Israelites that Aaron (Harun) was to lead them. The Israelites grew restless, since Moses had not returned to them, and after thirty days a man the Qur’an names Samiri raised doubts among them. Samiri claimed that Moses had forsaken the Israelites and ordered his followers among the Israelites to light a fire and bring him all the jewelry and gold ornaments they had.[6] Samiri fashioned the gold, along with the dust on which the angel Gabriel had trodden, into a golden calf, which he proclaimed to be the God of Moses and the God who had guided them out of Egypt.[7]

There is a sharp contrast between the Qur’anic and the biblical accounts of the prophet Aaron’s actions. The Qur’an mentions that Aaron attempted to guide and warn the people against worshipping the Golden Calf. However, the Israelites refused to stop until Moses had returned.[8] The righteous separated themselves from the pagans. God informed Moses that He had tried the Israelites in his absence and that they had failed by worshipping the Golden Calf.

Returning to the Israelites in great anger, Moses asked Aaron why he had not stopped the Israelites when he had seen them worshipping the Golden Calf. The Qur’an reports that Aaron stated that he had not acted for fear that Moses would blame him for causing divisions among the Israelites. Moses realized his helplessness in the situation, and both prayed to God for forgiveness. Moses then questioned Samiri about the creation of the Golden Calf; Samiri justified his actions by stating that he had thrown the dust of the ground upon which Gabriel had trodden into the fire because his soul had suggested it to him.[6] Moses informed him that he would be banished and that they would burn the Golden Calf and spread its dust into the sea. Moses ordered seventy delegates to repent to God and pray for forgiveness.[9] The delegates traveled alongside Moses to Mount Sinai, where they witnessed the speech between him and God but refused to believe until they had seen God with their own eyes. As punishment, God struck the delegates with lightning and killed them with a violent earthquake.[10] Moses prayed to God for their forgiveness; God forgave and resurrected them, and they continued on their journey.

In the Islamic view, the Calf-worshipers’ sin had been shirk, the sin of idolatry or polytheism. Shirk is the deification or worship of anyone or anything other than the singular God (Allah), or more literally the establishment of “partners” placed beside God, a most serious and unforgivable sin; that the Calf-worshipers were ultimately forgiven is a mark of special forbearance by Allah.

Despite a seemingly simplistic façade, the golden calf narrative is complex. According to Michael Coogan, it seems that the golden calf was not an idol for another god, and thus not a false god.[11] He cites Exodus 32:4-5 as evidence: “He [Aaron] took the gold from them, formed it in a mold, and cast an image of a calf; and they said, ‘These are your gods, O Israel, who brought you up out of the land of Egypt!’ When Aaron saw this, he built an altar before it; and Aaron made proclamation and said, ‘Tomorrow shall be a festival to the Lord.’” Importantly, there is a single calf in this narrative, though the people refer to it as representative of the “gods.” While a reference to a singular god does not necessarily imply Yahweh worship, neither does it rule out the possibility that it is Yahweh the people are worshiping, as a reference to a plurality of “gods” would. Additionally, the festival “to the Lord” in verse 5 is sometimes translated as “to Yahweh”.[11] It should also be noted that “in the chronology of the narrative of the Ten Commandments” the commandment against the creation of graven images had not yet been given to the people when they pressed upon Aaron to help them make the calf, and that such behavior was not yet explicitly outlawed.[11]

Another understanding of the golden calf narrative is that the calf was meant to be the pedestal of Yahweh. In Near Eastern art, gods were often depicted standing on an animal, rather than seated on a throne.[11] This reading suggests that the golden calf was merely an alternative to the ark of the covenant or the cherubim upon which Yahweh was enthroned.[11]

The reason for this complication may be understood as (1) a criticism of Aaron, as the founder of one priestly house that rivaled the priestly house of Moses, and/or (2) “an attack on the northern kingdom of Israel.”[11] The second explanation relies on the “sin of Jeroboam,” who was the first king of the northern kingdom, as the cause of the northern kingdom’s fall to Assyria in 722 BCE.[11] Jeroboam’s “sin” was creating two calves of gold, and sending one to Bethel as a worship site in the south of the kingdom, and the other to Dan as a worship site in the north, so that the people of the northern kingdom would not have to continue to go to Jerusalem to worship (see 1 Kings 12.26–30). According to Coogan, this episode is part of the Deuteronomistic history, written in the southern kingdom of Judah after the fall of the northern kingdom, which was biased against the northern kingdom.[11] Coogan maintains that Jeroboam was merely presenting an alternative to the cherubim of the Temple in Jerusalem, and that the calves did not indicate non-Yahwistic worship.[11]

The documentary hypothesis can be used to further understand the layers of this narrative: it is plausible that the earliest story of the golden calf was preserved by E (the Israel source) and originated in the northern kingdom. When E and J (the Judah source) were combined after the fall of the northern kingdom, “the narrative was reworked to portray the northern kingdom in a negative light,” and the worship of the calf was depicted as “polytheism, with the suggestion of a sexual orgy” (see Exodus 32.6). When compiling the narratives, P (a later Priestly source from Jerusalem) may have minimized Aaron’s guilt in the matter, but preserved the negativity associated with the calf.[11]

Alternatively, it could be said that there is no golden calf story in the J source, and if Friedman is correct that the Jeroboam story was the original, then it is unlikely that the Golden Calf events as described in Exodus occurred at all. Friedman states that the smashing of the tablets of the Ten Commandments by Moses, when he beheld the worship of the golden calf, is really an attempt to cast doubt on the validity of Judah’s central shrine, the Ark of the Covenant. “The author of E, in fashioning the golden calf story, attacked both the Israelite and Judean religious establishments.”[12]

As to the likelihood that these events ever took place: there are two versions of the Ten Commandments story, in E (Exodus 20) and J (Exodus 34); this gives the tradition some antiquity, and there may be some original events serving as a basis for the stories. The Golden Calf story is only in the E version, and a later editor added an explanation that God made a second pair of tablets to give continuity to the J story.[13] The actual Ten Commandments as given in Exodus 20 were also inserted by the redactor who combined the various sources.[14]

Archaeologists Israel Finkelstein and Neil Asher Silberman say that while archaeology has found traces left by small bands of hunter-gatherers in the Sinai, there is no evidence at all for the large body of people described in the Exodus story: “The conclusion that Exodus did not happen at the time and in the manner described in the Bible seems irrefutable… repeated excavations and surveys throughout the entire area have not provided even the slightest evidence.”[15]

A metaphoric interpretation emphasizes the “gold” part of “golden calf” to criticize the pursuit of wealth.

This usage can be found in Spanish[16] where Mammon, the Gospel personification of idolatry of wealth, is not so current.



Top Five Zeitgeist: The Movie Myths! | Peter Joseph

Posted: January 10, 2017 at 2:59 am

Top Five Zeitgeist: The Movie Myths!

1) The Zeitgeist Movement is all about support of Zeitgeist: The Movie!

Actually, in my experience over the past six years, most within The Zeitgeist Movement (TZM) do not subscribe to or agree with this film in general, although mixed reactions are most common. Zeitgeist: The Movie was created years before TZM was formed. TZM was created originally to support Jacque Fresco’s Venus Project (TVP). After TVP and TZM split three years later, TZM became a self-propelling institution with its own body of work. The text The Zeitgeist Movement Defined is the core source of Movement interests and clearly expresses what TZM is about.

As of 2015, any ongoing association of TZM with Zeitgeist: The Movie is often perpetuated by those with merely malicious intent. As the rest of this list will show, Zeitgeist: The Movie has been a target of extreme attack and bigoted reactions since its inception. Having been seen by literally hundreds of millions of people, it is no surprise that so many in vehement disagreement rise to the top. I wish I had counted the number of death threats and the amount of cyberstalking I have personally endured. I have spent upwards of $20,000 in legal fees fighting constant defamation by those offended by that film.

As an aside, many have suggested that a simple name change (removing “Zeitgeist”) would have solved the problem. Yet if a name change alone is that persuasive, isn’t that actually indicative of a deep lack of critical thought, where a mere superficial title changes people’s sense of association? I find this troubling if so. But regardless, the genie cannot go back in the bottle. Love it or hate it, Zeitgeist: The Movie isn’t going anywhere, and its content and implications eight years later seem only to get stronger and more validated. According to my online distributor, it is one of the most popular documentaries on Netflix, now in many languages/regions there.

2) It’s all been debunked!

The term “debunked” has become a mantra of sorts for the anti-TZM crowd. You also see this kind of overly zealous absolutism in other communities, such as the atheist community. As an atheist myself, I have learned that compassion is much more powerful than ridicule, and if the goal of any communication is to change minds, taking a condescending and absolute approach does nothing but inflate the initiator’s ego; it does not help educate others.

In that vein, many interpreted the first section of Zeitgeist: The Movie as an attack on religion. I would say it provides a contrary view of religion’s history, and it does so in a non-derisive way. It is very academic in its presentation, and to call it an attack is without merit.

That noted, Zeitgeist: The Movie was an art piece first and foremost, and a great deal of liberty was taken in its expression. In the very first edition, I had a section with John F. Kennedy talking about the grand conspiracy of Communism and overlaid it onto his assassination footage. I knew what I was doing and did so because it was an amazing artistic effect. It wasn’t until the film was grossly misinterpreted in its mixed-genre style and artistic license that I later went back and made editorial changes to conform it to a more documentary form.

I was sad to have to do this, in fact, but it seems it was too advanced a piece for common culture, and people were not ready to be critical of such liberties or to understand the context. Zeitgeist: The Movie was the ultimate expression of demanding critical thought. It wasn’t made to declare; it was made to challenge. The same goes for the long-held cry of “manipulative filmmaking,” such as when footage of the Madrid subway bombing was used to introduce a section on the 7/7 London bombings. How dare I show a different explosion!

In 2010, I cleaned it up to conform to a more traditional documentary form and produced a free 220-page booklet to support the literally hundreds of claims made in the work. To date, no one has addressed this text. I would also add that while the points made in the film, from the origins of religion, to the events of September 11th, to the history of war for profit and social manipulation by financial interests, are subject to interpretation and could perhaps be wrong, no single opposing claim or group of contradictions debunks the whole film. As the filmmaker, I will state that even I am not sure about some of the claims as far as what the absolute truth is. But again, that isn’t the purpose of this work.

3) There are no sources!

I have seen this claim posted in reviews constantly. Zeitgeist: The Movie is likely the most sourced film in documentary history. I know of no other work that has painstakingly shown where the content came from. Again, one can argue about the truth of any given idea, but to say it is made up is beyond absurd. Companion Source Guide : http://www.zeitgeistmovie.com/Zeitgeist,%20The%20Movie-%20Companion%20Guide%20PDF.pdf

4) It’s anti-Semitic!

This one really took me by surprise when I started hearing about it, especially since I end the film with one of the most heartwarming human-unity quotes of all time, by Carl Sagan. It appears to have started with a woman named Michelle Goldberg. She essentially stated that my use of a 1941 anti-war speech by Charles A. Lindbergh implied this, as Lindbergh was supposedly anti-Semitic.

In the opening section of part 3 of the film, she claims Charles A. Lindbergh was talking about the Jews when describing warring interests trying to bring America into WWII. This is just about as wrong and irresponsible as it comes. Sadly, this theme has carried forward through history as the echo of pro-war, pro-establishment media propaganda redefines reality. Long story short, Charles A. Lindbergh was a famous American aviator, author, inventor, explorer, and social activist. He was the son of Congressman Charles Lindbergh Sr., who had been extremely outspoken against the banking system a generation prior, writing texts on the “Money Trust,” referring to the financial system and its power. (He too was often called anti-Semitic, with no validation, as a means of personal attack.) Charles A. Lindbergh deeply opposed US involvement in WWII. He was an isolationist. In this crusade, he was attacked as anti-Semitic in order to pollute his message. (Sound familiar?) It’s that simple. To his discredit, his speaking skills were poor and he often spoke primitively about groups. He held some bad-science views that were very common at the time, and it’s easy to look back on such uninformed issues and find false relationships. Yet his non-racist stance is very clear to those paying attention.

For example, he once stated: “I am not attacking either the Jewish or the British people. Both races, I admire. But I am saying that the leaders of both the British and the Jewish races, for reasons which are as understandable from their viewpoint as they are inadvisable from ours, for reasons which are not American, wish to involve us in the war. We cannot blame them for looking out for what they believe to be their own interests, but we also must look out for ours. We cannot allow the natural passions and prejudices of other peoples to lead our country to destruction.” This was a political statement, not a racist one, but the press at the time ran that it was anti-Semitic, which, again, is a good ploy if you want people to distrust someone. We see this technique being used today, constantly. Here are the last lines of the speech used in Zeitgeist: The Movie (the lines that were called anti-Semitic):

“Our theaters soon became filled with plays portraying the glory of war. Newsreels lost all semblance of objectivity. Newspapers and magazines began to lose advertising if they carried anti-war articles. A smear campaign was instituted against individuals who opposed intervention. The terms ‘fifth columnist,’ ‘traitor,’ ‘Nazi,’ ‘anti-Semitic’ were thrown ceaselessly at any one who dared to suggest that it was not to the best interests of the United States to enter the war. Men lost their jobs if they were frankly anti-war. Many others dared no longer speak.”

Later in the speech he then states: No person with a sense of the dignity of mankind can condone the persecution of the Jewish race in Germany.

Does this sound like a racist to you? In a book written by his wife, she states: “His prewar isolationist speeches were given in all sincerity for what he thought was the good of the country and the world… He was accused of being anti-Semitic, but in the 45 years I lived with him I never heard him make a remark against the Jews, not a crack or joke, and neither did any of our children.” So what we have is a victim of the media culture, condemned through history in the hindsight of the horrors and persecutions surrounding WWII. Lindbergh might not have been the smartest or most strategic in his manner of activism and communication, but there is no evidence he was a racist.

5) It’s an anti-New World Order Conspiracy Film!

Proponents of “New World Order” talk (which long predates Zeitgeist: The Movie) have always agitated me. I have never supported this bizarre and esoteric body of assumptions and, to this day, can honestly say I have no idea how the current ideas even came about, given the origin of the original term. “New World Order” is a term put forward by H. G. Wells in his book of the same title. In it, he speaks about the world unifying as one for the better. Since that time, however, the term has been skyrocketed into bizarro land. The only times I have ever sympathized with anyone who holds this pop-culture belief were when I tried to get behind it and talk about the root causes of human behavior and power abuse. And yet even the current Wikipedia entry on Zeitgeist: The Movie says it is about “New World Order forces.” But then again, it’s Wikipedia, the encyclopedia that lets random opinion and select news sources serve as historical fact.

Anyway, while the very original version of the film did talk about global government run by corporate power as an Orwellian, 1984-type assumption about the future, this was artistically presented and deduced as a result of global financial power and the tendency to constantly concentrate that power. I later removed this section entirely (in 2010), as I was disgusted by the constant misinterpretations.

Likewise, the notion of a “conspiracy film” is equally misguided. This is simply derision by categorical association, no different from how the term “communist” was used to force people to shy away from any information or ideas that ran against the status quo during the McCarthy era of the 1950s.

Zeitgeist: The Movie takes three subjects and bridges them within the context of social myth. This context is then used to show how people become biased and can be manipulated on the basis of those dominant, shared (bogus) beliefs (hence the term “zeitgeist” itself).

In the context of the real world, power abuse is obvious, since the nature of our economy supports massive class division and the movement of power and money to a small group. This isn’t conspiracy; it is a system reality. We live in a war system, and massive gaming for personal/group self-interest is happening at every moment.

That’s enough for now.

~Peter Joseph, Feb 22nd 2015


Cyprus Space Exploration Organisation (CSEO)

Posted: November 21, 2016 at 11:11 am

Posted: 16 May 2015, Nicosia. Cyprus’ project “Arachnobeea” is the winner of the International Space Apps Challenge!

A resounding success for the Cypriot team, and recognition by NASA!

NASA announced the winners of the International Space Apps Challenge today, and “Arachnobeea”, the runner-up team of the Space Apps Challenge Limassol 2015, was the global winner in the “Best Mission Concept” category!

Arachnobeea was selected by a NASA judging committee, among over 950 other projects from 135 locations worldwide, as one of the 6 global winners!

The team did an incredible job designing a highly innovative quad-copter drone intended for use inside space vehicles, and they managed to excite everyone with their presentation at the local competition in Limassol in early April. The NASA experts evidently recognized the uniqueness of the team’s design and awarded the Cypriot team the international prize for the “Best Mission Concept” of the 2015 International Space Apps Challenge.

Team “Arachnobeea” truly makes us proud with their success! The announcement of the winners by NASA

During the official opening gala of the CSEO Space Week 2015, at the Russian Cultural Centre, Cosmonauts on-board the ISS sent greetings to the guests of the opening ceremony and to the island of Cyprus.

The moment the Space Week was declared open

From left: Mr Rogalev – Director of Russian Cultural Centre, Mr Thrassou – President of Cypro-Russian Friendship Association, Mr Danos – President of CSEO, Cosmonaut Alexandr Volkov, Russian Ambassador Mr Osadchiy, Honorary Russian Consul Mr Prodromou

More on CSEO Space Week 2015:

Our aim is to promote space exploration with various events and activities.

In cooperation with the Municipality of Nicosia, and with the support of the Russian Cultural Centre, ROSCOSMOS, the Confucius Institute, the Cypro-Russian Friendship Association, the China Society of Astronautics, the University of Cyprus, and the Ministry of Communications and Works of the Republic of Cyprus, we are organising “CSEO Space Week 2015” in the capital of Cyprus, Nicosia, from the 20th to the 26th of April 2015, promoting space exploration with various events and activities.

Part of the programme for the “CSEO Space Week 2015” includes:

Special opening highlight event – Monday 21st July, 19:15 – 21:00, City Plaza, Nicosia

We are connecting live with the USA, for a special live talk with famous author and science journalist Andrew Chaikin, organised just for Cyprus, all thanks to the kind effort and assistance of the American Embassy.

Andrew Chaikin is the author of the book “A Man on the Moon”, a detailed description of the Apollo missions to the Moon, which was turned into the world famous TV production “From the Earth to the Moon”, a 12-part HBO miniseries. Event Details

Our team MarsSense was short-listed among the top four finalists for the “Best Student Paper” Award at SpaceOps 2014, organised by NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, last May.

They presented at the premier event on space operations, organised by NASA, to leading members of space agencies and the space community. Their research received very positive feedback from respected leaders of the space community and was ultimately shortlisted among the top four student research papers of the last two years at SpaceOps 2014!

Congratulations to MarsSense!!!

During our mission to the USA, the Cyprus Space Exploration Organisation (CSEO) promoted collaboration with many international organisations and national space agencies, paving the way to a number of exciting agreements.

Press Conference, at the Ministry of Communications and Works, Friday 20th June 2014:

CSEO’s President explained that the involvement of Cyprus in the space industry, together with full membership of ESA, could bring significant benefits to the island’s economy.

CSEO extended a hand of cooperation to the Cypriot government.

The Minister of Communications and Works, Mr Marios Demetriades, as part of his speech said: (translation) “I would like to publicly congratulate the Cypriot delegation to the USA, and specifically the finalist team, as well as the Cyprus Space Exploration Organisation, for its support and participation in the entire effort of the mission”.

“The Ministry of Communications and Works, and I personally, will support every effort to ensure that this breakthrough has continuity and perspective. The geographical position of Cyprus and its status as an EU member state create unprecedented opportunities that we must not allow to be lost”. The press conference was covered by all the main local TV channels and other media.

CSEO’s promotional video as first seen at the SpaceWeek Gala on 10th of April 2014.

Our aim is to promote space exploration through various events and activities, leading up to NASA Space Apps and the visit by cosmonaut Aleksandr Volkov, who holds the record for the longest stay in space.

NASA designated CSEO’s Marios Isaakides to organise NASA Space Apps Nicosia 2014, held over the weekend of 12–13 April 2014.

Join in on the Fun!

Posted: January 15, 2014 “Launching Cyprus Into the Space Era – Event 2: Building the Future” 20th January 2014, 19:00 – ARTos Foundation, Nicosia

Political correctness – Wikipedia

Posted: at 11:08 am

The term political correctness (adjectivally: politically correct; commonly abbreviated to PC,[1] P.C., and p.c.) is used in modern usage to describe language, policies, or measures that are intended primarily not to offend or disadvantage any particular group of people in society. In the media, the term is generally used as a pejorative, implying that these policies are excessive.[2][3][4][5][6][7][8]

The term had only scattered usage before the early 1990s, usually as an ironic self-description, but entered more mainstream usage in the United States when it was the subject of a series of articles in The New York Times.[9][10][11][12][13][14] The phrase was widely used in the debate about Allan Bloom’s 1987 book The Closing of the American Mind,[4][6][15][16] and gained further currency in response to Roger Kimball’s Tenured Radicals (1990),[4][6][17][18] and conservative author Dinesh D’Souza’s 1991 book Illiberal Education, in which he condemned what he saw as liberal efforts to advance self-victimization, multiculturalism through language, affirmative action, and changes to the content of school and university curricula.[4][5][17][19]

Commentators on the left have said that conservatives pushed the term in order to divert attention from more substantive matters of discrimination and as part of a broader culture war against liberalism.[17][20][21] They also argue that conservatives have their own forms of political correctness, which are generally ignored by conservative commentators.[22][23][24]

The term “politically correct” was used infrequently until the latter part of the 20th century. This earlier use did not communicate the social disapproval usually implied in more recent usage. In 1793, the term “politically correct” appeared in a U.S. Supreme Court judgment of a political lawsuit.[25] The term also had occasional use in other English-speaking countries.[26][27] William Safire states that the first recorded use of the term in the typical modern sense is by Toni Cade Bambara in the 1970 anthology The Black Woman.[28] The term probably entered use in the United Kingdom around 1975.[8]

In the early-to-mid 20th century, the phrase “politically correct” was associated with the dogmatic application of Stalinist doctrine, debated between Communist Party members and American Socialists. This usage referred to the Communist party line, which provided “correct” positions on many political matters. According to American educator Herbert Kohl, writing about debates in New York in the late 1940s and early 1950s,

The term “politically correct” was used disparagingly, to refer to someone whose loyalty to the CP line overrode compassion, and led to bad politics. It was used by Socialists against Communists, and was meant to separate out Socialists who believed in egalitarian moral ideas from dogmatic Communists who would advocate and defend party positions regardless of their moral substance.

In March 1968, the French philosopher Michel Foucault is quoted as saying: “a political thought can be politically correct (‘politiquement correcte’) only if it is scientifically painstaking”, referring to leftist intellectuals attempting to make Marxism scientifically rigorous rather than relying on orthodoxy.[29]

In the 1970s, the American New Left began using the term “politically correct”.[30] In the 1970 anthology The Black Woman, Toni Cade Bambara said that “a man cannot be politically correct and a [male] chauvinist, too.” Thereafter, the term was often used as self-critical satire. Debra L. Shultz said that “throughout the 1970s and 1980s, the New Left, feminists, and progressives… used their term ‘politically correct’ ironically, as a guard against their own orthodoxy in social change efforts.”[4][30][31] In this sense, “PC” appears in the comic book Merton of the Movement, by Bobby London, and was followed by the term “ideologically sound” in the comic strips of Bart Dickon.[30][32] In her essay “Toward a Feminist Revolution” (1992), Ellen Willis said: “In the early eighties, when feminists used the term ‘political correctness’, it was used to refer sarcastically to the anti-pornography movement’s efforts to define a ‘feminist sexuality’.”[33]

Stuart Hall suggests one way in which the original use of the term may have developed into the modern one:

According to one version, political correctness actually began as an in-joke on the left: radical students on American campuses acting out an ironic replay of the Bad Old Days BS (Before the Sixties) when every revolutionary groupuscule had a party line about everything. They would address some glaring examples of sexist or racist behaviour by their fellow students in imitation of the tone of voice of the Red Guards or Cultural Revolution Commissar: “Not very ‘politically correct’, Comrade!”[34]

Critics, including Camille Paglia[35] and James Atlas,[36][37] have pointed to Allan Bloom’s 1987 book The Closing of the American Mind[15] as the likely beginning of the modern debate about what was soon named “political correctness” in American higher education.[4][6][16][38] Jeffrey J. Williams, professor of English literary and cultural studies at CMU, wrote that the “assault on…political correctness that simmered through the Reagan years, gained bestsellerdom with Bloom’s Closing of the American Mind”.[39] According to Z.F. Gamson, “Bloom’s Closing of the American Mind…attacked the faculty for ‘political correctness’.”[40] Tony Platt, professor of social work at CSU, goes further and says the “campaign against ‘political correctness’” was launched by the book in 1987.[41]

A word search of six “regionally representative Canadian metropolitan newspapers” found only 153 articles in which the terms “politically correct” or “political correctness” appeared between 1 January 1987 and 27 October 1990.[12]

An October 1990 New York Times article by Richard Bernstein is credited with popularizing the term.[11][13][14][42][43] At this time, the term was mainly being used within academia: “Across the country the term p.c., as it is commonly abbreviated, is being heard more and more in debates over what should be taught at the universities”.[9] Nexis citations in “arcnews/curnews” reveal only seventy total citations in articles to “political correctness” for 1990; but one year later, Nexis records 1,532 citations, with a steady increase to more than 7,000 citations by 1994.[42][44] In May 1991 The New York Times had a follow-up article, according to which the term was increasingly being used in a wider public arena:

What has come to be called “political correctness,” a term that began to gain currency at the start of the academic year last fall, has spread in recent months and has become the focus of an angry national debate, mainly on campuses, but also in the larger arenas of American life.

The previously obscure far-left term became common currency in the lexicon of the conservative social and political challenges against progressive teaching methods and curriculum changes in the secondary schools and universities of the U.S.[5][45] Policies, behavior, and speech codes that the speaker or the writer regarded as being the imposition of a liberal orthodoxy, were described and criticized as “politically correct”.[17] In May 1991, at a commencement ceremony for a graduating class of the University of Michigan, then U.S. President George H.W. Bush used the term in his speech: “The notion of political correctness has ignited controversy across the land. And although the movement arises from the laudable desire to sweep away the debris of racism and sexism and hatred, it replaces old prejudice with new ones. It declares certain topics off-limits, certain expression off-limits, even certain gestures off-limits.”[46][47][48]

After 1991, its use as a pejorative phrase became widespread amongst conservatives in the US.[5] It became a key term encapsulating conservative concerns about the left in culture and political debate more broadly, as well as in academia. Two articles on the topic in late 1990 in Forbes and Newsweek both used the term “thought police” in their headlines, exemplifying the tone of the new usage, but it was Dinesh D’Souza’s Illiberal Education: The Politics of Race and Sex on Campus (1991) which “captured the press’s imagination.”[5] Similar critical terminology was used by D’Souza for a range of policies in academia around victimization, supporting multiculturalism through affirmative action, sanctions against anti-minority hate speech, and revising curricula (sometimes referred to as “canon busting”).[5][49] These trends were at least in part a response to multiculturalism and the rise of identity politics, with movements such as feminism, gay rights movements and ethnic minority movements. That response received funding from conservative foundations and think tanks such as the John M. Olin Foundation, which funded several books such as D’Souza’s.[4][17]

Herbert Kohl, in 1992, commented that a number of neoconservatives who promoted the use of the term “politically correct” in the early 1990s were former Communist Party members, and, as a result, familiar with the Marxist use of the phrase. He argued that in doing so, they intended “to insinuate that egalitarian democratic ideas are actually authoritarian, orthodox and Communist-influenced, when they oppose the right of people to be racist, sexist, and homophobic.”[3]

During the 1990s, conservative and right-wing politicians, think-tanks, and speakers adopted the phrase as a pejorative descriptor of their ideological enemies, especially in the context of the Culture Wars about language and the content of public-school curricula. Roger Kimball, in Tenured Radicals, endorsed Frederick Crews’s view that PC is best described as “Left Eclecticism”, a term defined by Kimball as “any of a wide variety of anti-establishment modes of thought from structuralism and poststructuralism, deconstruction, and Lacanian analysis to feminist, homosexual, black, and other patently political forms of criticism.”[18][39] Jan Narveson wrote that “that phrase was born to live between scare-quotes: it suggests that the operative considerations in the area so called are merely political, steamrolling the genuine reasons of principle for which we ought to be acting…”[2]

In the American Speech journal article “Cultural Sensitivity and Political Correctness: The Linguistic Problem of Naming” (1996), Edna Andrews said that the usage of culturally inclusive and gender-neutral language is based upon the concept that “language represents thought, and may even control thought”.[50] Andrews’ proposition is conceptually derived from the Sapir–Whorf hypothesis, which proposes that the grammatical categories of a language shape the ideas, thoughts, and actions of the speaker. Moreover, Andrews said that politically moderate conceptions of the language–thought relationship suffice to support the “reasonable deduction … [of] cultural change via linguistic change” reported in the Sex Roles journal article “Development and Validation of an Instrument to Measure Attitudes Toward Sexist/Nonsexist Language” (2000), by Janet B. Parks and Mary Ann Robinson.

Liberal commentators have argued that the conservatives and reactionaries who used the term did so in an effort to divert political discussion away from the substantive matters of resolving societal discrimination such as racial, social class, gender, and legal inequality against people whom the right-wing do not consider part of the social mainstream.[4][20][51][52][53][54][55] Commenting in 2001, one such British journalist,[56][57] Polly Toynbee, said “the phrase is an empty, right-wing smear, designed only to elevate its user”, and, in 2010, “…the phrase ‘political correctness’ was born as a coded cover for all who still want to say Paki, spastic, or queer…”[56][57][58][59] Another British journalist, Will Hutton,[60][61][62][63] wrote in 2001:

Political correctness is one of the brilliant tools that the American Right developed in the mid-1980s, as part of its demolition of American liberalism…. What the sharpest thinkers on the American Right saw quickly was that by declaring war on the cultural manifestations of liberalism by levelling the charge of “political correctness” against its exponents they could discredit the whole political project.

Glenn Loury described the situation in 1994 as such:

To address the subject of “political correctness,” when power and authority within the academic community is being contested by parties on either side of that issue, is to invite scrutiny of one’s arguments by would-be “friends” and “enemies.” Combatants from the left and the right will try to assess whether a writer is “for them” or “against them.”

In the US, the term has been widely used in the intellectual media, but in Britain, usage has been confined mainly to the popular press.[65] Many such authors and popular-media figures, particularly on the right, have used the term to criticize what they see as bias in the media.[2][17] William McGowan argues that journalists get stories wrong or ignore stories worthy of coverage, because of what McGowan perceives to be their liberal ideologies and their fear of offending minority groups.[66] Robert Novak, in his essay “Political Correctness Has No Place in the Newsroom”, used the term to blame newspapers for adopting language use policies that he thinks tend to excessively avoid the appearance of bias. He argued that political correctness in language not only destroys meaning but also demeans the people who are meant to be protected.[67][68][69] Authors David Sloan and Emily Hoff claim that in the US, journalists shrug off concerns about political correctness in the newsroom, equating the political correctness criticisms with the old “liberal media bias” label.[70]

Writing in the Journal of Educational and Social Research, Jessica Pinta and Joy Yakubu caution against political incorrectness in the media and elsewhere: “…linguistic constructs influence our way of thinking negatively, peaceful coexistence is threatened and social stability is jeopardized.” As an example of “the effect of political incorrect use of language”, they cite some historical occurrences:

Conflicts were recorded in Northern Nigeria as a result of insensitive use of language. In Kaduna for instance violence broke out on the 16th November 2002 following an article credited to one Daniel Isioma which was published in This Day Newspaper, where the writer carelessly made a remark about the Prophet Mohammed and the beauty queens of the Miss World Beauty Pageant that was to be hosted in the Country that year (Terwase n.d). In this crisis, He reported that over 250 people were killed and churches destroyed. In the same vein, crisis erupted on 18th February 2006 in Borno because of a cartoon of the Prophet Mohammed in Iyllands-posten Newspaper (Terwase n.d). Here over 50 people were killed and 30 churches burnt.

Much of the modern debate on the term was sparked by conservative critiques of liberal bias in academia and education,[4] and conservatives have used it as a major line of attack since.[5] University of Pennsylvania professor Alan Charles Kors and lawyer Harvey A. Silverglate connect speech codes in US universities to philosopher Herbert Marcuse. They claim that speech codes create a “climate of repression”, arguing that they are based on “Marcusean logic”. The speech codes “mandate a redefined notion of ‘freedom’, based on the belief that the imposition of a moral agenda on a community is justified”, a view which “requires less emphasis on individual rights and more on assuring ‘historically oppressed’ persons the means of achieving equal rights.” They claim:

Our colleges and universities do not offer the protection of fair rules, equal justice, and consistent standards to the generation that finds itself on our campuses. They encourage students to bring charges of harassment against those whose opinions or expressions “offend” them. At almost every college and university, students deemed members of “historically oppressed groups” (above all, women, blacks, gays, and Hispanics) are informed during orientation that their campuses are teeming with illegal or intolerable violations of their “right” not to be offended. Judging from these warnings, there is a racial or sexual bigot, to borrow the mocking phrase of McCarthy’s critics, “under every bed.”[72]

Kors and Silverglate later established the Foundation for Individual Rights in Education (FIRE), which campaigns against infringement of rights of due process, rights of religion and speech, in particular “speech codes”.[73] Similarly, a common conservative criticism of higher education in the United States is that the political views of the faculty are much more liberal than the general population, and that this situation contributes to an atmosphere of political correctness.[74]

Writing in the Journal of Educational and Social Research, Jessica Pinta and Joy Yakubu argue that political correctness is useful in education:

Political correctness is a useful area of consideration when using English language particularly in second language situations. This is because both social and cultural contexts of language are taken into consideration. Zabotkina (1989) says political correctness is not only an essential, but an interesting area of study in English as a Second Language (ESL) or English as Foreign Language (EFL) classrooms. This is because it presents language as used in carrying out different speech acts which provoke reactions as it can persuade, incite, complain, condemn, and disapprove. Language is used for communication and creating social linkages, as such must be used communicatively. Using language communicatively involves the ability to use language at the grammatical level, sociolinguistic level, discourse and strategic levels (Canale & Swain 1980). Understanding language use at these levels center around the fact that differences exist among people, who must communicate with one another, and the differences could be religious, cultural, social, racial, gender or even ideological. Therefore, using language to suit the appropriate culture and context is of great significance.

Groups who oppose certain generally accepted scientific views about evolution, second-hand tobacco smoke, AIDS, global warming, race, and other politically contentious scientific matters have said that the PC liberal orthodoxy of academia is the reason why their perspectives on those matters have been rejected by the scientific community.[75] For example, in Lamarck’s Signature: How Retrogenes are Changing Darwin’s Natural Selection Paradigm (1999), Prof. Edward J. Steele said:

We now stand on the threshold of what could be an exciting new era of genetic research…. However, the ‘politically correct’ thought agendas of the neo-Darwinists of the 1990s are ideologically opposed to the idea of ‘Lamarckian Feedback’, just as the Church was opposed to the idea of evolution based on natural selection in the 1850s![76]

Zoologists Robert Pitman and Susan Chivers complained about popular and media negativity towards their discovery of two different types of killer whales, a “docile” type and a “wilder” type that ravages sperm whales by hunting in packs: “The forces of political correctness and media marketing seem bent on projecting an image of a more benign form (the Free Willy or Shamu model), and some people urge exclusive use of the name ‘orca’ for the species, instead of what is perceived as the more sinister label of “killer whale.”[77]

Stephen Morris, an economist and a game theorist, built a game model on the concept of political correctness, where “a speaker (advisor) communicates with the objective of conveying information, but the listener (decision maker) is initially unsure if the speaker is biased. There were three main insights from that model. First, in any informative equilibrium, certain statements will lower the reputation of the speaker, independent of whether they turn out to be true. Second, if reputational concerns are sufficiently important, no information is conveyed in equilibrium. Third, while instrumental reputational concerns might arise for many reasons, a sufficient reason is that speakers wish to be listened to.”[78][79][80][81] The Economist writes that “Mr Morris’s model suggests that the incentive to be politically correct fades as society’s population of racists, to take his example, falls.”[79] He credits Glenn Loury with the basis of his work.[78]
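The first and third insights can be illustrated with a toy Bayesian sketch (the priors and the 90%/40% statement probabilities below are invented for illustration; this is not Morris’s actual specification): the listener updates the probability that the speaker is biased after hearing a statement that biased speakers make more often than unbiased ones, so the statement damages reputation whether or not it later turns out to be true.

```python
# Toy sketch of the reputation mechanic (assumed parameterization, not Morris's model):
# a listener Bayes-updates the probability that a speaker is biased after hearing
# a statement that biased speakers make more often than unbiased speakers.

def posterior_biased(prior: float, p_if_biased: float, p_if_unbiased: float) -> float:
    """P(biased | statement) via Bayes' rule."""
    joint = prior * p_if_biased
    return joint / (joint + (1 - prior) * p_if_unbiased)

# Assume biased speakers make the statement 90% of the time, unbiased ones 40%.
for prior in (0.30, 0.10, 0.02):
    post = posterior_biased(prior, 0.9, 0.4)
    # The reputational penalty is the jump in the listener's belief in bias.
    print(f"prior={prior:.2f}  posterior={post:.3f}  penalty={post - prior:+.3f}")
```

The penalty column also echoes The Economist’s reading: as the prior share of biased speakers in the population falls, the reputational cost of the statement shrinks, and with it the incentive to avoid saying it.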

“Political correctness” is a label typically used for left-wing terms and actions, but not for equivalent attempts to mold language and behavior on the right. However, the term “right-wing political correctness” is sometimes applied by commentators drawing parallels: in 1995, one author used the term “conservative correctness” arguing, in relation to higher education, that “critics of political correctness show a curious blindness when it comes to examples of conservative correctness. Most often, the case is entirely ignored or censorship of the Left is justified as a positive virtue. […] A balanced perspective was lost, and everyone missed the fact that people on all sides were sometimes censored.”[22][82][83][84]

In 2003, the Dixie Chicks, a U.S. country music group, criticized then U.S. President George W. Bush for launching the war against Iraq.[85] They were criticized[86] and labeled “treasonous” by some U.S. right-wing commentators (including Ann Coulter and Bill O’Reilly).[23] Three years later, claiming that at the time “a virulent strain of right wing political correctness [had] all but shut down debate about the war in Iraq,” journalist Don Williams wrote that “[the ongoing] campaign against the Chicks represents political correctness run amok” and observed, “the ugliest form of political correctness occurs whenever there’s a war on.”[23]

In 2003, French fries and French toast were renamed “Freedom fries” and “Freedom toast”[87] in three U.S. House of Representatives cafeterias in response to France’s opposition to the proposed invasion of Iraq. This was described as “polluting the already confused concept of political correctness.”[88] In 2004, then Australian Labor leader Mark Latham described conservative calls for “civility” in politics as “the new political correctness.”[89]

In 2012, Paul Krugman wrote that “the big threat to our discourse is right-wing political correctness, which unlike the liberal version has lots of power and money behind it. And the goal is very much the kind of thing Orwell tried to convey with his notion of Newspeak: to make it impossible to talk, and possibly even think, about ideas that challenge the established order.”[24]

In a 2015 Harris poll it was found that “Republicans are almost twice as likely (42 percent vs. 23 percent) as Democrats to say that there are any books that should be banned completely…. Republicans were also more likely to say that some video games, movies and television programs should be banned.”[90][91]

In 2015 and 2016, leading up to the 2016 United States presidential election, Republican candidate Donald Trump used political correctness as a common target in his rhetoric.[90][92][93][94] Eric Mink, in a column for the Huffington Post, disputes Trump’s concept of “political correctness”:

political correctness is a controversial social force in a nation with a constitutional guarantee of freedom of expression, and it raises legitimate issues well worth discussing and debating.

But that’s not what Trump is doing. He’s not a rebel speaking unpopular truths to power. He’s not standing up for honest discussions of deeply contentious issues. He’s not out there defying rules handed down by elites to control what we say.

All Trump’s defying is common decency.[93]

Columnists Blatt and Young of The Federalist agree, with Blatt stating that “Trump is being rude, not politically incorrect” and that “PC is about preventing debate, not protecting rudeness”.[95][96]

In light of the sexual assault scandals and the criticism the victims faced from Trump supporters, Vox notes that, after railing so much against political correctness, they simply practice a different kind of repression and shaming: “If the pre-political correctness era was really so open, why is it only now that these women are speaking out?”[94]

Some right-wing commentators in the West argue that “political correctness” and multiculturalism are part of a conspiracy with the ultimate goal of undermining Judeo-Christian values. This theory, which holds that political correctness originates from the critical theory of the Frankfurt School as part of a conspiracy that its proponents call “Cultural Marxism”, is generally known as the Frankfurt School conspiracy theory by academics.[97][98] The theory originated with Michael Minnicino’s 1992 essay “New Dark Age: Frankfurt School and ‘Political Correctness'”, published in a Lyndon LaRouche movement journal.[99] In 2001, conservative commentator Patrick Buchanan wrote in The Death of the West that “political correctness is cultural Marxism”, and that “its trademark is intolerance”.[100]

In the United States, left forces of “political correctness” have been blamed for censorship, with Time citing campaigns against violence on network television as contributing to a “mainstream culture [which] has become cautious, sanitized, scared of its own shadow” because of “the watchful eye of the p.c. police”, even though in John Wilson’s view protests and advertiser boycotts targeting TV shows are generally organized by right-wing religious groups campaigning against violence, sex, and depictions of homosexuality on television.[101]

In the United Kingdom, some newspapers reported that a nursery school had altered the nursery rhyme “Baa Baa Black Sheep” to read “Baa Baa Rainbow Sheep” and had banned the original.[102] But it was later reported that in fact the Parents and Children Together (PACT) nursery had the children “turn the song into an action rhyme…. They sing happy, sad, bouncing, hopping, pink, blue, black and white sheep etc.”[103] This story was widely circulated and later extended to suggest that other language bans applied to the terms “black coffee” and “blackboard”.[104] Private Eye magazine reported that similar stories had been published in the British press since The Sun first ran them in 1986.[105]

Political correctness is often satirized, for example in The PC Manifesto (1992) by Saul Jerushalmy and Rens Zbignieuw X,[106] and Politically Correct Bedtime Stories (1994) by James Finn Garner, which presents fairy tales re-written from an exaggerated politically correct perspective. In 1994, the comedy film PCU took a look at political correctness on a college campus.

Other examples include the television program Politically Incorrect, George Carlin’s “Euphemisms” routine, and The Politically Correct Scrapbook.[107] The popularity of the South Park cartoon program led to the creation of the term “South Park Republican” by Andrew Sullivan, and later the book South Park Conservatives by Brian C. Anderson.[108] In Season 19, South Park constantly poked fun at the principle of political correctness, embodied in the show’s new character, PC Principal.[109][110][111]

The Colbert Report’s host Stephen Colbert often talked, satirically, about the “PC Police”.[112][113]

Graham Good, an academic at the University of British Columbia, wrote that the term was widely used in debates on university education in Canada. Writing about a 1995 report on the Political Science department at his university, he concluded: “‘Political correctness’ has become a popular phrase because it catches a certain kind of self-righteous and judgmental tone in some and a pervasive anxiety in others who, fearing that they may do something wrong, adjust their facial expressions, and pause in their speech to make sure they are not doing or saying anything inappropriate. The climate this has created on campuses is at least as bad in Canada as in the United States.”[114]

In Hong Kong, as the 1997 handover drew nearer, greater control over the press was exercised by both owners and the Chinese state. This had a direct impact on news coverage of relatively sensitive political issues. The Chinese authorities exerted pressure on individual newspapers to take pro-Beijing stances on controversial issues.[115][116][117] Tung Chee-hwa’s policy advisers and senior bureaucrats increasingly linked their actions and remarks to “political correctness.” Zhaojia Liu and Siu-kai Lau, writing in The First Tung Chee-hwa Administration: The First Five Years of the Hong Kong Special Administrative Region, said that “Hong Kong has traditionally been characterized as having freedom of speech and freedom of press, but that an unintended consequence of emphasizing political ‘correctness’ is to limit the space for such freedom of expression.”[118]

In New Zealand, controversies over PC surfaced during the 1990s regarding the social studies school curriculum.[119][120]

According to ThinkProgress, the “ongoing conversation about P.C. often relies on anecdotal evidence rather than data”.[121] In 2014, researchers at Cornell University reported that political correctness increased creativity in mixed-sex work teams,[122] saying “the effort to be P.C. can be justified not merely on moral grounds but also by the practical and potentially profitable consequences.”[121]

The term “politically correct”, with its suggestion of Stalinist orthodoxy, is spoken more with irony and disapproval than with reverence. But, across the country the term “P.C.”, as it is commonly abbreviated, is being heard more and more in debates over what should be taught at the universities.

Free Speech: Ten Principles for a Connected World …

Posted: October 27, 2016 at 11:59 am

"Admirably clear, . . . wise, up-to-the-minute and wide-ranging. . . . Free Speech encourages us to take a breath, look hard at the facts, and see how well-tried liberal principles can be applied and defended in daunting new circumstances."–Edmund Fawcett, New York Times Book Review

"A major piece of cultural analysis, sane, witty and urgently important. Timothy Garton Ash exemplifies the robust civility he recommends as an antidote to the pervasive unhappiness, nervousness and incoherence around freedom of speech, rightly seeing the basic challenge as how we create a cultural and moral climate in which proper public argument is possible and human dignity affirmed."–Rowan Williams, Master of Magdalene College, Cambridge, and former Archbishop of Canterbury

"Timothy Garton Ash aspires to articulate norms that should govern freedom of communication in a transnational world. His work is original and inspiring. Free Speech is an unfailingly eloquent and learned book that delights as well as instructs."–Robert Post, Dean and Sol & Lillian Goldman Professor of Law, Yale Law School

"A thorough and well-argued contribution to the quest for global free speech norms."–Kirkus Reviews

"There are still countless people risking their lives to defend free speech and struggling to make lonely voices heard in corners around the world where voices are hard to hear. Let us hope that this book will bring confidence and hope to this world-as-city. I believe it will exert great influence."–Murong Xuecun, author of Leave Me Alone: A Novel of Chengdu

"Garton Ash impresses with fact-filled, ideas-rich discussion that is routinely absorbing and illuminating."–Malcolm Forbes, The American Interest

"Particularly timely. . . . Garton Ash argues forcefully that . . . there is an increasing need for freer speech . . . A powerful, comprehensive book."–Economist

"Timothy Garton Ash rises to the task of directing us how to live civilly in our connected diversity."–John Lloyd, Financial Times

"Free Speech is a resource, a weapon, an encyclopedia of anecdote, example and exemplum that reaches toward battling restrictions on expression with mountains of data, new ideas, liberating ideas."–Diane Roberts, Prospect

"Illuminating and thought-provoking. . . . [Garton Ash's] larger project is not merely to defend freedom of expression, but to promote civil, dispassionate discourse, within and across cultures, even about the most divisive and emotive subjects."–Faramerz Dabhoiwala, The Guardian

"Timothy Garton Ash's new book Free Speech: Ten Principles for a Connected World is a rare thing: a worthwhile contribution to a debate without two developed sides. Ash does an excellent job laying out the theoretical and practical bases for the western liberal positions on free speech."–Malcolm Harris, New Republic

"An informative and bracing defense of free speech liberalism in the Internet age . . . In a world where free speech can never be taken for granted, Garton Ash's free speech liberalism is a good place to start any discussion."–David Luban, New York Review of Books

See the article here:
Free Speech: Ten Principles for a Connected World …

Posted in Free Speech | Comments Off on Free Speech: Ten Principles for a Connected World …

Freedom of Speech Essay – 2160 Words – StudyMode

Posted: October 15, 2016 at 5:23 am

Freedom of Speech

With varying opinions and beliefs, our society needs unlimited freedom to speak about anything and everything that concerns us in order to continually improve. Those free speech variables would be speech that creates a positive, not negative, scenario in both the long term and the short term. Dictionary.com defines freedom of speech as "the right of people to express their opinions publicly without governmental interference, subject to the laws against libel, incitement to violence or rebellion, etc." Freedom of speech is also known as free speech or freedom of expression. It is called freedom of expression because a person's beliefs and thoughts can be expressed in ways other than speech, such as art, writing, songs, and other forms of expression. If speaking and expressing ourselves freely is supposed to be without consequence, then why are there constant lawsuits and consequences for people who do? Freedom of speech and freedom of expression should mean exactly what they say. Although most people believe that they can speak about anything without consequence, this is very untrue. One example is speaking about the president in such a negative way that it raises red flags about your intentions. Because of high terrorist alerts, people have to limit what they say about bombs, 9/11, and anything they may say out of anger about our government or country. In the documentary Fahrenheit 9/11, Michael Moore spoke of a man who went to his gym and had a joking conversation with some of his gym buddies. He made a joke about George W. Bush and oil profits. The next morning the FBI was at his front door because someone had reported what he freely said. Although the statements might have been derogatory, they were still his opinion, and he had a right to say whatever he wanted to about the president.
In the past seven years there have been laws made that have obstructed our freedom of speech and our right to privacy. Many of us have paused in recent years when having a conversation because we are afraid of being eavesdropped on. Even the eavesdropping would not be a problem if it were not for the fear that legal action might be taken because of what you say. As TalkLeft notes about the awkwardness of current-day conversations: "We stop suddenly, momentarily afraid that our words might be taken out of context, then we laugh at our paranoia and go on. But our demeanor has changed, and our words are subtly altered. This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein's Iraq. And it's our future as we allow an ever-intrusive eye into our personal, private lives." Because of tighter security and defense by the United States, there have been visible and invisible changes to the meaning of freedom of speech and expression. One wrong word could lead to a disastrous consequence.

Another topic that has been limited for a long period of time is religion. Speaking about religion in certain places is severely frowned upon. One of those places is school. For as long as I can remember, schools have had rules that certain things related to religion could not be spoken of; anyone who did could face consequences. As a young child I could never understand why students and staff members could not openly express their love for God. I also thought that prayer was not permitted in schools, when in fact it is: prayers are permitted in school, but not in classrooms during class time. Also, wearing religious symbols or clothing is banned in schools. If we are free to speak our thoughts and feelings, then how can we be banned from doing these things? It is like saying that we are free to speak whatever we want, but we may not say anything. In the article A…


See more here:
Freedom of Speech Essay – 2160 Words – StudyMode

Posted in Freedom of Speech | Comments Off on Freedom of Speech Essay – 2160 Words – StudyMode

Debate: Freedom of Speech | Debate.org

Posted: at 5:23 am

To begin, I am very happy that you, Mdal, joined my debate. It appears that your arguments appeal to logic, which is, in my opinion, the most persuasive type of argument. I will primarily be appealing to logic, but will also touch on ideals of value, as they are among the main moral reasons I support this idea. I have also adapted the format of my arguments to suit your style.

Voltaire, an Enlightenment thinker, was regarded as having as intuitive and influential a mind as Montesquieu, Rousseau, and Locke. All were influential people whose beliefs influenced the framers of the Constitution, and all created ideals that support and inform my own position on restricting First Amendment rights for hate groups gathering in public areas.

I agree with your definition of what the constitution is advancing us towards, "a stable, liberty driven, peaceful, prosperous state", and would in turn like to define hate groups as any groups that gather with the intention of breeding fear, terror, hate, or violence towards any particular group of people (defined as a group sharing a race, religion, or belief [such as sexual orientation]). More specifically, I will be focusing on the two groups you mentioned, the Ku Klux Klan and the Aryan Brotherhood.

Now, before I begin my own arguments, I will answer your question: “who gets to say what is ok and what isn’t?”

I have long meditated in search of a proper way for our nation to adapt to such a monumental change as I have proposed. The only way that I could think of was to add a fourth branch to our current system of checks and balances. This branch would be in charge of adapting the constitution to better suit the nation as it evolves (including any exceptions the members of this branch deem necessary to create). They would have equal power to the executive, legislative and judicial branches, and their adjustments would be checked by both the legislative branch (requiring a majority vote, as opposed to the current two-thirds vote necessary to create an amendment) and the judicial branch, to make sure that any and all changes and exceptions created by this new branch follow the main ideals upheld within our nation and do not violate the main intentions of the framers' ideals. I realize that this is also a very controversial topic, and would love to hear any and all concerns you have regarding this issue; however, I do not want this to distract us from the main topic of our debate.

Rebuttal #1: In response to the "slippery-slope" argument. Logic: The system of checks and balances was created to stop any one group from gaining power. Adapting this system by creating another branch should quiet any worries you had about the "slippery slope" that may occur, as the extent of the branch's power will be checked by two other branches, the Legislative and the Judicial. Therefore, the new branch will not be able to abuse this power, and, because of these restrictions, it would not be able to quiet the entire "marketplace of ideas."

Rebuttal #2: In response to the argument that this will limit the marketplace of ideas. Logic: You brought up the argument that if we allow bad ideas to mix with good ideas, then the good ideas will "rise to the top." In response, I would like to bring up the case of Osama bin Laden, a terrorist who holds what are commonly assumed to be "bad ideas." Because of bin Laden's influential abilities, his bad ideas were able to rise above the good ideas, eventually leading to a great influx of new members into terrorist movements and to the tragic destruction of the World Trade Center in 2001.

I am in no way saying that the KKK or the Aryan Brotherhood has power equal to terrorists; I am instead proposing that they have similar bad ideas focused on fear and hatred towards a group of people. If the KKK were to gain as influential a leader (horrendous, but influential nonetheless) as Osama bin Laden, who's to say whether our current small national terrorist group, the KKK, would not turn into a worldwide terrorist organization such as the one created by bin Laden?

It is better to regulate the public meetings of these organizations now, as opposed to later when their power may exceed that of the government they are encompassed by.

Rebuttal #3: In response to the argument that free speech keeps our government accountable. Logic: The government is not a group of people defined by race, religion, or belief (refer to the definition of groups of people), and the branch will only have the power to stop hate groups from publicly discussing their ideas (note that I am not restricting their right to gather in private, purely in public). Therefore, the proposition will have no effect on those who wish to speak out against the government.

Now onto my main argument:

Argument: We are currently not fully acknowledging people's natural rights. Logic: According to the natural rights originally proposed and supported by Enlightenment thinkers such as Locke, Montesquieu, and Rousseau, all people are born with the right to live their lives any way they like without causing physical harm to another individual, directly or indirectly.

What I question within this right is the restriction "without causing physical harm to another individual, directly or indirectly." I concede that I am working under the assumption that hate groups gather with a common goal of asserting their superiority (through violence or terror) over a different group of people. I also concede that I work under the assumption that mental harm can become so intense that it eventually harms a person physically (I state this only because it was not common knowledge around the time of the Enlightenment, and therefore was not included in their formulation of the right). I believe that these are fairly common assumptions, and will therefore continue with my argument. If we allow groups whose goal is asserting superiority over a specific group of people, then those groups, whether they currently act upon this goal or plan to accomplish it in the future, either directly or indirectly threaten the safety of others.

I could go on, but I do not wish to state all of my arguments in the first round of our five-round discussion.

Thank you again for accepting this debate, so far it proves to be quite promising.

I will first respond to tsmart's rebuttals to my 3 opening arguments; from there I will counter tsmart's single argument; finally, I must respond to the possible creation of a 4th branch of government as the actor created by tsmart in this case. I, too, do not want this debate dramatically sidetracked by a discussion about the actor who will create the new laws proposed by tsmart. However, as he uses this new 4th branch as an answer to my 3rd argument, it has become very important to the core of this debate and will thus be discussed when answering tsmart's first rebuttal.

With this signposting finished, let's get to some arguments.

Rebuttal #1: Tsmart’s Rebuttal assures us that through the creation of the 4th branch of government who’s sole job is two interpret freedom of speech, and decide what is and what is not allowable under our new laws which limit certain types of speech. Tsmart’s exact quote of what the 4th branch of government would be is: “This branch would be in charge of adapting the constitution to better suit the nation as it evolves (including any exceptions the members of this branch deem necessary to create.) They would have equal power to the executive, legislative and judicial branches, and would their adjustments would be checked by both the legislative branch (requiring a majority vote as opposed to the current two thirds vote necessary to create an amendment) and the judicial branch to make sure that any and all changes and exceptions created by this new branch follow the main ideals that are upheld within our nation, and do not violate the main intentions of the framers ideals.”

My response: Whooooooo eeee! Where to start on this one?

To begin with, it seems at first blush that the 4th branch is going to usurp what has been the power of the Supreme Court, namely interpreting the constitution. However, upon closer examination it seems that Tsmart has actually created a body whose job is much more than merely interpreting the constitution; it is actually a body whose job is to CHANGE the constitution. So basically this new body is invented to abridge, and thus destroy, the power of the 1st amendment (one of the most important amendments in our constitution, one that has been upheld through countless court cases), take the power of the states and Congress (the governmental structures that usually keep all of the checks and balances on the creation of new amendments), and give it all to this new 4th branch. Basically we have reorganized the very makeup of American government for the express reason of censoring people. In a cost-benefit analysis, the cost of destabilizing the government by shifting the powers set in our government by our founding fathers to a new, strange, and untested power structure, for the possibly non-existent benefit of censoring hate groups, seems dramatically unbalanced. Under this cost-benefit analysis, any marginal benefits we might get from censorship are DRAMATICALLY outweighed by the dangers of radically upsetting our governmental structure, which shows that the CON's proposed solutions just aren't worth the trouble.

Rebuttal #2: In response to my argument for an open marketplace of ideas (something we have now but will lose if we lose freedom of speech), Tsmart brings up the example of Osama bin Laden and how his ideas have risen to the top in some places and beaten out better ideas, arguing that we should instead keep these sorts of ideas out of the public's purview.

My Response: Tsmart actually just proved my point by using the example of Osama bin Laden. Tell me, readers (and Tsmart), have you been convinced by listening to bin Laden on our television? It wasn't hidden from us. Everyone in the US is allowed to listen to what bin Laden has to say, yet HERE in the US, where the marketplace of ideas flourishes, bin Laden's brand of extremism hasn't gained a foothold. The places where he is much more popular don't have the myriad of viewpoints we have the capacity of getting here in the States; instead, in places like Iran, Saudi Arabia, Afghanistan, Pakistan and other nations in the Middle East, we find that the freer the speech, the less extremist the views in the country. This is because when the marketplace of ideas is allowed to work, people are able to make well-informed decisions, and that usually leads them away from extremist views and towards the center ground when considering an issue. Thus we can see how Tsmart's example proves exactly how important the marketplace of ideas really is, and how important it is to keep from abridging the first amendment, which is SO key to keeping the marketplace of ideas viable.

Rebuttal #3: I stated that freedom of speech is a huge check on the government. Tsmart says: "…the branch will only have the power to regulate hate groups from publicly discussing (note I am not restricting their right to gather in privacy, purely in public) their ideas, the proposition will have no effect on those who wish to speak out against the government." My Response: What about the hate groups, Tsmart? What happens if an incredibly racist, cruel, mean, hate-filled Neo-Nazi has a well-conceived critique of the government, but wants to express this brilliant critique in hate-filled language? His speech, though offensive to you and me, will also give a benefit to society, because he will point out something about the government which needs to be looked at. Re-reading your quote, you say that the hate group will be unable to discuss their ideas in public; what if their ideas have to do with the government? Is this a new exception? Are hate groups allowed to talk about the government? You see how restricting even a small part of freedom of speech has huge ramifications for everyone in our society? Rather than risk losing the benefit of one of the best checks on our government (freedom of speech), we should play it safe and not try to silence people we don't agree with.

On to Tsmart’s argument of expanded natural rights, His claim is that if people are railed against in public by hate groups they may be harmed mentally and that may eventually lead to physical harm. Thus we should protect these minorities and targeted groups from the hate groups.

Response to Tsmart’s Argument: Tsmart, it seems as though you have come to an overreaching understanding of what the government is supposed to do in situations like this. Your solution is to take preemptive action by taking away freedoms from people who might threaten others. However it seems as though the goal you are trying to accomplish is to make certain that the targeted minority groups ARE safe as well as help them FEEL safe. This goal can be met much better by an investment in anti-hate laws which will increase the punishment for hate crimes, or better yet you could increase the capabilities of the police and thus keep extremist groups like the hate organizations in line. However abridging freedom of speech is not the best, or even a decent, way of defending targeted minority groups.

Read more:
Debate: Freedom of Speech | Debate.org

Posted in Freedom of Speech | Comments Off on Debate: Freedom of Speech | Debate.org

History of artificial intelligence – Wikipedia, the free …

Posted: August 30, 2016 at 11:03 pm

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with “an ancient wish to forge the gods.”

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The Turing test was proposed by British mathematician Alan Turing in his 1950 paper "Computing Machinery and Intelligence", which opens with the words: "I propose to consider the question, 'Can machines think?'" The term 'Artificial Intelligence' was created at a conference held at Dartmouth College in 1956.[2] Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created artificial intelligence field with the Logic Theory Machine (1956), and the General Problem Solver in 1957.[3] In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000.[4] John McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.[5]

In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late '80s the investors became disillusioned and withdrew funding again.

McCorduck (2004) writes “artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized,” expressed in humanity’s myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea.[7] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem.[8] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots), and speculation, such as Samuel Butler's "Darwin among the Machines." AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[11] Hero of Alexandria,[12] Al-Jazari and Wolfgang von Kempelen.[14] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion. Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it."[15][16]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or "formal", reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), the Muslim mathematician al-Khwārizmī (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.[17]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[18] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all possible knowledge.[19] Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[20]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[21] Hobbes famously wrote in Leviathan: "reason is nothing but reckoning".[22] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate."[23] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica, in 1913. Inspired by Russell's success, David Hilbert challenged mathematicians of the 1920s and '30s to answer this fundamental question: "can all of mathematical reasoning be formalized?"[17] His question was answered by Gödel's incompleteness proof, Turing's machine and Church's lambda calculus.[17][24] Their answer was surprising in two ways.

First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[17][26]
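The Turing machine's core idea (a finite rule table reading and writing symbols on a tape) is small enough to sketch directly. The following Python snippet is an illustrative toy, not any historical formulation; the rule-table encoding and the bit-inversion example are choices made here for brevity.

```python
def run_tm(tape, rules, state="start", pos=0, max_steps=10_000):
    """Minimal Turing machine: `rules` maps (state, symbol) to
    (symbol_to_write, head_move, next_state); '_' is the blank."""
    cells = dict(enumerate(tape))  # sparse tape, unwritten cells are blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A rule table that inverts every bit of a binary string,
# then halts on the first blank.
invert_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
```

For example, `run_tm("1011", invert_rules)` returns `"0100"`: the machine sweeps right, flipping each symbol, until it reads a blank and halts.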

Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”.[27] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)
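Lovelace's Note G targeted the Bernoulli numbers, which satisfy a simple recurrence: with B_0 = 1, the sum of C(m+1, j)·B_j for j = 0..m is zero for every m ≥ 1. The sketch below evaluates that recurrence with exact rational arithmetic from Python's standard library; it shows what the Engine was being asked to compute, not how Lovelace's actual program organized the work.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, ..., B_n] using the recurrence
    sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))  # solve for B_m exactly
    return B
```

Exact fractions matter here: the values (B_1 = -1/2, B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, …) are non-terminating in binary, so floating point would accumulate error.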

The first modern computers were the massive code-breaking machines of the Second World War (such as Z3, ENIAC and Colossus). The latter two of these machines were based on the theoretical foundation laid by Alan Turing[28] and developed by John von Neumann.[29]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 30s, 40s and early 50s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener’s cybernetics described control and stability in electrical networks. Claude Shannon’s information theory described digital signals (i.e., all-or-nothing signals). Alan Turing’s theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[30]

Examples of work in this vein include robots such as W. Grey Walter's turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[31]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[32] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[33] Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.
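The idealized unit itself is very simple: it fires exactly when enough of its inputs are active. A toy version in Python (the function names and thresholds are chosen here for illustration, not taken from the 1943 paper) shows how such threshold units realize simple logical functions:

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts-style unit: outputs 1 iff the count of
    active (1-valued) inputs reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0

def AND(a, b):
    return mp_neuron([a, b], threshold=2)  # both inputs must fire

def OR(a, b):
    return mp_neuron([a, b], threshold=1)  # any single input suffices
```

So `AND(1, 1)` yields 1 while `AND(1, 0)` yields 0; wiring such units together is what lets networks of them compute more complex Boolean functions.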

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[34] He noted that “thinking” is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible and the paper answered all the most common objections to the proposition.[35] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[36] Arthur Samuel’s checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[37] Game AI would continue to be used as a measure of progress in AI throughout its history.

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[38]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the “Logic Theorist” (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica, and find new and more elegant proofs for some.[39] Simon said that they had “solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind.”[40] (This was an early statement of the philosophical position John Searle would later call “Strong AI”: that machines can contain minds just as human bodies do.)[41]

The Dartmouth Conference of 1956[42] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”.[43] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[44] At the conference Newell and Simon debuted the “Logic Theorist” and McCarthy persuaded the attendees to accept “Artificial Intelligence” as the name of the field.[45] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[46]

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply “astonishing”:[47] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such “intelligent” behavior by machines was possible at all.[48] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[49] Government agencies like ARPA poured money into the new field.[50]

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called “reasoning as search”.[51]

The principal difficulty was that, for many problems, the number of possible paths through the “maze” was simply astronomical (a situation known as a “combinatorial explosion”). Researchers would reduce the search space by using heuristics or “rules of thumb” that would eliminate those paths that were unlikely to lead to a solution.[52]
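
The paradigm can be sketched in a few lines of code (a hypothetical toy maze, not any historical program): depth-first search proceeds step by step, backtracks at dead ends, and an optional `prune` function stands in for a heuristic that discards unpromising branches.

```python
# "Reasoning as search": step toward a goal, backtrack at dead ends.
def solve(state, goal, moves, path=None, prune=None):
    if path is None:
        path = [state]
    if state == goal:
        return path
    for nxt in moves(state):
        if nxt in path:                 # never revisit a state on this path
            continue
        if prune and prune(nxt, goal):  # heuristic: skip unlikely branches
            continue
        result = solve(nxt, goal, moves, path + [nxt], prune)
        if result:
            return result
    return None                          # dead end: the caller backtracks

# Toy maze as a graph: each cell lists the cells reachable from it.
maze = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
        "D": [], "E": ["F"], "F": []}
print(solve("A", "F", lambda s: maze[s]))  # ['A', 'C', 'E', 'F']
```

Without pruning, the search still visits every branch in the worst case; the combinatorial explosion described above is that worst case at scale, which is why heuristics that cut branches early mattered so much.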

Newell and Simon tried to capture a general version of this algorithm in a program called the “General Problem Solver”.[53] Other “searching” programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter’s Geometry Theorem Prover (1958) and SAINT, written by Minsky’s student James Slagle (1961).[54] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[55]

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow’s program STUDENT, which could solve high school algebra word problems.[56]

A semantic net represents concepts (e.g. “house”, “door”) as nodes and relations among concepts (e.g. “has-a”) as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[57] and the most successful (and controversial) version was Roger Schank’s Conceptual dependency theory.[58]
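
The structure is essentially a labeled graph; a minimal sketch (hypothetical relations, not Quillian’s actual program) looks like this:

```python
# A semantic net: concepts are nodes, labeled relations are links.
net = {
    ("house", "has-a"): "door",
    ("door", "has-a"): "handle",
    ("house", "is-a"): "building",
}

def related(concept, relation):
    """Follow a labeled link out of a concept node, if one exists."""
    return net.get((concept, relation))

print(related("house", "has-a"))  # door
```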

Joseph Weizenbaum’s ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[59]
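
ELIZA’s mechanism can be illustrated in a few lines (hypothetical rules, not Weizenbaum’s actual script): match a pattern, echo the user’s words back with a simple pronoun swap, and otherwise give a canned response.

```python
import re

# An ELIZA-style chatterbot: pattern-match, rephrase, or fall back.
rules = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Do you often feel {0}?"),
]

def reply(text):
    for pattern, template in rules:
        m = pattern.match(text)
        if m:
            # rephrase the captured fragment with a simple grammar rule
            fragment = m.group(1).replace("my", "your")
            return template.format(fragment)
    return "Please go on."  # canned fallback response

print(reply("I am worried about my exams"))
```

The program has no model of meaning at all: the illusion of understanding comes entirely from reflecting the user’s own words.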

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a “blocks world,” which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[60]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented “constraint propagation”), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd’s SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[61]

The first generation of AI researchers made these predictions about their work:

- 1958, H. A. Simon and Allen Newell: “within ten years a digital computer will be the world’s chess champion” and “within ten years a digital computer will discover and prove an important new mathematical theorem.”
- 1965, H. A. Simon: “machines will be capable, within twenty years, of doing any work a man can do.”
- 1967, Marvin Minsky: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
- 1970, Marvin Minsky (in Life Magazine): “In from three to eight years we will have a machine with the general intelligence of an average human being.”

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the “AI Group” founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[66] DARPA made similar grants to Newell and Simon’s program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[67] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[68] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[69]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should “fund people, not projects!” and allowed researchers to pursue whatever directions might interest them.[70] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[71] but this “hands off” approach would not last.

In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[72] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky’s devastating criticism of perceptrons.[73] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[74]

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, “toys”.[75] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[76]

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[84] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its “grandiose objectives” and led to the dismantling of AI research in that country.[85] (The report specifically mentioned the combinatorial explosion problem as a reason for AI’s failings.)[86] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[87] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. “Many researchers were caught up in a web of increasing exaggeration.”[88] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund “mission-oriented direct research, rather than basic undirected research”. Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[89]

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel’s incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[90] Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little “symbol processing” and a great deal of embodied, instinctive, unconscious “know how”.[91][92] John Searle’s Chinese Room argument, presented in 1980, attempted to show that a program could not be said to “understand” the symbols that it uses (a quality called “intentionality”). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as “thinking”.[93]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference “know how” or “intentionality” made to an actual computer program. Minsky said of Dreyfus and Searle “they misunderstand, and should be ignored.”[94] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers “dared not be seen having lunch with me.”[95] Joseph Weizenbaum, the author of ELIZA, felt his colleagues’ treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus’ positions, he “deliberately made it plain that theirs was not the way to treat a human being.”[96]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[97]

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that “the perceptron may eventually be able to learn, make decisions, and translate languages.” An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert’s 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt’s predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[73]
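
The learning rule itself fits in a short sketch (toy data, hypothetical parameters, not Rosenblatt’s hardware): weights are nudged toward each misclassified example. Minsky and Papert’s central observation was that a single layer like this can only learn linearly separable functions such as AND, and can never learn XOR.

```python
# A single-layer perceptron trained with Rosenblatt's error-correction rule.
def train(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred  # nudge weights toward misclassified examples
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Linearly separable AND: learnable. XOR would never converge.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```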

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[98] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[99] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[100] Prolog uses a subset of logic (Horn clauses, closely related to “rules” and “production rules”) that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum’s expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[101]
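
A Horn-clause rule says: if every fact in the body holds, the head holds. Prolog answers queries by backward chaining with unification; the simpler propositional idea behind such rules can be sketched by forward chaining over hypothetical facts:

```python
# Horn-clause rules: (body facts, head fact). If all of the body holds,
# the head holds. Forward chaining applies rules to a fixed point.
rules = [
    ({"parent", "male"}, "father"),
    ({"father"}, "ancestor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

print(forward_chain({"parent", "male"}, rules))
```

Restricting rules to this shape (a single positive head) is what keeps the computation tractable, in contrast to the astronomical search required by unrestricted resolution.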

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[102] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems, not machines that think as people do.[103]

Among the critics of McCarthy’s approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like “story understanding” and “object recognition” that required a machine to think like a person. In order to use ordinary concepts like “chair” or “restaurant” they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that “using precise language to describe essentially imprecise concepts doesn’t make them any more precise.”[104] Schank described their “anti-logic” approaches as “scruffy”, as opposed to the “neat” paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[105]

In 1975, in a seminal paper, Minsky noted that many of his fellow “scruffy” researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be “logical”, but these structured sets of assumptions are part of the context of everything we say and think. He called these structures “frames”. Schank used a version of frames he called “scripts” to successfully answer questions about short stories in English.[106] Many years later object-oriented programming would adopt the essential idea of “inheritance” from AI research on frames.
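
A frame can be sketched as a dictionary of slots holding default assumptions, where a more specific frame inherits from a general one and may override it (hypothetical slots, using the bird example above):

```python
# Frames: structured default assumptions with inheritance via "is-a" links.
frames = {
    "bird":    {"flies": True, "eats": "worms"},
    "penguin": {"is-a": "bird", "flies": False},  # overrides the default
}

def slot(frame, name):
    """Look up a slot, falling back to the parent frame's defaults."""
    while frame is not None:
        data = frames[frame]
        if name in data:
            return data[name]
        frame = data.get("is-a")
    return None

print(slot("penguin", "flies"), slot("penguin", "eats"))
```

Note the deliberately non-logical behavior: “birds fly” is a default, not a theorem, so a penguin can inherit “eats worms” while overriding “flies”. This override mechanism is the inheritance idea object-oriented programming later adopted.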

In the 1980s a form of AI program called “expert systems” was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[107]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[108]

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[109] Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[110]

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. “AI researchers were beginning to suspect (reluctantly, for it violated the scientific canon of parsimony) that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,”[111] writes Pamela McCorduck. “[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay”.[112] Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s.[113]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[114]

Chess playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed at Carnegie Mellon University; Deep Thought’s development paved the way for Deep Blue.[115]

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[116] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[117]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or “MCC”) to fund large scale projects in AI and information technology.[118][119] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[120]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a “Hopfield net”) could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called “backpropagation” (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[119][121]
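
The core of a Hopfield net fits in a short sketch (a toy four-unit pattern, not Hopfield’s own formulation): a Hebbian rule stores a pattern in the weight matrix, and repeated threshold updates pull a corrupted input back toward the stored pattern.

```python
# A tiny Hopfield net: Hebbian storage, then iterative threshold updates.
def store(patterns):
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]  # Hebbian rule: co-active units bond
    return w

def recall(w, state, steps=5):
    state = list(state)
    for _ in range(steps):
        for i in range(len(state)):
            total = sum(w[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if total >= 0 else -1
    return state

w = store([[1, 1, -1, -1]])
print(recall(w, [1, -1, -1, -1]))  # corrupted input settles to [1, 1, -1, -1]
```

The network “processes information” by sliding downhill to a stable state, which is why it can act as a content-addressable memory rather than a lookup table.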

The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[119][122]

The business community’s fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

The term “AI winter” was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[123] Their fears were well founded: in the late 80s and early 90s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[124]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were “brittle” (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[125]

In the late 80s, the Strategic Computing Initiative cut funding to AI “deeply and brutally.” New leadership at DARPA had decided that AI was not “the next wave” and directed funds towards projects that seemed more likely to produce immediate results.[126]

By 1991, the impressive list of goals penned in 1981 for Japan’s Fifth Generation Project had not been met. Indeed, some of them, like “carry on a casual conversation” had not been met by 2010.[127] As with other AI projects, expectations had run much higher than what was actually possible.[127]

In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[128] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec’s paradox). They advocated building intelligence “from the bottom up.”[129]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy’s logic and Minsky’s frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr’s work would be cut short by leukemia in 1980.)[130]

In a 1990 paper, “Elephants Don’t Play Chess,”[131] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since “the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.”[132] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[133]

The field of AI, now more than a half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI’s failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of “artificial intelligence”.[134] AI was both more cautious and more successful than it had ever been.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[135] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[136]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[137] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[138] In February 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[139]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of computers today.[140] In fact, Deep Blue’s computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[141] This dramatic increase is measured by Moore’s law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of “raw computer power” was slowly being overcome.

A new paradigm called “intelligent agents” became widely accepted during the 90s.[142] Although earlier researchers had proposed modular “divide and conquer” approaches to AI,[143] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI.[144] When the economist’s definition of a rational agent was married to computer science’s definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are “intelligent agents”, as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as “the study of intelligent agents”. This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[145]
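
The definition can be made concrete in a few lines (a hypothetical toy thermostat, purely illustrative): the agent perceives its environment and chooses the action that maximizes a measure of success.

```python
# An intelligent agent: perceive, then act to maximize expected success.
def agent(percept, actions, utility):
    """Pick the action with the highest utility given the current percept."""
    return max(actions, key=lambda a: utility(percept, a))

# Toy thermostat: perceive a temperature, act to move toward a setpoint.
def comfort(temp, action):
    target = 20
    effect = {"heat": 2, "off": 0, "cool": -2}[action]
    return -abs(temp + effect - target)  # closer to the setpoint is better

print(agent(15, ["heat", "off", "cool"], comfort))  # heat
```

Under this definition the thermostat, a chess program, and a firm are all “intelligent agents”; only the percepts, actions, and utility function differ, which is precisely what made the paradigm a common language across subfields.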

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell’s SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[144][146]

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[147] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous “scientific” discipline. Russell & Norvig (2003) describe this as nothing less than a “revolution” and “the victory of the neats”.[148][149]

Judea Pearl’s highly influential 1988 book[150] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for “computational intelligence” paradigms like neural networks and evolutionary algorithms.[148]

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems,[151] and these solutions proved to be useful throughout the technology industry,[152] in areas such as data mining, industrial robotics, logistics,[153] speech recognition,[154] banking software,[155] medical diagnosis[155] and Google’s search engine.[156]

The field of AI receives little or no credit for these successes. Many of AI’s greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[157] Nick Bostrom explains: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”[158]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research, as the New York Times reported in 2005: “Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”[159][160][161]

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[162]

Marvin Minsky asks “So the question is why didn’t we get HAL in 2001?”[163] Minsky believes that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blames the qualification problem.[164] For Ray Kurzweil, the issue is computer power and, using Moore’s Law, he predicts that machines with human-level intelligence will appear by 2029.[165] Jeff Hawkins argues that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[166] There are many other explanations and for each there is a corresponding research program underway.


Go here to read the rest:

History of artificial intelligence – Wikipedia, the free …


First Amendment – Watchdog.org

Posted: August 25, 2016 at 4:20 pm

By M.D. Kittle / August 14, 2016 / First Amendment, Free Speech, News, Power Abuse, Wisconsin / No Comments

There is a vital need for citizens to have an effective remedy against government officials who investigate them principally because of their partisan affiliation and political speech.

By M.D. Kittle / August 8, 2016 / Commentary, First Amendment, Free Speech, National, Wisconsin / No Comments

That’s precisely what I expected from a party whose platform includes rewriting the First Amendment.

By M.D. Kittle / August 3, 2016 / First Amendment, Free Speech, News, Power Abuse, Wisconsin / No Comments

The question that arises is: do conservatives have civil rights before Judge Lynn Adelman?

By M.D. Kittle / August 2, 2016 / First Amendment, News, Power Abuse, Wisconsin / No Comments

Now, years after defendants unlawfully seized and catalogued millions of our sensitive documents, we ask the court to vindicate our rights under federal law.

By M.D. Kittle / July 25, 2016 / First Amendment, National, News, Politics & Elections, Wisconsin / No Comments

Moore has uttered some of the more inflammatory, ill-informed statements in Congress.

By M.D. Kittle / July 14, 2016 / First Amendment, Judiciary, News, Power Abuse, Wisconsin / No Comments

“The process continues to be the punishment for people who were found wholly innocent of any wrongdoing,” she said.

View post:
First Amendment – Watchdog.org


Trump: Maybe ‘2nd Amendment People’ Can Stop Clinton’s …

Posted: August 10, 2016 at 9:08 pm

Republican presidential nominee Donald Trump raised eyebrows Tuesday when he suggested there is “nothing” that can be done to stop Hillary Clinton’s Supreme Court picks, except “maybe” the “Second Amendment people.”

“Hillary wants to abolish, essentially abolish the Second Amendment,” Trump said to the crowd of supporters gathered in the Trask Coliseum at North Carolina University in Wilmington. “If she gets to pick her judges, nothing you can do, folks.

“Although the Second Amendment people, maybe there is. I don’t know.”

After the speech, Clinton’s campaign seized on the remarks.

“This is simple: what Trump is saying is dangerous,” read a statement from campaign manager Robby Mook. “A person seeking to be president of the United States should not suggest violence in any way.”

ABC News reached out to the Secret Service for response to Trump’s comment, and the agency said it was aware of the remarks.

The Trump campaign insisted the candidate’s words referred to the power of “Second Amendment people” to unify.

“It’s called the power of unification: 2nd Amendment people have amazing spirit and are tremendously unified, which gives them great political power,” read a statement, titled “Trump Campaign Statement Against Dishonest Media,” from senior communications adviser Jason Miller.

In a tweet Tuesday night, Trump tried to explain his remarks.

And in an interview with Fox News Tuesday night, Trump told the network: “This is a strong, powerful movement, the Second Amendment” and called the NRA “terrific people.”

“There can be no other interpretation,” he said of his earlier remarks. “I mean, give me a break.”

Trump’s running mate Mike Pence rose to the candidate’s defense and said Trump was not insinuating that there should be violence against Clinton.

“What Donald Trump is clearly saying is that people who cherish that right, who believe that firearms in the hands of law-abiding citizens make our communities more safe, not less safe, should be involved in the political process and let their voice be heard,” Pence said today in an interview with NBC10, a local Philadelphia TV station.

Clinton’s running mate, Virginia Sen. Tim Kaine, told reporters today that Trump’s comments “revealed this complete temperamental misfit with the character that’s required to do the job and in a nation.”

“We gotta be pulling together, and countenancing violence is not something any leader should do,” Kaine said.

Connecticut Democratic Sen. Chris Murphy, who led a 15-hour filibuster in June to force a vote on gun control measures, took to Twitter to voice his displeasure with Trump’s comments.

“This isn’t play,” wrote Murphy. “Unstable people with powerful guns and an unhinged hatred for Hillary are listening to you, @realDonaldTrump.”

And Rep. Eric Swalwell, D-Calif., wrote in a tweet that because he believed Trump “suggested someone kill Sec. Clinton,” he was calling for a Secret Service investigation.

See the rest here:
Trump: Maybe ‘2nd Amendment People’ Can Stop Clinton’s …
