The Evolutionary Perspective
Posted: July 18, 2016 at 3:37 pm
There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles–all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.
A singularity is a sign that your model doesn’t apply past a certain point, not infinity arriving in real life.
A singularity, as most commonly used, is a point at which expected rules break down. The term comes from mathematics, where a point on a curve with a sudden break in slope is considered to have an undefined or infinite slope; such a point is known as a singularity.
The term has extended into other fields; the most notable use is in astrophysics, where a singularity is a point (usually, but perhaps not exclusively, at the center of a black hole) where the curvature of spacetime approaches infinity.
This article, however, is not about the mathematical or physics uses of the term, but rather the borrowing of it by various futurists. They define a technological singularity as the point beyond which we can know nothing about the world. So, of course, they then write at length on the world after that time.
It’s intelligent design for the IQ 140 people. This proposition that we’re heading to this point at which everything is going to be just unimaginably different – it’s fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can’t obscure that fact for me, no matter what numbers he marshals in favor of it. He’s very good at having a lot of curves that point up to the right.
In transhumanist belief, the “technological singularity” refers to a hypothetical point beyond which human technology and civilization are no longer comprehensible to the current human mind. The theory of technological singularity states that at some point in time humans will invent a machine that, through the use of artificial intelligence, will be smarter than any human could ever be. This machine in turn will be capable of inventing new technologies that are even smarter. This event will trigger an exponential explosion of technological advances, whose outcome and effect on humankind are heavily debated by transhumanists and singularists.
Many proponents of the theory believe that the machines will eventually see no use for humans on Earth and simply wipe us out; their intelligence being far superior to ours, there would probably be nothing we could do about it. They also fear that the use of extremely intelligent machines to solve complex mathematical problems may lead to our extinction. The machine may theoretically respond to our question by turning all matter in our solar system or our galaxy into a giant calculator, thus destroying all of humankind.
Critics, however, believe that humans will never be able to invent a machine that will match human intelligence, let alone exceed it. They also attack the methodology that is used to “prove” the theory by suggesting that Moore’s Law may be subject to the law of diminishing returns, or that other metrics used by proponents to measure progress are totally subjective and meaningless. Theorists like Theodore Modis argue that progress measured in metrics such as CPU clock speeds is decreasing, refuting Moore’s Law. (As of 2015, not only is Moore’s Law beginning to stall, but Dennard scaling is also long dead, returns in raw compute power from additional transistors are subject to diminishing returns as we use more and more of them, there are also Amdahl’s Law and Wirth’s law to take into account, and raw computing power simply doesn’t scale linearly into real marginal utility. Even after all of that, we still haven’t taken into account the fundamental limitations of conventional computing architecture. Moore’s law suddenly doesn’t look to be the panacea to our problems now, does it?)
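Amdahl’s Law, mentioned in the parenthetical above, is one concrete reason raw compute doesn’t translate linearly into useful speed: the speedup from parallelizing a workload is capped by its serial fraction. A minimal sketch (the 5% serial fraction is an arbitrary illustrative assumption):

```python
# Amdahl's Law: speedup(n) = 1 / (s + (1 - s) / n),
# where s is the serial (non-parallelizable) fraction of a workload
# and n is the number of processors.

def amdahl_speedup(serial_fraction: float, n_processors: int) -> float:
    """Overall speedup from running the parallel portion on n processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

if __name__ == "__main__":
    s = 0.05  # assume even a modest 5% of the work is inherently serial
    for n in (1, 2, 8, 64, 1024):
        print(f"{n:>5} processors -> speedup {amdahl_speedup(s, n):.2f}x")
    # The speedup can never exceed 1/s = 20x, no matter how many
    # processors are added: a concrete case of diminishing returns.
```

The takeaway matches the critics’ point: piling on more transistors or cores yields less and less marginal benefit once the serial portion dominates.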
Transhumanist thinkers see a chance of the technological singularity arriving on Earth within the twenty-first century, a concept that most[Who?] rationalists either consider a little too messianic in nature or ignore outright. Some of the wishful thinking may simply be the expression of a desire to avoid death, since the singularity is supposed to bring the technology to reverse human aging, or to upload human minds into computers. However, recent research, supported by singularitarian organizations including MIRI and the Future of Humanity Institute, does not support the hypothesis that near-term predictions of the singularity are motivated by a desire to avoid death, but instead provides some evidence that many optimistic predictions about the timing of a singularity are motivated by a desire to “gain credit for working on something that will be of relevance, but without any possibility that their prediction could be shown to be false within their current career”.
Don’t bother quoting Ray Kurzweil to anyone who knows a damn thing about human cognition or, indeed, biology. He’s a computer science genius who has difficulty in perceiving when he’s well out of his area of expertise.
Eliezer Yudkowsky identifies three major schools of thinking when it comes to the singularity. While all share common ground in advancing intelligence and rapidly developing technology, they differ in how the singularity will occur and the evidence to support the position.
Under this school of thought, it is assumed that change and development of technology and human (or AI assisted) intelligence will accelerate at an exponential rate. So change a decade ago was much faster than change a century ago, which was faster than a millennium ago. While thinking in exponential terms can lead to predictions about the future and the developments that will occur, it does mean that past events are an unreliable source of evidence for making these predictions.
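The accelerating-change claim above is, at bottom, arithmetic about exponential growth. A toy sketch of what a fixed doubling period implies; the ten-year doubling period is an assumption chosen purely for illustration:

```python
# Under a fixed doubling period, most of the cumulative growth always
# happens in the most recent interval, which is why "change a decade
# ago was much faster than change a century ago" under this model.

def capability(years_elapsed: float, doubling_period: float = 10.0) -> float:
    """Relative capability after `years_elapsed`, starting from 1.0."""
    return 2.0 ** (years_elapsed / doubling_period)

if __name__ == "__main__":
    total_growth = capability(100) - 1.0           # growth over a century
    last_decade = capability(100) - capability(90) # growth in the final decade
    print(f"Share of the century's growth occurring in its last decade: "
          f"{last_decade / total_growth:.1%}")
```

Note that this is exactly why extrapolating such a curve is fragile: the model's predictions are dominated by the most recent data, so any flattening (as with Moore's Law) invalidates the forecast almost immediately.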
The “event horizon” school posits that the post-singularity world would be unpredictable. Here, the creation of a super-human artificial intelligence will change the world so dramatically that it would bear no resemblance to the current world, or even the wildest science fiction. This school of thought sees the singularity as a single point event rather than a process; indeed, it is this thesis that spawned the term “singularity.” However, this view of the singularity does treat transhuman intelligence as some kind of magic.
This posits that the singularity is driven by a feedback cycle between intelligence-enhancing technology and intelligence itself. As Yudkowsky (who endorses this view) puts it: “What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces.” When this feedback loop of technology and intelligence begins to increase rapidly, the singularity is upon us.
There is also a fourth singularity school which is much more popular than the other three: It’s all a load of baloney! This position is not popular with high-tech billionaires.
This is largely dependent on your definition of “singularity”.
The intelligence explosion singularity is by far the most unlikely. According to present calculations, a hypothetical future supercomputer may well not be able to replicate a human brain in real time. We presently don’t even understand how intelligence works, and there is no evidence that intelligence is self-iterative in this manner – indeed, it may be that improvements on intelligence become harder the smarter you are, so that each successive improvement is increasingly difficult to execute. How much smarter than a human being it is even possible for something to be is itself an open question. Energy requirements are another issue; humans can run off of Doritos and Mountain Dew, while supercomputers require vast amounts of energy to function. Unless such an intelligence can solve problems better than groups of humans, its greater intelligence may well not matter, as it may not be as efficient as groups of humans working together to solve problems.
Another major issue arises from the nature of intellectual development; if an artificial intelligence needs to be raised and trained, it may well take twenty years or more between generations of artificial intelligences to get further improvements. More intelligent animals seem to generally require longer to mature, which may put another limitation on any such “explosion”.
Accelerating change is questionable; in real life, the rate of patents per capita actually peaked in the 20th century, with a minor decline since then, despite the fact that human beings have become more intelligent and acquired superior tools. As noted above, Moore’s Law has been in decline, and outside the realm of computers, the rate of increase in other things has not been exponential – airplanes and cars continue to improve, but they do not improve at the ridiculous rate of computers. It is likely that once computers hit the physical limits of transistor density, their rate of improvement will fall off dramatically, and already even today, computers which are “good enough” continue to operate for many years, something which was unheard of in the 1990s, when old computers were rapidly and obviously obsoleted by new ones.
According to this point of view, the Singularity is a past event, and we live in a post-Singularity world.
The rate of advancement has actually been in decline in recent times, as patents per capita have gone down, and the rate of increase of technology has declined rather than risen, though the basal rate is higher than it was in centuries past. According to this point of view, the intelligence explosion and increasing rate of change already happened with computers, and now that everyone has handheld computing devices, the rate of increase is going to decline as we hit natural barriers in how much additional benefit we gain out of additional computing power. The densification of transistors on microchips has slowed by about a third, and the absolute limit to transistors is approaching – a true, physical barrier which cannot be bypassed or broken, and which would require an entirely different means of computing to create a denser still microchip.
From the point of view of travel, humans have gone from walking to sailing to railroads to highways to airplanes, but communication has now reached the point where a lot of travel is obsolete – the Internet is omnipresent and allows us to effectively communicate with people on any corner of the planet without travelling at all. From this point of view, there is no further point of advancement, because we’re already at the point where we can be anywhere on the planet instantly for many purposes, and with improvements in automation, the amount of physical travel necessary for the average human being has declined over recent years. Instant global communication and the ability to communicate and do calculations from anywhere are a natural physical barrier, beyond which further advancement is less meaningful, as it is mostly just making things more convenient – the cost is already extremely low.
The prevalence of computers and communications devices has completely changed the world, as has the presence of cheap, high-speed transportation technology. The world of the 21st century is almost unrecognizable to people from the founding of the United States in the latter half of the 18th century, or even to people from the height of the industrial era at the turn of the 20th century.
Extraterrestrial technological singularities might become evident from acts of stellar or cosmic engineering. One such possibility, for example, would be the construction of Dyson Spheres, which would alter a star’s electromagnetic spectrum in a way detectable from Earth. Both SETI and Fermilab have incorporated that possibility into their searches for alien life.
A different view of the concept of singularity is explored in the science fiction book Dragon’s Egg by Robert Lull Forward, in which an alien civilization on the surface of a neutron star, being observed by human space explorers, goes from Stone Age to technological singularity in the space of about an hour in human time, leaving behind a large quantity of encrypted data for the human explorers, which humanity is expected to need over a million years even to develop the technology to decrypt.
No signs of extraterrestrial civilizations have been found as of 2016.
Posted: July 12, 2016 at 6:16 am
(Photo: Peter Hapak/New York Magazine; Hair by Kelsey Bauer, Make-up by Amber Doty/Mirror Mirror)
Martine prefers not to limit herself to available words: She’s suggested using Pn., for person, in place of Mr. and Ms., and spice to mean husband or wife. But trans is a prefix she likes a lot, for it contains her self-image as an explorer who crosses barriers into strange new lands. (When she feels a connection to a new acquaintance, she says that she transcends.) And these days Martine sees herself less as transgender and more as what is known as transhumanist, a particular kind of futurist who believes that technology can liberate humans from the limits of their biology, including infertility, disease, and decay, but also, incredibly, death. Now, in her spare time, when she’s not running a $5 billion company, or flying her new helicopter up and down the East Coast, or attending to her large family and three dogs, she’s tinkering with ways that technology might push back that ultimate limit. She believes in a foreseeable future in which the beloved dead will live again as digital beings, reanimated by sophisticated artificial-intelligence programs that will be as cheap and accessible to every person as iTunes. “I know this sounds messianic or even childlike,” she wrote to me in one of many emails over the summer. “But I believe it is simply practical and technologically inevitable.”
During our first conversation, in the beige United Therapeutics outpost in Burlington, Vermont, Martine made a distinction between boundaries and borders. Borders, denials, limits: these are Martine’s siren calls, pulling her toward and beyond them even as she, a pharma executive responsible to shareholders and a board, must survive every day within regulations and laws. She was sprawled across from me on a sectional couch, her hair in a ponytail and her long legs before her. “At times I sort of feel like Queen Elizabeth,” she said. “You know, she lives in a world of limitations, having the appearance of great authority and being able to transcend any limitations. But in reality she is in a little cage.”
Martin Rothblatt was raised by observant Jewish parents in a working-class suburb of San Diego; his father was a dentist. His mother, Rosa Lee, says she always believed her first child was destined for greatness. Days after Martin’s birth, “I was walking back and forth in the living room and I was holding him like a football. And I remember saying, Menashe, honey (that’s his Hebrew name), I don’t know what it is, but there’s something special about you. You will make a difference in this world. And she is.”
The Rothblatts were the only Jewish family in a mostly Hispanic neighborhood, and Martin grew up obsessed with difference, seeking out families unlike his own. Rosa Lee remembers her child as a fanatical reader, the kind of kid who would spend an entire family vacation with his nose in Siddhartha, and Martine herself sent me a list of the books that as an adolescent had been influential: Exodus, by Leon Uris; anything by Isaac Asimov; and especially Black Like Me, by John Howard Griffin. But Martin was an unmotivated student and dropped out of UCLA after freshman year, because he wanted to see the world; he had read that the Seychelles were like a paradise, and with a few hundred dollars in his pocket he made his way there.
The Seychelles disappointed. Cockroaches covered the floor of his hut at night, and when he turned on the light, moths or locusts would swarm in through the open windows. But a friend of a friend was working at an Air Force base tracking satellites for NASA, and one day Martin was invited to visit. “Outside, there was a big, giant satellite dish. Inside, it was like we stepped into the future,” Martine told me. Everything was crisp and clean, she said, like a vision out of science fiction made real. “It seemed to me the satellite engineer was making the whole world come together. Like that was the center of the world.” Martin hightailed it back to California to re-enroll at UCLA and transform himself into an expert in the law of space.
Martin first met Bina at a networking event in Hollywood in 1979. “There was a DJ, and the music started, and there was a disco ball and a dance floor,” Martine remembers. “I saw Bina sitting over there, and I just felt an enormous attraction to her and just walked over and asked her to dance. And she agreed to dance. We danced, we sat down, talked, and we’ve been together ever since.” They were from different worlds: Martin was a white Jewish man on his way to getting a J.D.-M.B.A.; Bina, who is African-American, grew up in Compton and was working as a real-estate agent. But they had much in common, starting with the fact that they were both single parents. Martin had met a woman in Kenya on his way home from the Seychelles; the relationship had not worked out, but had produced a son, Eli, who was 3. Bina’s daughter, Sunee, was about the same age.
See the original post here: Martine Rothblatt Is the Highest-Paid Female CEO in …
Posted: July 7, 2016 at 4:10 pm
Human evolution is the lengthy process of change by which people originated from apelike ancestors. Scientific evidence shows that the physical and behavioral traits shared by all people originated from apelike ancestors and evolved over a period of approximately six million years.
One of the earliest defining human traits, bipedalism — the ability to walk on two legs — evolved over 4 million years ago. Other important human characteristics — such as a large and complex brain, the ability to make and use tools, and the capacity for language — developed more recently. Many advanced traits — including complex symbolic expression, art, and elaborate cultural diversity — emerged mainly during the past 100,000 years.
Humans are primates. Physical and genetic similarities show that the modern human species, Homo sapiens, has a very close relationship to another group of primate species, the apes. Humans and the great apes (large apes) of Africa — chimpanzees (including bonobos, or so-called pygmy chimpanzees) and gorillas — share a common ancestor that lived between 8 and 6 million years ago. Humans first evolved in Africa, and much of human evolution occurred on that continent. The fossils of early humans who lived between 6 and 2 million years ago come entirely from Africa.
Most scientists currently recognize some 15 to 20 different species of early humans. Scientists do not all agree, however, about how these species are related or which ones simply died out. Many early human species — certainly the majority of them — left no living descendants. Scientists also debate how to identify and classify particular species of early humans, and what factors influenced the evolution and extinction of each species.
Early humans first migrated out of Africa into Asia probably between 2 million and 1.8 million years ago. They entered Europe somewhat later, between 1.5 million and 1 million years ago. Species of modern humans populated many parts of the world much later. For instance, people first came to Australia probably within the past 60,000 years and to the Americas within the past 30,000 years or so. The beginnings of agriculture and the rise of the first civilizations occurred within the past 12,000 years.
Paleoanthropology is the scientific study of human evolution. Paleoanthropology is a subfield of anthropology, the study of human culture, society, and biology. The field involves an understanding of the similarities and differences between humans and other species in their genes, body form, physiology, and behavior. Paleoanthropologists search for the roots of human physical traits and behavior. They seek to discover how evolution has shaped the potentials, tendencies, and limitations of all people. For many people, paleoanthropology is an exciting scientific field because it investigates the origin, over millions of years, of the universal and defining traits of our species. However, some people find the concept of human evolution troubling because it can seem not to fit with religious and other traditional beliefs about how people, other living things, and the world came to be. Nevertheless, many people have come to reconcile their beliefs with the scientific evidence.
Early human fossils and archeological remains offer the most important clues about this ancient past. These remains include bones, tools and any other evidence (such as footprints, evidence of hearths, or butchery marks on animal bones) left by earlier people. Usually, the remains were buried and preserved naturally. They are then found either on the surface (exposed by rain, rivers, and wind erosion) or by digging in the ground. By studying fossilized bones, scientists learn about the physical appearance of earlier humans and how it changed. Bone size, shape, and markings left by muscles tell us how those predecessors moved around, held tools, and how the size of their brains changed over a long time. Archeological evidence refers to the things earlier people made and the places where scientists find them. By studying this type of evidence, archeologists can understand how early humans made and used tools and lived in their environments.
The process of evolution involves a series of natural changes that cause species (populations of different organisms) to arise, adapt to the environment, and become extinct. All species or organisms have originated through the process of biological evolution. In animals that reproduce sexually, including humans, the term species refers to a group whose adult members regularly interbreed, resulting in fertile offspring — that is, offspring themselves capable of reproducing. Scientists classify each species with a unique, two-part scientific name. In this system, modern humans are classified as Homo sapiens.
Evolution occurs when there is change in the genetic material — the chemical molecule, DNA — which is inherited from the parents, and especially in the proportions of different genes in a population. Genes represent the segments of DNA that provide the chemical code for producing proteins. Information contained in the DNA can change by a process known as mutation. The way particular genes are expressed — that is, how they influence the body or behavior of an organism — can also change. Genes affect how the body and behavior of an organism develop during its life, and this is why genetically inherited characteristics can influence the likelihood of an organism’s survival and reproduction.
Evolution does not change any single individual. Instead, it changes the inherited means of growth and development that typify a population (a group of individuals of the same species living in a particular habitat). Parents pass adaptive genetic changes to their offspring, and ultimately these changes become common throughout a population. As a result, the offspring inherit those genetic characteristics that enhance their chances of survival and ability to give birth, which may work well until the environment changes. Over time, genetic change can alter a species’ overall way of life, such as what it eats, how it grows, and where it can live. Human evolution took place as new genetic variations in early ancestor populations favored new abilities to adapt to environmental change and so altered the human way of life.
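The mechanism described above, in which an advantageous variant spreads through a population over generations, can be sketched as a minimal deterministic selection model; the fitness advantage and starting frequency here are illustrative assumptions, not measured values:

```python
# Minimal haploid selection model: a variant with a small fitness
# advantage rises in frequency over generations, illustrating how
# adaptive genetic changes become common throughout a population.
# The 5% advantage and 1% starting frequency are illustrative only.

def next_frequency(p: float, fitness_advantage: float) -> float:
    """Allele frequency after one generation of selection on frequency p."""
    w_variant, w_other = 1.0 + fitness_advantage, 1.0
    mean_fitness = p * w_variant + (1.0 - p) * w_other
    return p * w_variant / mean_fitness

if __name__ == "__main__":
    p = 0.01  # the variant starts rare
    for generation in range(1000):
        p = next_frequency(p, fitness_advantage=0.05)
    print(f"Frequency after 1000 generations: {p:.3f}")
    # The variant comes to dominate the population, even though no
    # single individual ever changed: only the population did.
```

Real populations add mutation, drift, and changing environments on top of this, which is why, as the passage notes, a trait that works well can stop being adaptive when the environment changes.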
Dr. Rick Potts provides a short video introduction to some of the evidence for human evolution, in the form of fossils and artifacts.
Posted: July 3, 2016 at 12:22 pm
On April 27, 2016, Sirena, the newest member of the Oceania Cruises fleet, was christened in Barcelona, Spain. Watch the event as it happened live, including opening remarks from Oceania Cruises President Jason Montague, Sirena’s Godmother Claudine Pépin, the christening of the ship, and all the festivities!
Filled with a spectacular array of diverse and exotic destinations, your world awaits your discovery. There is simply no better way to explore it than aboard the elegant ships of Oceania Cruises. Our unique itineraries are wide-ranging, featuring the most fascinating destinations throughout the world. Regatta, Insignia, Nautica, Sirena, Marina and Riviera are all intimate and luxurious, with each calling on the world’s most desirable ports, from historic cities and modern meccas to seaside villages and faraway islands. On a voyage with Oceania Cruises, each day offers the rewarding opportunity to experience the history, culture and cuisine of a wondrous new destination.
Relax on board our luxurious ships and savor cuisine renowned as the finest at sea, rivaling even Michelin-starred restaurants ashore. Inspired by Master Chef Jacques Pépin, these culinary delights have always been a hallmark that distinguishes the Oceania Cruises experience from any other. Considering the uncompromising quality, perhaps the most remarkable aspect of an Oceania Cruises voyage is its incredible value. Lavish complimentary amenities abound, and there are never supplemental charges in any of the onboard restaurants. Value packages ensure that sipping a glass of vintage wine, surfing the Internet or enjoying a shore excursion is both convenient and affordable.
Posted: July 1, 2016 at 9:53 pm
A towering philosophical novel that is the summation of her Objectivist philosophy, Ayn Rand’s Atlas Shrugged is the saga of the enigmatic John Galt, and his ambitious plan to ‘stop the motor of the world’, published in Penguin Modern Classics.
Opening with the enigmatic question ‘Who is John Galt?’, Atlas Shrugged envisions a world where the ‘men of talent’ – the great innovators, producers and creators – have mysteriously disappeared. With the US economy now faltering, businesswoman Dagny Taggart is struggling to get the transcontinental railroad up and running. For her John Galt is the enemy, but as she will learn, nothing in this situation is quite as it seems. Hugely influential and grand in scope, this story of a man who stopped the motor of the world expounds Rand’s controversial philosophy of Objectivism, which champions competition, creativity and human greatness.
Ayn Rand (1905-82), born Alisa Rosenbaum in St. Petersburg, Russia, emigrated to America with her family in January 1926, never to return to her native land. Her novel The Fountainhead was published in 1943 and eventually became a bestseller. Still occasionally working as a screenwriter, Rand moved to New York City in 1951 and published Atlas Shrugged in 1957. Her novels espoused what came to be called Objectivism, a philosophy that champions capitalism and the pre-eminence of the individual.
If you enjoyed Atlas Shrugged, you might like Rand’s The Fountainhead, also available in Penguin Modern Classics.
‘A writer of great power … she writes brilliantly, beautifully, bitterly’ The New York Times
‘Atlas Shrugged … is a celebration of life and happiness’ Alan Greenspan
Posted: at 9:47 pm
The human system can be quantified, manipulated, and optimized. The human drive to self-improve is timeless, but modern technologies now allow us to enhance in precise and measurable ways like never before.
As a group of biohackers, technologists, and researchers, we believe life should be lived to its fullest potential. That potential is tested and ultimately judged by the work we produce. We’ve realized that the world around us is made by people no smarter than you or me, and we too can make a dent in the world with what we can create.
When it comes to our offerings, we take the same mentality. Nootrobox researches, develops, and manufactures nootropics with state of the art manufacturing techniques and 100% FDA generally regarded as safe (GRAS) components. This guarantees nootropics that are effective, precise, and safe.
We’re driving some of the latest research with top academic collaborators in the world to better understand human cognition and biohacking. This expertise and data are used to constantly evolve and improve our offerings. Thus, our products and formulations are at the forefront of the latest science and research.
Our goal is to make nootropics for everyone. A smarter society is a better society, so let’s build and live in that future together.
Posted: at 2:34 pm
This article is about the meta-ethical position. For a more general discussion of amoralism, see Amorality.
Moral nihilism (also known as ethical nihilism) is the meta-ethical view that nothing is intrinsically moral or immoral. For example, a moral nihilist would say that killing someone, for whatever reason, is neither inherently right nor inherently wrong. Moral nihilists consider morality to be constructed, a complex set of rules and recommendations that may give a psychological, social, or economic advantage to its adherents, but is otherwise without universal or even relative truth in any sense.
Moral nihilism is distinct from moral relativism, which does allow for actions to be right or wrong relative to a particular culture or individual, and moral universalism, which holds actions to be right or wrong in the same way for everyone everywhere. Insofar as only true statements can be known, moral nihilism implies moral skepticism.
According to Sinnott-Armstrong (2006a), the basic thesis of moral nihilism is that “nothing is morally wrong” (3.4). There are, however, several forms that this thesis can take (see Sinnott-Armstrong, 2006b, pp. 32–37, and Russ Shafer-Landau, 2003, pp. 8–13). There are two important forms of moral nihilism: error theory and expressivism (p. 292).
One form of moral nihilism is expressivism. Expressivism denies the principle that our moral judgments try and fail to describe the moral features, because expressivists believe when someone says something is immoral they are not saying it is right or wrong. Expressivists are not trying to speak the truth when making moral judgments; they are simply trying to express their feelings. “We are not making an effort to describe the way the world is. We are not trying to report on the moral features possessed by various actions, motives, or policies. Instead, we are venting our emotions, commanding others to act in certain ways, or revealing a plan of action. When we condemn torture, for instance, we are expressing our opposition to it, indicating our disgust at it, publicizing our reluctance to perform it, and strongly encouraging others not to go in for it. We can do all of these things without trying to say anything that is true.” (p. 293).
This makes expressivism a form of non-cognitivism. Non-cognitivism in ethics is the view that moral statements lack truth-value and do not assert genuine propositions. This involves a rejection of the cognitivist claim, shared by other moral philosophies, that moral statements seek to “describe some feature of the world” (Garner 1967, 219-220). This position on its own is logically compatible with realism about moral values themselves. That is, one could reasonably hold that there are objective moral values but that we cannot know them and that our moral language does not seek to refer to them. This would amount to an endorsement of a type of moral skepticism, rather than nihilism.
Typically, however, the rejection of the cognitivist thesis is combined with the thesis that there are, in fact, no moral facts (van Roojen, 2004). But if moral statements cannot be true, and if one cannot know something that is not true, non-cognitivism implies that moral knowledge is impossible (Garner 1967, 219-220).
Not all forms of non-cognitivism are forms of moral nihilism, however: notably, the universal prescriptivism of R.M. Hare is a non-cognitivist form of moral universalism, which holds that judgements about morality may be correct or not in a consistent, universal way, but do not attempt to describe features of reality and so are not, strictly speaking, truth-apt.
Error theory is built on three principles: that moral judgments attempt to state truths; that there are no moral truths to be stated; and that, consequently, every moral judgment is false.
Thus, we always lapse into error when thinking in moral terms. We are trying to state the truth when we make moral judgments. But since there is no moral truth, all of our moral claims are mistaken. Hence the error. These three principles lead to the conclusion that there is no moral knowledge. Knowledge requires truth. If there is no moral truth, there can be no moral knowledge. Thus moral values are purely chimerical.
Error theorists combine the cognitivist thesis that moral language consists of truth-apt statements with the nihilist thesis that there are no moral facts. Like moral nihilism itself, however, error theory comes in more than one form: Global falsity and Presupposition failure.
The first, which one might call the global falsity form of error theory, claims that moral beliefs and assertions are false in that they claim that certain moral facts exist that in fact do not exist. J. L. Mackie (1977) argues for this form of moral nihilism. Mackie argues that moral assertions are only true if there are moral properties that are intrinsically motivating, but there is good reason to believe that there are no such intrinsically motivating properties (see the argument from queerness and motivational internalism).
The second form, which one might call the presupposition failure form of error theory, claims that moral beliefs and assertions are not true because they are neither true nor false. This is not a form of non-cognitivism, for moral assertions are still thought to be truth-apt. Rather, this form of moral nihilism claims that moral beliefs and assertions presuppose the existence of moral facts that do not exist. This is analogous to presupposition failure in cases of non-moral assertions. Take, for example, the claim that the present king of France is bald. Some argue that this claim is truth-apt in that it has the logical form of an assertion, but it is neither true nor false because it presupposes that there is currently a king of France, but there is not. The claim suffers from “presupposition failure.” Richard Joyce (2001) argues for this form of moral nihilism under the name “fictionalism.”
The philosophy of Niccolò Machiavelli is sometimes presented as a model of moral nihilism, but this is at best ambiguous. His book Il Principe (The Prince) praised many acts of violence and deception, which shocked a European tradition that throughout the Middle Ages had inculcated moral lessons in its political philosophies. Machiavelli does say that the Prince must override traditional moral rules in favor of power-maintaining reasons of State, but he also says, particularly in his other works, that the successful ruler should be guided by Pagan rather than Christian virtues. Hence, Machiavelli presents an alternative to the ethical theories of his day, rather than an all-out rejection of all morality.
Closer to being an example of moral nihilism is Thrasymachus, as portrayed in Plato’s Republic. Thrasymachus argues, for example, that rules of justice are structured to benefit those who are able to dominate political and social institutions. Thrasymachus can, however, be interpreted as offering a revisionary account of justice, rather than a total rejection of morality and normative discourse.
Glover has cited realist views of amoralism held by early Athenians, and in some ethical positions affirmed by Joseph Stalin.
Criticisms of moral nihilism come primarily from moral realists, who argue that there are positive moral truths. Still, criticisms do arise out of the other anti-realist camps (i.e. subjectivists and relativists). Not only that, but each school of moral nihilism has its own criticisms of one another (e.g. the non-cognitivists’ critique of error theory for accepting the semantic thesis of moral realism).
Still other detractors deny that the basis of moral objectivity need be metaphysical. The moral naturalist, though a form of moral realist, agrees with the nihilists’ critique of metaphysical justifications for right and wrong. Moral naturalists prefer to define “morality” in terms of observables, some even appealing to a science of morality.
Posted: June 21, 2016 at 11:13 pm
Is the surface of our planet — and maybe every planet we can get our hands on — going to be carpeted in paper clips (and paper clip factories) by a well-intentioned but misguided artificial intelligence (AI) that ultimately cannibalizes everything in sight, including us, in single-minded pursuit of a seemingly innocuous goal? Nick Bostrom, head of Oxford’s Future of Humanity Institute, thinks that we can’t guarantee it _won’t_ happen, and it worries him. It doesn’t require Skynet and Terminators, it doesn’t require evil geniuses bent on destroying the world, it just requires a powerful AI with a moral system in which humanity’s welfare is irrelevant or defined very differently than most humans today would define it. If the AI has a single goal and is smart enough to outwit our attempts to disable or control it once it has gotten loose, Game Over, argues Professor Bostrom in his book _Superintelligence_.
This is perhaps the most important book I have read this decade, and it has kept me awake at night for weeks. I want to tell you why, and what I think, but a lot of this is difficult ground, so please bear with me. The short form is that I am fairly certain that we _will_ build a true AI, and I respect Vernor Vinge, but I have long been skeptical of the Kurzweilian notions of inevitability, doubly-exponential growth, and the Singularity. I’ve also been skeptical of the idea that AIs will destroy us, either on purpose or by accident. Bostrom’s book has made me think that perhaps I was naive. I still think that, on the whole, his worst-case scenarios are unlikely. However, he argues persuasively that we can’t yet rule out any number of bad outcomes of developing AI, and that we need to be investing much more in figuring out whether developing AI is a good idea. We may need to put a moratorium on research, as was done for a few years with recombinant DNA starting in 1975. We also need to be prepared for the possibility that such a moratorium doesn’t hold. Bostrom also brings up any number of mind-bending dystopias around what qualifies as human, which we’ll get to below.
(snips to my review, since Goodreads limits length)
In case it isn’t obvious by now, both Bostrom and I take it for granted that it’s not only possible but nearly inevitable that we will create a strong AI, in the sense of it being a general, adaptable intelligence. Bostrom skirts the issue of whether it will be conscious, or “have qualia”, as I think the philosophers of mind say.
Where Bostrom and I differ is in the level of plausibility we assign to the idea of a truly exponential explosion in intelligence by AIs, in a takeoff for which Vernor Vinge coined the term “the Singularity.” Vinge is rational, but Ray Kurzweil is the most famous proponent of the Singularity. I read one of Kurzweil’s books a number of years ago, and I found it imbued with a lot of near-mystic hype. He believes the Universe’s purpose is the creation of intelligence, and that that process is growing on a double exponential, starting from stars and rocks through slime molds and humans and on to digital beings.
I’m largely allergic to that kind of hooey. I really don’t see any evidence of the domain-to-domain acceleration that Kurzweil sees, and in particular the shift from biological to digital beings will result in a radical shift in the evolutionary pressures. I see no reason why any sort of “law” should dictate that digital beings will evolve at a rate that *must* be faster than the biological one. I also don’t see that Kurzweil really pays any attention to the physical limits of what will ultimately be possible for computing machines. Exponentials can’t continue forever, as Danny Hillis is fond of pointing out. http://www.kurzweilai.net/ask-ray-the…
So perhaps my opinion is somewhat biased by a dislike of Kurzweil’s circus barker approach, but I think there is more to it than that. Fundamentally, I would put it this way:
Being smart is hard.
And making yourself smarter is also hard. My inclination is that getting smarter is at least as hard as the advantages it brings, so that the difficulty of the problem and the resources that can be brought to bear on it roughly balance. This will result in a much slower takeoff than Kurzweil reckons, in my opinion. Bostrom presents a spectrum of takeoff speeds, from “too fast for us to notice” through “long enough for us to develop international agreements and monitoring institutions,” but he makes it fairly clear that he believes that the probability of a fast takeoff is far too large to ignore. There are parts of his argument I find convincing, and parts I find less so.
To give you a little more insight into why I am a little dubious that the Singularity will happen in what Bostrom would describe as a moderate to fast takeoff, let me talk about the kinds of problems we human beings solve, and that an AI would have to solve. Actually, rather than the kinds of questions, first let me talk about the kinds of answers we would like an AI (or a pet family genius) to generate when given a problem. Off the top of my head, I can think of six:
- [Speed] Same quality of answer, just faster.
- [Ply] Look deeper in number of plies (moves, in chess or go).
- [Data] Use more, and more up-to-date, data.
- [Creativity] Something beautiful and new.
- [Insight] Something new and meaningful, such as a new theory; probably combines elements of all of the above categories.
- [Values] An answer about (human) values.
The first three are really about how the answers are generated; the last three about what we want to get out of them. I think this set is reasonably complete and somewhat orthogonal, despite those differences.
So what kinds of problems do we apply these styles of answers to? We ultimately want answers that are “better” in some qualitative sense.
Humans are already pretty good at projecting the trajectory of a baseball, but it’s certainly conceivable that a robot batter could be better, by calculating faster and using better data. Such a robot might make for a boring opponent for a human, but it would not be beyond human comprehension.
But if you accidentally knock a bucket of baseballs down a set of stairs, better data and faster computing are unlikely to help you predict the exact order in which the balls will reach the bottom and what happens to the bucket. Someone “smarter” might be able to make some interesting statistical predictions that wouldn’t occur to you or me, but not fill in every detail of every interaction between the balls and stairs. Chaos, in the sense of sensitive dependence on initial conditions, is just too strong.
In chess, go, or shogi, a 1000x improvement in the number of plies that can be investigated gains you maybe only the ability to look ahead two or three moves more than before. Less if your pruning (discarding unpromising paths) is poor, more if it’s good. Don’t get me wrong — that’s a huge deal, any player will tell you. But in this case, humans are already pretty good, when not time limited.
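The arithmetic behind this is worth making explicit: if the effective branching factor is b, searching one extra ply multiplies the work by roughly b, so a 1000x increase in computation buys only about log base b of 1000 extra plies. A minimal sketch (the branching factors are assumed, commonly quoted rough values, not measurements):

```python
import math

def extra_plies(compute_factor, branching_factor):
    """Extra search depth bought by more raw compute, assuming the
    work to search d plies grows as branching_factor ** d."""
    return math.log(compute_factor) / math.log(branching_factor)

# Assumed rough effective branching factors.
print(round(extra_plies(1000, 35), 2))   # chess, little pruning: ~1.94 plies
print(round(extra_plies(1000, 6), 2))    # chess, good pruning:   ~3.86 plies
print(round(extra_plies(1000, 250), 2))  # go, little pruning:    ~1.25 plies
```

Better pruning lowers the effective branching factor, which is why it converts the same compute into more depth, exactly as the paragraph above says.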
Go players like to talk about how close the top pros are to God; the possibly apocryphal answer from one top pro was that he would want a three-stone (three-move) handicap, four if his life depended on it. Compare this to the fact that a top pro is still some ten stones stronger than me, a fair amateur, and could beat a rank beginner even if the beginner were given the first forty moves. Top pros could sit across the board from an almost infinitely strong AI and still hold their heads up.
In the most recent human-versus-computer shogi (Japanese chess) series, humans came out on top, though presumably this won't last much longer.
In chess, as machines got faster, looked more plies ahead, carried around more knowledge, and got better at pruning the tree of possible moves, human opponents were heard to say that they felt the glimmerings of insight or personality from them.
So again we have some problems, at least, where plies will help, and will eventually guarantee a 100% win rate against the best (non-augmented) humans, but they will likely not move beyond what humans can comprehend.
Simply being able to hold more data in your head (or the AI’s head) while making a medical diagnosis using epidemiological data, or cross-correlating drug interactions, for example, will definitely improve our lives, and I can imagine an AI doing this. Again, however, the AI’s capabilities are unlikely to recede into the distance as something we can’t comprehend.
We know that increasing the amount of data you can handle by a factor of a thousand gains you 10x in each dimension for a 3-D model of the atmosphere or ocean, up until chaotic effects begin to take over, and then (as we currently understand it) you can only resort to repeated simulations and statistical measures. The actual calculations done by a climate model long ago reached the point where even a large team of humans couldn't complete them in a lifetime. But they are not calculations we cannot comprehend; in fact, humans design and debug them.
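The 1000x-to-10x relationship is just a cube root: spreading a thousandfold data increase evenly over three spatial dimensions gives 1000^(1/3), about 10x finer resolution per axis. A trivial sketch of that arithmetic:

```python
def per_axis_gain(data_factor, dimensions):
    """Per-axis resolution gain when extra grid data is spread
    evenly over a d-dimensional model."""
    return data_factor ** (1.0 / dimensions)

print(per_axis_gain(1000, 3))  # 3-D atmosphere/ocean model: ~10x per axis
print(per_axis_gain(1000, 2))  # a 2-D model would gain more per axis
```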
So for problems with answers in the first three categories, I would argue that being smarter is helpful, but being a *lot* smarter is *hard*. The size of computation grows quickly in many problems, and for many problems we believe that sheer computation is fundamentally limited in how well it can correspond to the real world.
But those are just the warmup. Those are things we already ask computers to do for us, even though they are “dumber” than we are. What about the latter three categories?
I’m no expert in creativity, and I know researchers study it intensively, so I’m going to weasel through by saying it is the ability to generate completely new material, which involves some random process. You also need the ability either to generate that material such that it is aesthetically pleasing with high probability, or to prune those new ideas rapidly using some metric that achieves your goal.
For my purposes here, insight is the ability to be creative not just for aesthetic purposes, but in a specific technical or social context, and to validate the ideas. (No implication that artists don't have insight is intended, this is just a technical distinction between phases of the operation, for my purposes here.) Einstein's insight for special relativity was that the speed of light is constant. Either he generated many, many hypotheses (possibly unconsciously) and pruned them very rapidly, or his hypothesis generator was capable of generating only a few good ones. In either case, he also had the mathematical chops to prove (or at least analyze effectively) his hypothesis; this analysis likewise involves generating possible paths of proofs through the thicket of possibilities and finding the right one.
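The generate-and-prune picture sketched above can be written as a loop: produce many random candidates, score them with some metric, keep the best few. This is only an illustration of the shape of the process; the function names and the use of random floats as stand-in "ideas" are my own assumptions:

```python
import random

def generate_and_prune(generate, score, n_candidates=100, keep=3):
    """Creativity as generate-then-prune: random generation followed
    by selection under a quality metric."""
    ideas = [generate() for _ in range(n_candidates)]
    return sorted(ideas, key=score, reverse=True)[:keep]

# Stand-in "idea generator" and "aesthetic metric" for illustration.
rng = random.Random(0)
best = generate_and_prune(rng.random, score=lambda x: x)
```

A stronger generator (one whose candidates are good with high probability) needs far less pruning, which is the distinction drawn about Einstein above.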
So, will someone smarter be able to do this much better? Well, it’s really clear that Einstein (or Feynman or Hawking, if your choice of favorite scientist leans that way) produced and validated hypotheses that the rest of us never could have. It’s less clear to me exactly how *much* smarter than the rest of us he was; did he generate and prune ten times as many hypotheses? A hundred? A million? My guess is it’s closer to the latter than the former. Even generating a single hypothesis that could be said to attack the problem is difficult, and most humans would decline to even try if you asked them to.
Making better devices and systems of any kind requires all of the above capabilities. You must have insight to innovate, and you must be able to quantitatively and qualitatively analyze the new systems, requiring the heavy use of data. As systems get more complex, all of this gets harder. My own favorite example is airplane engines. The Wright Brothers built their own engines for their planes. Today, it takes a team of hundreds to create a jet turbine — thousands, if you reach back into the supporting materials, combustion and fluid flow research. We humans have been able to continue to innovate by building on the work of prior generations, and especially harnessing teams of people in new ways. Unlike Peter Thiel, I don’t believe that our rate of innovation is in any serious danger of some precipitous decline sometime soon, but I do agree that we begin with the low-lying fruit, so that harvesting fruit requires more effort — or new techniques — with each passing generation.
The Singularity argument depends on the notion that the AI would design its own successor, or even modify itself to become smarter. Will we watch AIs gradually pull even with us and then ahead, but not disappear into the distance in a Roadrunner-like flash of dust covering just a few frames of film in our dull-witted comprehension?
Ultimately, this is the question on which continued human existence may depend: If an AI is enough smarter than we are, will it find the process of improving itself to be easy, or will each increment of intelligence be a hard problem for the system of the day? This is what Bostrom calls the “recalcitrance” of the problem.
I believe that the range of possible systems grows rapidly as they get more complex, and that evaluating them gets harder; this is hard to quantify, but each step might involve a thousand times as many options, or evaluating each option might be a thousand times harder. Growth in computational power won’t dramatically overbalance that and give sustained, rapid and accelerating growth that moves AIs beyond our comprehension quickly. (Don’t take these numbers seriously, it’s just an example.)
Bostrom believes that recalcitrance will grow more slowly than the resources the AI can bring to bear on the problem, resulting in continuing, and rapid, exponential increases in intelligence — the arrival of the Singularity. As you can tell from the above, I suspect that the opposite is the case, or that they very roughly balance, but Bostrom argues convincingly. He is forcing me to reconsider.
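The disagreement can be made concrete with a toy model (entirely my own construction, not Bostrom's): let each step's progress be capability divided by recalcitrance, with recalcitrance growing as some power of current capability. Whether recalcitrance grows slower or faster than capability decides between an explosive and a plodding takeoff:

```python
def takeoff(steps, recalcitrance_exp):
    """Toy model: progress per step = capability / recalcitrance,
    where recalcitrance = capability ** recalcitrance_exp."""
    capability = 1.0
    for _ in range(steps):
        capability += capability / capability ** recalcitrance_exp
    return capability

fast = takeoff(20, 0.5)  # recalcitrance lags capability: Bostrom's worry
even = takeoff(20, 1.0)  # the two balance: steady linear growth
slow = takeoff(20, 2.0)  # recalcitrance outpaces capability: my hunch
```

The exponent is doing all the work here, and nobody knows its real value; that is precisely what the argument is about.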
What about “values”, my sixth type of answer, above? Ah, there’s where it all goes awry. Chapter eight is titled, “Is the default scenario doom?” and it will keep you awake.
What happens when we put an AI in charge of a paper clip factory, and instruct it to make as many paper clips as it can? With such a simple set of instructions, it will do its best to acquire more resources in order to make more paper clips, building new factories in the process. If it’s smart enough, it will even anticipate that we might not like this and attempt to disable it, but it will have the will and means to deflect our feeble strikes against it. Eventually, it will take over every factory on the planet, continuing to produce paper clips until we are buried in them. It may even go on to asteroids and other planets in a single-minded attempt to carpet the Universe in paper clips.
I suppose it goes without saying that Bostrom thinks this would be a bad outcome. Bostrom reasons that AIs ultimately may or may not be similar enough to us that they count as our progeny, but doesn’t hesitate to view them as adversaries, or at least rivals, in the pursuit of resources and even existence. Bostrom clearly roots for humanity here. Which means it’s incumbent on us to find a way to prevent this from happening.
Bostrom thinks that instilling values that are actually close enough to ours that an AI will “see things our way” is nigh impossible. There are just too many ways that the whole process can go wrong. If an AI is given the goal of “maximizing human happiness,” does it count when it decides that the best way to do that is to create the maximum number of digitally emulated human minds, even if that means sacrificing some of the physical humans we already have, because the planet’s carrying capacity is higher for digital than organic beings?
As long as we’re talking about digital humans, what about the idea that a super-smart AI might choose to simulate human minds in enough detail that they are conscious, in the process of trying to figure out humanity? Do those recursively digital beings deserve any legal standing? Do they count as human? If their simulations are stopped and destroyed, have they been euthanized, or even murdered? Some of the mind-bending scenarios that come out of this recursion kept me awake nights as I was reading the book.
He uses a variety of names for different strategies for containing AIs, including “genies” and “oracles”. The most carefully circumscribed ones are only allowed to answer questions, maybe even “yes/no” questions, and have no other means of communicating with the outside world. Given that Bostrom attributes nearly infinite brainpower to an AI, it is hard to effectively rule out that an AI could still find some way to manipulate us into doing its will. If the AI’s ability to probe the state of the world is likewise limited, Bostrom argues that it can still turn even single-bit probes of its environment into a coherent picture. It can then decide to get loose and take over the world, and identify security flaws in outside systems that would allow it to do so even with its very limited ability to act.
I think this unlikely. Imagine we set up a system to monitor the AI that alerts us immediately when the AI begins the equivalent of a port scan, for whatever its interaction mechanism is. How could it possibly know of the existence and avoid triggering the alert? Bostrom has gone off the deep end in allowing an intelligence to infer facts about the world even when its data is very limited. Sherlock Holmes always turns out to be right, but that’s fiction; in reality, many, many hypotheses would suit the extremely slim amount of data he has. The same will be true with carefully boxed AIs.
At this point, Bostrom has argued that containing a nearly infinitely powerful intelligence is nearly impossible. That seems to me to be effectively tautological.
If we can’t contain them, what options do we have? After arguing earlier that we can’t give AIs our own values (and presenting mind-bending scenarios for what those values might actually mean in a Universe with digital beings), he then turns around and invests a whole string of chapters in describing how we might actually go about building systems that have those values from the beginning.
At this point, Bostrom began to lose me. Beyond the systems for giving AIs values, I felt he went off the rails in describing human behavior in simplistic terms. We are incapable of balancing our desire to reproduce with a view of the tragedy of the commons, and are inevitably doomed to live out our lives in a rude, resource-constrained existence. There were some interesting bits in the taxonomies of options, but the last third of the book felt very speculative, even more so than the earlier parts.
Bostrom is rational and seems to have thought carefully about the mechanisms by which AIs may actually arise. Here, I largely agree with him. I think his faster scenarios of development, though, are unlikely: being smart, and getting smarter, is hard. He thinks a “singleton”, a single, most powerful AI, is the nearly inevitable outcome. I think populations of AIs are more likely, but if anything this appears to make some problems worse. I also think his scenarios for controlling AIs are handicapped in their realism by the nearly infinite powers he assigns them. In either case, Bostrom has convinced me that once an AI is developed, there are many ways it can go wrong, to the detriment and possibly extermination of humanity. Both he and I are opposed to this. I’m not ready to declare a moratorium on AI research, but there are many disturbing possibilities and many difficult moral questions that need to be answered.
The first step in answering them, of course, is to begin discussing them in a rational fashion, while there is still time. Read the first 8 chapters of this book!
Posted: at 6:47 am
Updated February 29, 2016.
Definition: Oppression is a type of injustice. Oppression is the inequitable use of authority, law, or physical force to prevent others from being free or equal. The verb oppress can mean to keep someone down in a social sense, such as an authoritarian government might do in an oppressive society. It can also mean to mentally burden someone, such as with the psychological weight of an oppressive idea.
Feminists fight against the oppression of women. Women have been unjustly held back from achieving full equality for much of human history in many societies around the world. Feminist theorists of the 1960s and 1970s looked for new ways to analyze this oppression, often concluding that there were both overt and insidious forces in society that oppressed women.
These feminists also drew on the work of earlier authors who had analyzed the oppression of women, including Simone de Beauvoir in The Second Sex and Mary Wollstonecraft in A Vindication of the Rights of Woman.
Many common types of oppression are described as isms such as sexism, racism and so on.
The opposite of oppression would be liberation (to remove oppression) or equality (absence of oppression).
In much of the written literature of the ancient and medieval world, we have evidence of women’s oppression by men in European, Middle Eastern and African cultures. Women did not have the same legal and political rights as men, and were under control of fathers and husbands in almost all societies.
In some societies in which women had few options for supporting themselves without a husband, there was even a practice of ritual widow suicide or murder. (Parts of Asia continued this practice into the 20th century, with some cases occurring in the present as well.)
In Greece, often held up as a model of democracy, women did not have basic rights, and could own no property nor could they participate directly in the political system.
In both Rome and Greece, women’s very movement in public was limited. There are cultures today where women rarely leave their own homes.
Many cultures and religions justify the oppression of women by attributing sexual power to them, that men must then rigidly control in order to maintain their own purity and power. Reproductive functions — including childbirth and menstruation, sometimes breast-feeding and pregnancy — are seen as disgusting. Thus, in these cultures, women are often required to cover their bodies and faces to keep men, assumed not to be in control of their own sexual actions, from being overpowered.
Women are also treated either like children or like property in many cultures and religions. For example, the punishment for rape in some cultures is that the rapist’s wife is given over to the rape victim’s husband or father to rape as he wishes, as revenge. Or a woman who is involved in adultery or other sex acts outside monogamous marriage is punished more severely than the man who is involved, and a woman’s word about rape is not taken as seriously as a man’s word about being robbed would be.
Women’s status as somehow lesser than men is used to justify men’s power over women.
In Marxism, women’s oppression is a key issue. Engels called the working woman “a slave of a slave,” and his analysis in particular was that the oppression of women rose with the rise of class society, about 6,000 years ago. Engels’ discussion of the development of women’s oppression is primarily in “The Origin of the Family, Private Property and the State,” and drew on the anthropologist Lewis Morgan and the Swiss scholar Johann Jakob Bachofen. Engels writes of “the world historical defeat of the female sex” when Mother-right was overthrown by males in order to control inheritance of property. Thus, he argued, it was the concept of property that led to women’s oppression.
Critics of this analysis point out that while there is much anthropological evidence for matrilineal descent in primal societies, that does not equate to matriarchy or women’s equality.
In the Marxist view, the oppression of women is a creation of culture.
Cultural oppression of women can take many forms, including shaming and ridiculing women to reinforce their supposed inferior “nature,” or physical abuse, as well as the more commonly acknowledged means of oppression including fewer political, social and economic rights.
In some psychological views, the oppression of women is an outcome of the more aggressive and competitive nature of males due to testosterone levels. Others attribute it to a self-reinforcing cycle where men compete for power and control.
Psychological views are also used to justify claims that women think differently from, or less well than, men, though such studies don’t hold up to scrutiny.
Posted: at 6:37 am
As of January 2016, the site has been accessed hundreds of thousands of times by people searching for facts about fuel-saving scams and psychics’ claims from Sensing Murder in particular. We have added another fuel scam – Fuel360. Read all about it here.
We have had feedback from people all over the world who have learned that fuel-saving devices don’t work, saving them thousands of dollars in wasted payments to scammers.
Have a look around, and if you want to ask a question, or get us to investigate a scam, email email@example.com and we’ll get right on the case!
While I love to take on all scams and blatant bullshit, I am The Atheist, and my prime target is the stupidity, delusion and bullshit that make up the world’s religions.
In the year 2016, when science can attempt to create a mini “big bang” at CERN, can replace almost every organ in the human body with a high degree of success, and can cure cancers that were deadly only a generation ago, we live in times where rationality and reason should take precedence over everything else.
Alas, that is not so, and as time marches on, religion is strengthening its hold on more and more people.
Many atheists rejoice in censuses showing a decline on the “religion” identifier question, but I believe that is false hope, as the number of people who attend church has risen dramatically over the past decade. Forty years ago, almost nobody aged between 18 and 40 went to church; nowadays, churches have overflowing carparks.
The impact of these increased numbers – and therefore money and power – is easy to see if you know where to look. From Family First to the Maxim Institute, religions have set up fronts as “family-focused” organisations as pressure groups, and because they’re funded by morons giving 10% of their wedge every week, they ensure they’re heard, with an array of fulltime workers and ring-in “experts”.
Look at the thousands of people who have protested recently against homosexual and marriage law reforms and anti-smacking legislation. “Spare the rod and you’ll spoil the child,” the bible tells the religionistas, and they believe it. You can bet that every single protestor against anti-smacking laws was a theist of some description or other. (Update, August 2012: note the current massive spending campaign by Family First against proposals to allow people of all genders to marry.)
I will not stand by and watch these deluded wankers have it all their own way.
Some people – especially agnostics – cry about “evangelical” or militant atheism as though it were a bad thing.
I say that without the soldiers of atheism being in the faces of religion, they would seek to destroy more than they already have. The USA is a prime example, where even 87 years after the Scopes monkey trial, religion is trying to take over school curricula and replace science with mythology.
Other reading on the subject includes this article from the Huffington Post, the best part of which is not the idiotic, unreferenced and unresearched article itself, but the comments that follow it, which give the true picture of atheism and religion in the USA.
The idea that “new” atheism is unnecessary or overdone is just more romantic bullshit from the promoters and apologists for religion.
If you follow the kind of religion the Anglican Church promotes, I have no beef with you. If you find that the delusion of god makes you happy, then I’m happy. It’s only when you try to impose your will on others that I get pissed off.
But if you’re the kind of religionista who feels that the world must conform to your delusion of a sky-daddy, then the only thing which separates you from islamist extremists with AK-47s is that you’re living in the privileged western world. As Jesus Camp showed us, even allegedly christian religions have elements of extremism every bit as spiteful and abusive as the worst excesses of Wahhabism.
I cringe at people whose delusions are so powerful that they would rather watch someone die in agony than allow them to die with dignity.
You can bet your last buck that the same cruel theist who would deny euthanasia for a human will be off to the vet to euthanase a loved pet rather than watch it suffer a lingering, painful death.
Double standards. Love ’em.