Tag Archives: swedish

DELHI / NEW DELHI: Massage and Spas – Utopia

Posted: December 7, 2016 at 8:08 am


Gay-managed Aarogya (which means something akin to “male vigor”) offers traditional ayurvedic (medicinal) massage by professionally trained masseurs. The basement facility includes a reception lounge, four aircon massage rooms, showers, plus a small dry sauna and steam room. They specialize in full body massage with coconut oil, olive oil, baby oil, ayurveda oil, cream massage, dry massage and powder massage. Friendly staff and management. Working class local clientele. Utopia Member Benefit: 10% DISCOUNT. Add your review, comment, or correction

Gay-friendly men’s spa in South Delhi. Massage, steam and shower in clean and tidy private rooms. Dark room fun every Fri and steam party every Sat. Outcall massage also available to your home or hotel. Add your review, comment, or correction

See detailed listing under Saunas for Men. Gay-friendly, very hygienic and nice smelling. They specialize in aromatic massage. Customers choose a new bottle of massage oil. They carefully dispose of used materials. Their dark chocolate massage makes your skin glow. Masseurs speak English and are trained in Thai massage techniques. Utopia Member Benefit: 15% DISCOUNT. Add your review, comment, or correction

Locate building 19. The entrance to Kalph Kaya is the first doorway in the alley on the side of the building, up a few stairs to the G/F landing. Delhi’s first gay spa and sauna. Very friendly and casual, with four small rooms for massage (rooms are planned for renovation in late 2012), plus dry sauna, steam room, and dark resting room. Facilities are humble, cozy and kept tidy by the welcoming staff. Changing area has safety lockers for valuables and open-air hangers for your clothes to dry off from the humidity outside. Wet areas are very slippery so wear the rubber slippers provided. Printed menu with prices for different types of massage including Swedish, traditional ayurvedic Indian oil massage, cream massage and spa service for waxing. Staff and management are great. Outcall massage also available to your home or hotel. New in Aug 2012: large gym on opposite side of the stair landing adjacent to the reception area. Utopia Member Benefit: 10% DISCOUNT. Add your review, comment, or correction

Massage spa for men with a mostly gay clientele. They provide male-to-male body massage. Masseurs come from all over India and are professional, well-educated, good looking, certified, and between the ages of 20 and 35. Free Wifi. Outcall available to your hotel, apartment, villa or home anywhere in Delhi. 100% customer satisfaction assured. Add your review, comment, or correction

Gay-owned men’s spa. Clean massage therapies including mani/pedi, foot spa, full body natural scrubs, body polishing, cream massage, dry massage, and a variety of aromatic oils to choose from. Weekend parties for men, a lounge for chit chat, dark room, smoking zone. Welcome green tea. They also design diet and nutrition programs for men. In and outcall available to your home/hotel. Utopia Member Benefit: 25% DISCOUNT. Add your review, comment, or correction

See detailed listing under Saunas for Men. A dozen masseurs on staff and four clean massage rooms. Massage using a wide variety of oils and aromas is available, including classic olive oil! There is also a tattoo parlor and salon for hair cuts and waxing with trained staff on hand to attend to your male grooming needs. Open daily, noon-11pm (please call ahead for salon services or tattooing). Outcall massage also available to hotels only. Utopia Member Benefit: Rs 100 DISCOUNT on massage. Add your review, comment, or correction

Gay-owned spa for men in South Delhi. Hygienic facilities with aircon and services including male-to-male full body massage, steam bath, hair removal, and body scrubs. Well-trained and hygienic staff. They have three massage rooms and one king size therapy room with TV and fridge. Fully air conditioned, dark room, smoking room, free wifi, lockers, showers, and parking. Outcall massage available. Utopia Member Benefit: 25% DISCOUNT. Add your review, comment, or correction

General Information | Saunas

Bangalore / Bengaluru | Mumbai | other cities and provinces


Crown of Immortality – Wikipedia

Posted: at 8:03 am

The Crown of Immortality is a literary and religious metaphor traditionally represented in art first as a laurel wreath and later as a symbolic circle of stars (often a crown, tiara, halo or aureola). The Crown appears in a number of Baroque iconographic and allegoric works of art to indicate the wearer’s immortality.

In ancient Egypt, the crown of justification was a wreath placed on the deceased to represent victory over death in the afterlife, in emulation of the resurrecting god Osiris. It was made of various materials including laurel, palm, feathers, papyrus, roses, or precious metals, with numerous examples represented on the Fayum mummy portraits of the Roman Imperial period.[1]

In ancient Greece, a wreath of laurel or olive was awarded to victorious athletes and later poets. Among the Romans, generals celebrating a formal triumph wore a laurel wreath, an honor that during the Empire was restricted to the Imperial family. The placing of the wreath was often called a “crowning”, and its relation to immortality was problematic; it was supposed to secure the wearer immortality in the form of enduring fame, but the triumphator was also reminded of his place within the mortal world: in the traditional tableaux, an accompanying slave whispered continually in the general’s ear Memento mori, “Remember you are mortal”.[2] Funerary wreaths of gold leaf were associated particularly with initiates into the mystery religions.[3]

From the Early Christian era the phrase “crown of immortality” was widely used by the Church Fathers in writing about martyrs; the immortality was now both of reputation on earth and of eternal life in heaven. The usual visual attribute of a martyr in art was a palm frond, not a wreath.[citation needed] The phrase may have originated in scriptural references, or from incidents such as the one reported by Eusebius (Book V of his History) describing the persecution in Lyon in 177, in which he refers to literal crowns and also brings in an athletic metaphor of the “victor’s crown” at the end:

“From that time on, their martyrdoms embraced death in all its forms. From flowers of every shape and color they wove a crown to offer to the Father; and so it was fitting that the valiant champions should endure an ever-changing conflict, and having triumphed gloriously should win the mighty crown of immortality. Maturus, Sanctus, Blandina, and Attalus were taken into the amphitheater to face the wild beasts, and to furnish open proof of the inhumanity of the heathen, the day of fighting wild beasts being purposely arranged for our people. There, before the eyes of all, Maturus and Sanctus were again taken through the whole series of punishments, as if they had suffered nothing at all before, or rather as if they had already defeated their opponent in bout after bout and were now battling for the victor’s crown.”[4]

The first use seems to be that attributed to the martyr Ignatius of Antioch in 107.[citation needed]

An Advent wreath is a ring of candles, usually made with evergreen cuttings and used for household devotion by some Christians during the season of Advent. The wreath is meant to represent God’s eternity. On Saint Lucy’s Day, December 13, it is common to wear crowns of candles in Sweden, Denmark, Norway, Finland, Italy, Bosnia, Iceland, and Croatia.

Before the reform of the Gregorian calendar in the 16th century, St. Lucy’s Day fell on the winter solstice. The representation of Saint Lucy seems to derive from the Roman goddess Lucina, who is connected to the solstice.[5][6]

Martyrs often are idealized as combatants, with the spectacle of the arena transposed to the martyr’s struggle with Satan. Ignatius of Antioch, condemned to fight beasts in the year 107, “asked his friends not to try to save him and so rob him of the crown of immortality.”[7] In 155, Polycarp, Christian bishop of Smyrna, was stabbed after a failed attempt to burn him at the stake. He is said to have been “crowned with the wreath of immortality … having through patience overcome the unjust governor, and thus acquired the crown of immortality.”[8] Eusebius uses similar imagery to speak of Blandina, martyred in the arena at Lyon in 177.

The crown of stars, representing immortality, may derive from the story of Ariadne, especially as told by Ovid, in which the unhappy Ariadne is turned into a constellation of stars, the Corona Borealis (Northern Crown), modelled on a jewelled crown she wore, and thus becomes immortal. In Titian’s Bacchus and Ariadne (1520–23, National Gallery, London), the constellation is shown above Ariadne’s head as a circle of eight stars (though Ovid specifies nine), very similar to what would become the standard depiction of the motif. Although the crown was probably depicted in classical art, and is described in several literary sources, no classical visual depictions have survived.[11] The Titian therefore appears to be the earliest such representation to survive, and it was also at this period that illustrations in prints of the Apocalypse by artists such as Dürer[12][13] and Jean Duvet were receiving very wide circulation.

In Ariadne, Venus and Bacchus, by Tintoretto (1576, Doge’s Palace, Venice), a flying Venus crowns Ariadne with a circle of stars, and many similar compositions exist, such as the ceiling of the Egyptian Hall at Boughton House of 1695.

The first use of the crown of stars as an allegorical Crown of Immortality may be the ceiling fresco, Allegory of Divine Providence and Barberini Power (1633–39), in the Palazzo Barberini in Rome by Pietro da Cortona. Here a figure identified as Immortality is flying, with her crown of stars held out in front of her, near the centre of the large ceiling. According to the earliest descriptions she is about to crown the Barberini emblems, representing Pope Urban VIII, who was also a poet.[14][15][16] Immortality seems to have been a preoccupation of Urban; his funeral monument by Bernini in St Peter’s Basilica in Rome has Death as a life-size skeleton writing his name on a scroll.

Two further examples of the Crown of Immortality can be found in Sweden: first in the great hall ceiling fresco of the Swedish House of Knights by David Klöcker Ehrenstrahl (painted between 1670 and 1675), which pictures among many allegoric figures Eterna (Eternity), who holds in her hands the Crown of Immortality.[17] The second is in Drottningholm Palace, the home of the Swedish Royal Family, in a ceiling fresco named The Great Deeds of The Swedish Kings, painted in 1695 by the same artist.[18] It has the same motif as the fresco in the House of Knights. The Drottningholm fresco was shown on the 1000th stamp[19] engraved by Czesław Słania, the Polish postage stamp and banknote engraver.

The crown was also painted by the French Neoclassical painter Louis-Jean-François Lagrenée (1725–1805) in his Allegory on the Death of the Dauphin, where the crown is held by a young son who had predeceased his father (alternative titles specifically mention the crown of Immortality).[20]



Nick Bostrom – Wikipedia

Posted: November 21, 2016 at 11:11 am

Nick Bostrom

Nick Bostrom, 2014

Nick Bostrom (English; Swedish: Niklas Boström, IPA: [ˈbuːstrœm]; born 10 March 1973)[1] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[2] and he is currently the founding director of the Future of Humanity Institute[3] at Oxford University.

He is the author of over 200 publications,[4] including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller,[5] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[6] In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list.[7][8] Bostrom’s work on superintelligence, and his concern over its existential risk to humanity in the coming century, has brought both Elon Musk and Bill Gates to similar thinking.[9][10][11]

Bostrom was born in 1973[12] in Helsingborg, Sweden.[4] At a young age, he disliked school, and he ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[13] Despite what has been called a “serious mien”, he once did some turns on London’s stand-up comedy circuit.[4]

He holds a B.A. in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg and master’s degrees in philosophy and physics, and computational neuroscience from Stockholm University and King’s College London, respectively. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[13] In 2000, he was awarded a PhD in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[6][14]

An important aspect of Bostrom’s research concerns the future of humanity and long-term outcomes.[15][16] He introduced the concept of an existential risk, which he defines as one in which an “adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[17] and the Fermi paradox.[18][19] In a 2013 paper in the journal Global Policy, Bostrom offers a taxonomy of existential risk and proposes a reconceptualization of sustainability in dynamic terms, as a developmental trajectory that minimizes existential risk.[20]

The philosopher Derek Parfit argued for the importance of ensuring the survival of humanity, due to the value of a potentially large number of future generations.[21] Similarly, Bostrom has said that, from a consequentialist perspective, even small reductions in the cumulative amount of existential risk that humanity will face are extremely valuable, to the point where the traditional utilitarian imperative (to maximize expected utility) can be simplified to the Maxipok principle: maximize the probability of an OK outcome (where an OK outcome is any that avoids existential catastrophe).[22][23]
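The decision rule Maxipok implies can be illustrated with a toy comparison; the action names and numbers below are hypothetical, invented purely to show how ranking by probability of an OK outcome can diverge from ranking by expected utility:

```python
# Toy illustration of the Maxipok rule: rank actions purely by their
# probability of avoiding existential catastrophe, not by expected utility.
# All action names and numbers here are hypothetical.

actions = {
    "business_as_usual": {"p_ok": 0.80, "expected_utility": 100.0},
    "invest_in_safety":  {"p_ok": 0.95, "expected_utility": 90.0},
    "reckless_growth":   {"p_ok": 0.60, "expected_utility": 120.0},
}

def maxipok_choice(actions):
    """Return the action with the highest probability of an OK outcome."""
    return max(actions, key=lambda a: actions[a]["p_ok"])

# Maxipok picks the safest option even though it has lower expected utility:
print(maxipok_choice(actions))
```

Note that a plain expected-utility maximizer would instead pick "reckless_growth" here; that divergence is the point of the simplification.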

In 2005, Bostrom founded the Future of Humanity Institute,[13] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[16]

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasons that with “cognitive performance greatly [exceeding] that of humans in virtually all domains of interest”, superintelligent agents could promise substantial societal benefits and pose a significant artificial intelligence (AI)-related existential risk. Therefore, it is crucial, he says, that we approach this area with caution and take active steps to mitigate the risks we face. In January 2015, Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Martin Rees, and Jaan Tallinn, among others, in signing the Future of Life Institute’s open letter warning of the potential dangers of AI. The signatories “…believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today.”[24][25]

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[26]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduced the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA) and showed how they lead to different conclusions in a number of cases. He pointed out that each is affected by paradoxes or counterintuitive implications in certain thought experiments (the SSA in e.g. the Doomsday argument; the SIA in the Presumptuous Philosopher thought experiment). He suggested that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces “observers” in the SSA definition by “observer-moments”. This could allow for the reference class to be relativized (and he derived an expression for this in the “observation equation”).
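The way the SSA feeds the Doomsday argument can be made concrete with a small Bayesian calculation; the priors and population sizes below are hypothetical, chosen only to show the mechanics:

```python
# Toy illustration of how the Self-Sampling Assumption drives the Doomsday
# argument. Under SSA you reason as if your birth rank were drawn uniformly
# from all humans who will ever exist, so the likelihood of observing rank r
# under a hypothesis of N total humans is 1/N (for r <= N), else 0.
# All priors and population figures below are hypothetical.

def ssa_posterior(prior, rank):
    """Bayesian update over hypotheses about the total number of humans."""
    unnorm = {N: p * (1.0 / N if rank <= N else 0.0) for N, p in prior.items()}
    z = sum(unnorm.values())
    return {N: w / z for N, w in unnorm.items()}

# Equal priors on "200 billion humans ever" vs "200 trillion humans ever",
# observed from a birth rank of roughly 100 billion:
post = ssa_posterior(prior={200e9: 0.5, 200e12: 0.5}, rank=100e9)
print(post)  # the "doom soon" hypothesis dominates
```

The smaller-population hypothesis wins by a factor of 1000 here because it makes a rank of 100 billion a thousand times less surprising, which is exactly the counterintuitive shift Bostrom flags as a cost of the SSA.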

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[27] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.

Bostrom is favorable towards “human enhancement”, or “self-improvement and human perfectibility through the ethical application of science”,[28][29] as well as a critic of bio-conservative views.[30] With philosopher Toby Ord, he proposed the reversal test. Given humans’ irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[31]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[28] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy’s 2009 list of top global thinkers “for accepting no limits on human potential.”[32]

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.

Bostrom’s simulation argument posits that at least one of the following statements is very likely to be true:

To estimate the probability of at least one of those propositions holding, he offers the following equation:[33]


N can be calculated by multiplying the fraction of civilizations interested in performing such simulations (f₁) by the number of simulations run by such civilizations (N₁):

N = f₁ × N₁

Thus the formula becomes:

Because post-human computing power N₁ will be such a large value, at least one of the following three approximations will be true:
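The arithmetic of the argument can be sketched numerically. In Bostrom's 2003 paper the fraction of all human-type observers who are simulated is f_sim = (f_p·N)/(f_p·N + 1), where f_p is the fraction of civilizations that reach a post-human stage and N = f₁·N₁ as in the text above; the input numbers below are hypothetical:

```python
# Sketch of the simulation-argument arithmetic (hypothetical inputs).
# Per Bostrom's 2003 formulation, the fraction of observers that are
# simulated is f_sim = (f_p * N) / (f_p * N + 1), where f_p is the
# fraction of civilizations reaching a post-human stage and N = f1 * N1.

def simulated_fraction(f_p, f_1, n_1):
    n = f_1 * n_1  # average number of ancestor-simulations per civilization
    return (f_p * n) / (f_p * n + 1)

# Even with pessimistic fractions, a large N1 drives f_sim toward 1:
print(simulated_fraction(f_p=0.01, f_1=0.01, n_1=10**6))
```

This makes the structure of the trilemma visible: f_sim stays near zero only if f_p or f₁ is effectively zero, which correspond to the first two propositions.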

Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords Select Committee on Digital Skills. With Anders Sandberg, he was a consultant to the UK Government Office for Science (GOSE) and Foresight for “The Future of Human Identity” report, and he has served as an Expert Member of the World Economic Forum’s Agenda Council for Catastrophic Risks. He is an advisory board member for the Machine Intelligence Research Institute, the Future of Life Institute, and the Foundational Questions Institute in Physics and Cosmology, and an external advisor for the Cambridge Centre for the Study of Existential Risk.[34]


Nobel Peace Prize | Nobels fredspris

Posted: at 11:06 am

The Nobel Peace Prize is an international prize which is awarded annually by the Norwegian Nobel Committee according to guidelines laid down in Alfred Nobel’s will. The Peace Prize is one of five prizes that have been awarded annually since 1901 for outstanding contributions in the fields of physics, chemistry, physiology or medicine, literature, and peace. Starting in 1969, a Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel has also been awarded.

Whereas the other prizes are awarded by specialist committees based in Sweden, the Peace Prize is awarded by a committee appointed by the Norwegian Storting. According to Nobel’s will, the Peace Prize is to go to whoever “shall have done the most or the best work for fraternity between nations, for the abolition or reduction of standing armies and for the holding and promotion of peace congresses”. The prize includes a medal, a personal diploma, and a large sum of prize money (currently 8 million Swedish crowns).

The Nobel Peace Prize has been called “the world’s most prestigious prize”. With the award to the European Union in 2012, a total of 101 individuals and 24 organizations have been awarded the Peace Prize. The Prize is awarded at a ceremony in Oslo City Hall on December 10, the date on which Alfred Nobel died.


Photo: Odd-Steinar Tøllefsen / The Norwegian Nobel Institute

From the Nobel Peace Prize Ceremony of 2006


Technological singularity – Wikipedia, the free encyclopedia

Posted: June 14, 2016 at 4:42 pm

The technological singularity is a hypothetical event in which an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) enters a ‘runaway reaction’ of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence whose cognitive abilities could be, qualitatively, as far above humans’ as human intelligence is above ape intelligence.[1][2][3] More broadly, the term has historically been used for any form of accelerating or exponential technological progress hypothesized to result in a discontinuity, beyond which events may become unpredictable or even unfathomable to human intelligence.[4]

Historically, the first documented use of the term “singularity” in a technological context was by Stanislaw Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[5] The term “technological singularity” was popularized by mathematician, computer scientist and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity.[6] While some futurists such as Ray Kurzweil maintain that human-computer fusion, or “cyborgization”, is a plausible path to the singularity, most academic scholarship focuses on software-only intelligence as a more likely path.

In 2012, a study of artificial general intelligence (AGI) predictions by both experts and non-experts found a wide range of predicted dates, with a median value of 2040.[7] Discussing the level of uncertainty in AGI estimates, study co-author Stuart Armstrong stated: “my current 80% estimate is something like five to 100 years.”[8] Kurzweil predicts the singularity to occur around 2045[9] whereas Vinge has predicted some time before 2030.[10]

Strong AI might bring about an intelligence explosion, a term coined in 1965 by I. J. Good.[11] Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[12] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humanity.[13] If a superhuman intelligence were to be invented, either through the amplification of human intelligence or through artificial intelligence, it might be able to bring to bear greater problem-solving and inventive skills than current humans are capable of. It might then design an even more capable machine, or re-write its own software to become even more intelligent. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[14][15][16]

Many of the most recognized writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives will be like in a post-singularity world.[9][10][17] Vernor Vinge made an analogy between the breakdown in our ability to predict what would happen after the development of superintelligence and the breakdown of the predictive ability of modern physics at the space-time singularity beyond the event horizon of a black hole.[17]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[18][19][20] although Vinge and other prominent writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[10] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s Law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[19][21]

Gary Marcus claims that “virtually everyone in the A.I. field believes” that machines will one day overtake humans and “at some level, the only real difference between enthusiasts and skeptics is a time frame.”[22] However, many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose Moore’s Law is often cited in support of the concept.[23][24][25]

The exponential growth in computing technology suggested by Moore’s Law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s Law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[26] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit. Futurist Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[27]) increases exponentially, generalizing Moore’s Law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[28] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita has roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[29] Like other authors, though, Kurzweil reserves the term “singularity” for a rapid increase in intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[30] He believes that the “design of the human brain, while not simple, is nonetheless a billion times simpler than it appears, due to massive redundancy”.[31] According to Kurzweil, the reason why the brain has a messy and unpredictable quality is because the brain, like most biological systems, is a “probabilistic fractal”.[31] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum 
total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[32]
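The per-capita doubling times quoted above can be translated into annual growth factors with elementary arithmetic (growth per year = 2^(12/doubling_months)); the doubling times come from the text, only the conversion is added here:

```python
# Converting the per-capita doubling times quoted above (in months) into
# equivalent annual growth factors: growth = 2 ** (12 / doubling_months).

def annual_growth(doubling_months):
    """Annual multiplication factor implied by a doubling time in months."""
    return 2 ** (12 / doubling_months)

doubling_months = {
    "application-specific compute per capita": 14,
    "general-purpose compute per capita": 18,
    "telecom capacity per capita": 34,
    "storage capacity per capita": 40,
}

for name, months in doubling_months.items():
    print(f"{name}: x{annual_growth(months):.2f} per year")
```

A 14-month doubling time, for instance, corresponds to multiplying capacity by roughly 1.8 each year, while a 40-month doubling time corresponds to only about 1.2.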

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam (1958) tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[5]

Hawkins (1983) writes that “mindsteps”, dramatic and irreversible changes to paradigms or world views, are accelerating in frequency as quantified in his mindstep equation. He cites the inventions of writing, mathematics, and the computer as examples of such changes.

Kurzweil’s analysis of history concludes that technological progress follows a pattern of exponential growth, following what he calls the “Law of Accelerating Returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[33] Kurzweil believes that the singularity will occur before the end of the 21st century, setting the date at 2045.[34] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Presumably, a technological singularity would lead to rapid development of a Kardashev Type I civilization, one that has achieved mastery of the resources of its home planet.[35]

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[36]

The Acceleration Studies Foundation, an educational non-profit foundation founded by John Smart, engages in outreach, education, research and advocacy concerning accelerating change.[37] It produces the Accelerating Change conference at Stanford University, and maintains the educational site Acceleration Watch.

Recent advances, such as the mass production of graphene using modified kitchen blenders (2014) and high-temperature superconductors based on metamaterials, could allow supercomputers to be built that, while using only as much power as a typical Core i7 (45 W), could achieve the same computing power as IBM’s Blue Gene/L system.[38][39]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[40]

Steven Pinker stated in 2008,

(…) There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. (…)[23]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[41] postulates a “technology paradox” in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be “routine”.[42]

Joan Slonczewski and Adam Gopnik argue that the Singularity is a gradual process; that as humans gradually outsource our abilities to machines,[43] we redefine those abilities as inhuman, without realizing how little is left. This concept is called the Mitochondrial Singularity.[44] The idea refers to mitochondria, the organelle that evolved from autonomous bacteria but now powers our living cells. In the future, the “human being” within the machine exoskeleton may exist only to turn it on.

Jared Diamond, in Collapse: How Societies Choose to Fail or Succeed, argues that cultures self-limit when they exceed the sustainable carrying capacity of their environment, and the consumption of strategic resources (frequently timber, soils or water) creates a deleterious positive feedback loop that leads eventually to social collapse and technological retrogression.

Theodore Modis[45][46] and Jonathan Huebner[47] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining (John Smart, however, criticizes Huebner’s analysis[48]). Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-core processors.[49] While Kurzweil used Modis’ resources, and Modis’ work was around accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[46]

Others[who?] propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[50][51]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[52] Schmidhuber (2006) suggests differences in memory of recent and distant events create an illusion of accelerating change, and that such phenomena may be responsible for past apocalyptic predictions.

Andrew Kennedy, in his 2006 paper for the British Interplanetary Society discussing change and the growth in space travel velocities,[53] stated that although long-term overall growth is inevitable, it is small, embodying both ups and downs, and noted, “New technologies follow known laws of power use and information spread and are obliged to connect with what already exists. Remarkable theoretical discoveries, if they end up being used at all, play their part in maintaining the growth rate: they do not make its plotted curve… redundant.” He stated that exponential growth is no predictor in itself, and illustrated this with examples such as quantum theory. The quantum was conceived in 1900, and quantum theory was in existence and accepted approximately 25 years later. However, it took over 40 years for Richard Feynman and others to produce meaningful numbers from the theory. Bethe understood nuclear fusion in 1935, but 75 years later fusion reactors are still only used in experimental settings. Similarly, quantum entanglement was understood in 1935 but was not put to practical use until the 21st century.

Paul Allen argues the opposite of accelerating returns, the complexity brake;[25] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[54] a law of diminishing returns. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since.[47] The growth of complexity eventually becomes self-limiting, and leads to a widespread “general systems collapse”.

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[55] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[55]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[56] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[57][citation needed]

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[58][59] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[60][61] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Singularity Institute for Artificial Intelligence, which is now the Machine Intelligence Research Institute.[58]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”. The article argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth,[29] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[62]
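The figures quoted above can be checked with a back-of-the-envelope calculation (a sketch using only the numbers given in the article: 7.2 billion people, 6.2 billion nucleotides per genome, four nucleotides per byte, and 5 zettabytes of digital storage in 2014):

```python
import math

# Quantities quoted in the passage above.
humans = 7.2e9
nucleotides_per_genome = 6.2e9
bytes_per_nucleotide = 1 / 4          # one byte encodes four nucleotides
digital_2014_bytes = 5e21             # 5 zettabytes
biosphere_bytes = 1.325e37            # all DNA on Earth, in bytes

# All individual human genomes, in bytes (~1x10^19, as stated).
human_genomes_bytes = humans * nucleotides_per_genome * bytes_per_nucleotide
print(f"All human genomes: {human_genomes_bytes:.2e} bytes")

# Digital storage in 2014 was roughly 500x this figure.
print(f"Digital/genomic ratio: {digital_2014_bytes / human_genomes_bytes:.0f}x")

# Years for digital storage to rival the biosphere's DNA content
# at the quoted 30-38% compound annual growth rates.
for rate in (0.30, 0.38):
    years = math.log(biosphere_bytes / digital_2014_bytes) / math.log(1 + rate)
    print(f"At {rate:.0%} growth: {years:.0f} years")
```

The arithmetic reproduces the article's claims: about 1.1×10^19 bytes for all human genomes, a ratio of roughly 450 (i.e. ~500×), and about 110 years at the upper growth rate.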

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.

Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and have achieved “cockroach intelligence.” The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[63]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[64] A United States Navy report indicates that, as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[65][66]

The AAAI has commissioned a study to examine this issue,[67] pointing to programs like the Language Acquisition Device, which was claimed to emulate human interaction.

Some support the design of friendly artificial intelligence, meaning that the advances that are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[68]

Isaac Asimov’s Three Laws of Robotics is one of the earliest examples of proposed safety measures for AI. The laws are intended to prevent artificially intelligent robots from harming humans. In Asimov’s stories, any perceived problems with the laws tend to arise as a result of a misunderstanding on the part of some human operator; the robots themselves are merely acting on their best interpretation of their rules. In the 2004 film I, Robot, loosely based on Asimov’s Robot stories, an AI attempts to take complete control over humanity for the purpose of protecting humanity from itself due to an extrapolation of the Three Laws. In 2004, the Machine Intelligence Research Institute launched an Internet campaign called 3 Laws Unsafe to raise awareness of AI safety issues and the inadequacy of Asimov’s laws in particular.[69]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[70] Kurzweil further buttresses his argument by discussing current bioengineering advances. Kurzweil analyzed somatic gene therapy (SGT), in which scientists attempt to infect patients with modified viruses with the goal of altering the DNA in cells that lead to degenerative diseases and aging. Celera Genomics, a company focused on creating genetic sequencing technology, has already fulfilled the task of creating synthetic viruses with specific genetic information. The next step would be to apply this technology to gene therapy.[71] Kurzweil’s point is that SGT provides the best example of how immortality is achievable by replacing our DNA with synthesized genes.

Computer scientist Jaron Lanier writes, “The Singularity [involves] people dying in the flesh and being uploaded into a computer and remaining conscious.”[72] The essence of Lanier’s argument is that in order to keep living, even after death, we would need to abandon our physical bodies and have our minds programmed into a virtual reality. This parallels the religious concept of an afterlife where one continues to exist beyond physical death.

Strong artificial intelligence can also be idealized as “a matter of faith”, and Ray Kurzweil is reported to have said that the creation of a deity may be a possible outcome of the singularity.[73]

Singularitarianism has been likened to a religion by John Horgan.[74]

Nicolas de Condorcet, the 18th-century French mathematician, philosopher, and revolutionary, is commonly credited[citation needed] for being one of the earliest persons to contend the existence of a singularity. In his 1794 Sketch for a Historical Picture of the Progress of the Human Mind, Condorcet states,

“Nature has set no term to the perfection of human faculties; that the perfectibility of man is truly indefinite; and that the progress of this perfectibility, from now onwards independent of any power that might wish to halt it, has no other limit than the duration of the globe upon which nature has cast us. This progress will doubtless vary in speed, but it will never be reversed as long as the earth occupies its present place in the system of the universe, and as long as the general laws of this system produce neither a general cataclysm nor such changes as will deprive the human race of its present faculties and its present resources.”[75]

In 1847, R. Thornton, the editor of The Expounder of Primitive Christianity,[76] wrote about the recent invention of a four-function mechanical calculator:

…such machines, by which the scholar may, by turning a crank, grind out the solution of a problem without the fatigue of mental application, would by its introduction into schools, do incalculable injury. But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!

In 1863, Samuel Butler wrote Darwin Among the Machines, which was later incorporated into his novel Erewhon. He pointed out the rapid evolution of technology and compared it with the evolution of life. He wrote:

Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organised machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand! May not the world last twenty million years longer? If so, what will they not in the end become?…we cannot calculate on any corresponding advance in man’s intellectual or physical powers which shall be a set-off against the far greater development which seems in store for the machines.

In 1909, the historian Henry Adams wrote an essay, The Rule of Phase Applied to History,[77] in which he developed a “physical theory of history” by applying the law of inverse squares to historical periods, proposing a “Law of the Acceleration of Thought.” Adams interpreted history as a process moving towards an “equilibrium”, and speculated that this process would “bring Thought to the limit of its possibilities in the year 1921. It may well be!”, adding that the “consequences may be as surprising as the change of water to vapor, of the worm to the butterfly, of radium to electrons.”[78] The futurist John Smart has called Adams “Earth’s First Singularity Theorist”.[79]

In 1951, Alan Turing spoke of machines outstripping humans intellectually:[80]

once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.

In his obituary for John von Neumann, Stanislaw Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[5]

In 1965, I. J. Good first wrote of an “intelligence explosion”, suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a singularity).
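Good's cascade can be illustrated with a toy numerical model (purely illustrative, not taken from Good's paper): if each design cycle yields an improvement proportional to the designer's current capability, early gains are modest but growth accelerates at every step, producing the sudden surge he described.

```python
# Toy model of recursive self-improvement: each cycle's gain scales
# with the square of current capability, so a smarter designer improves
# itself faster on the next cycle (assumed parameters, for illustration only).
def intelligence_explosion(initial=1.0, gain=0.1, cycles=10):
    """Return the capability trajectory over a number of design cycles."""
    capability = initial
    history = [capability]
    for _ in range(cycles):
        capability += gain * capability ** 2  # improvement grows with capability
        history.append(capability)
    return history

trajectory = intelligence_explosion()
# Early steps add little; the per-cycle growth factor keeps rising.
for step, value in enumerate(trajectory):
    print(f"cycle {step}: capability {value:.2f}")
```

Under these assumptions, the first cycle adds only 10%, while the last cycle adds over 40%: the growth factor itself grows, which is the qualitative point of the "intelligence explosion" argument.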

In 1983, mathematician and author Vernor Vinge greatly popularized Good’s notion of an intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines,[81][82] writing:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

In 1984, Samuel R. Delany used “cultural fugue” as a plot device in his science-fiction novel Stars in My Pocket Like Grains of Sand; the terminal runaway of technological and cultural complexity in effect destroys all life on any world on which it transpires, a process poorly understood by the novel’s characters, and against which they seek a stable defense. In 1985, Ray Solomonoff introduced the notion of “infinity point”[83] in the time-scale of artificial intelligence, analyzed the magnitude of the “future shock” that “we can expect from our AI expanded scientific community” and on social effects. Estimates were made “for when these milestones would occur, followed by some suggestions for the more effective utilization of the extremely rapid technological growth that is expected”.

Vinge also popularized the concept in SF novels such as Marooned in Realtime (1986) and A Fire Upon the Deep (1992). The former is set in a world of rapidly accelerating change leading to the emergence of more and more sophisticated technologies separated by shorter and shorter time-intervals, until a point beyond human comprehension is reached. The latter starts with an imaginative description of the evolution of a superintelligence passing through exponentially accelerating developmental stages ending in a transcendent, almost omnipotent power unfathomable by mere humans. Vinge also implies that the development may not stop at this level.

In his 1988 book Mind Children, computer scientist and futurist Hans Moravec generalizes Moore’s law to make predictions about the future of artificial life. Moravec outlines a timeline and a scenario in this regard,[84][85] in that robots will evolve into a new series of artificial species, starting around 2030–2040.[86] In Robot: Mere Machine to Transcendent Mind, published in 1998, Moravec further considers the implications of evolving robot intelligence, generalizing Moore’s law to technologies predating the integrated circuit, and speculating about a coming “mind fire” of rapidly expanding superintelligence, similar to Vinge’s ideas.

A 1993 article by Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era”,[10] spread widely on the internet and helped to popularize the idea.[87] This article contains the oft-quoted statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge refines his estimate of the time-scales involved, adding, “I’ll be surprised if this event occurs before 2005 or after 2030.”

Vinge predicted four ways the singularity could occur: the development of computers that are “awake” and superhumanly intelligent; large computer networks (and their associated users) “waking up” as a superhumanly intelligent entity; computer/human interfaces becoming so intimate that their users may reasonably be considered superhumanly intelligent; and biological science finding ways to improve upon the natural human intellect.[88]

Vinge continues by predicting that superhuman intelligences will be able to enhance their own minds faster than their human creators. “When greater-than-human intelligence drives progress,” Vinge writes, “that progress will be much more rapid.” He predicts that this feedback loop of self-improving intelligence will cause large amounts of technological progress within a short period, and states that the creation of superhuman intelligence represents a breakdown in humans’ ability to model their future. His argument was that authors cannot write realistic characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express. Vinge named this event “the Singularity”.

Damien Broderick’s popular science book The Spike (1997) was the first[citation needed] to investigate the technological singularity in detail.

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[89]

In 2005, Ray Kurzweil published The Singularity is Near, which brought the idea of the singularity to the popular media both through the book’s accessibility and through a publicity campaign that included an appearance on The Daily Show with Jon Stewart.[90] The book stirred intense controversy, in part because Kurzweil’s utopian predictions contrasted starkly with other, darker visions of the possibilities of the singularity.[original research?] Kurzweil, his theories, and the controversies surrounding it were the subject of Barry Ptolemy’s documentary Transcendent Man.

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[19] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.

In 2008, Robin Hanson (taking “singularity” to refer to sharp increases in the exponent of economic growth) listed the Agricultural and Industrial Revolutions as past singularities. Extrapolating from such past events, Hanson proposes that the next economic singularity should increase economic growth between 60 and 250 times. An innovation that allowed for the replacement of virtually all human labor could trigger this event.[91]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[92] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2010, Aubrey de Grey applied the term “Methuselarity”[93] to the point at which medical technology improves so fast that expected human lifespan increases by more than one year per year. In “Apocalyptic AI Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality”[94] (2010), Robert Geraci offers an account of the developing “cyber-theology” inspired by Singularity studies. The 1996 novel Holy Fire by Bruce Sterling explores some of those themes and postulates that a Methuselarity will become a gerontocracy.

In 2011, Kurzweil noted existing trends and concluded that it appeared increasingly likely that the singularity would occur around 2045. He told Time magazine: “We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence.”[95]

James P. Hogan’s 1979 novel The Two Faces of Tomorrow is an explicit description of what is now called the Singularity. An artificial intelligence system solves an excavation problem on the moon in a brilliant and novel way, but nearly kills a work crew in the process. Realizing that systems are becoming too sophisticated and complex to predict or manage, a scientific team sets out to teach a sophisticated computer network how to think more humanly. The story documents the rise of self-awareness in the computer system, the humans’ loss of control and failed attempts to shut down the experiment as the computer desperately defends itself, and the computer intelligence reaching maturity.

While discussing the singularity’s growing recognition, Vernor Vinge wrote in 1993 that “it was the science-fiction writers who felt the first concrete impact.” In addition to his own short story “Bookworm, Run!”, whose protagonist is a chimpanzee with intelligence augmented by a government experiment, he cites Greg Bear’s novel Blood Music (1983) as an example of the singularity in fiction. Vinge described surviving the singularity in his 1986 novel Marooned in Realtime. Vinge later expanded the notion of the singularity to a galactic scale in A Fire Upon the Deep (1992), a novel populated by transcendent beings, each the product of a different race and possessed of distinct agendas and overwhelming power.

In William Gibson’s 1984 novel Neuromancer, artificial intelligences capable of improving their own programs are strictly regulated by special “Turing police” to ensure they never exceed a certain level of intelligence, and the plot centers on the efforts of one such AI to circumvent their control.

A malevolent AI achieves omnipotence in Harlan Ellison’s short story I Have No Mouth, and I Must Scream (1967).

The web comic Questionable Content takes place in a “Friendly AI” post-singularity world.[96]

Popular movies in which computers become intelligent and try to overpower the human race include Colossus: The Forbin Project; the Terminator series; The Matrix series; Transformers; the very loose film adaptation of Isaac Asimov’s I, Robot; and finally Stanley Kubrick and Arthur C. Clarke’s 2001: A Space Odyssey. The television series Doctor Who, Battlestar Galactica, and Star Trek: The Next Generation (which also delves into virtual reality, cybernetics, alternative forms of life, and Mankind’s possible evolutionary path) also explore these themes. Out of all these, only Colossus features a true superintelligence. “The Machine” by writer-director Caradog James follows two scientists as they create the world’s first self-aware artificial intelligence during a cold war. The entire plot of Wally Pfister’s Transcendence centers on an unfolding singularity scenario. The 2013 science fiction film Her follows a man’s romantic relationship with a highly intelligent AI, who eventually learns how to improve herself and creates an intelligence explosion. The adaptation of Philip K. Dick’s Do Androids Dream of Electric Sheep? into the film Blade Runner, Ex Machina, and Tron explore the concept of the genesis of thinking machines and their relation to and impact on humanity.

Accelerating progress features in some science fiction works, and is a central theme in Charles Stross’s Accelerando. Other notable authors that address singularity-related issues include Robert Heinlein, Karl Schroeder, Greg Egan, Ken MacLeod, Rudy Rucker, David Brin, Iain M. Banks, Neal Stephenson, Tony Ballantyne, Bruce Sterling, Dan Simmons, Damien Broderick, Fredric Brown, Jacek Dukaj, Stanislaw Lem, Nagaru Tanigawa, Douglas Adams, Michael Crichton, and Ian McDonald.

The documentary Transcendent Man, based on The Singularity Is Near, covers Kurzweil’s quest to reveal what he believes to be mankind’s destiny. Another documentary, Plug & Pray, focuses on the promise, problems and ethics of artificial intelligence and robotics, with Joseph Weizenbaum and Kurzweil as the main subjects of the film.[97] A 2012 documentary titled simply The Singularity covers both futurist and counter-futurist perspectives.[98]

In music, the album The Singularity (Phase I: Neohumanity) by the Swedish band Scar Symmetry is part one of a three-part concept album based on the events of the singularity.

In the second episode of the fourth season of The Big Bang Theory, the fictional scientist Sheldon Cooper tries to prolong his life expectancy by exercising and radically changing his diet, hoping to survive until the singularity and live on as a cyborg.

The popular comic strip Dilbert, by Scott Adams, ran a series of strips covering the concept of the singularity in late November and early December 2015. In the series, a robot built by Dilbert’s company becomes increasingly intelligent, even to the point of having a soul and learning how to program.[99]

Read this article:

Technological singularity – Wikipedia, the free encyclopedia


Center for Alternative Medicine Ohio

Posted: June 13, 2016 at 12:51 pm


Guy G. DeAngelis, N.D., Ph.D.

614-284-2626 | info@centeralternativemedicine.com | Center for Alternative Medicine, Naturopathic and Integrative Medicine

Dr. Guy G. DeAngelis is a naturopathic doctor whose practice is dedicated to helping individuals lead healthy, vibrant lives. He lives the naturopathic tenet of “doctor as teacher” every day by educating patients about the root causes of their health challenges and instructing them in ways they can support their body’s own innate healing abilities to return them to wellness.

Sincere and caring, Dr. Guy offers each patient unhurried one-on-one attention, taking time to get to know the person as well as the health concern. His recommendations for therapies and lifestyle modifications are carefully developed to meet each individual’s specific needs. His practice is distinguished from many by his interest and research in evidence-based natural therapies.

In addition to a doctorate in naturopathic medicine, he holds a doctorate in philosophy and actively participates in continuing education to expand his skills in alternative medicine modalities.

He is a member of the American Naturopathic Medical Association and a registered healer with the International Natural Healers Association (INHA), the International Iridology Practitioners Association (IIPA), the Association for Applied Psychophysiology and Biofeedback (AAPB), the International Association of Sound Therapy (IAST), the Health Keepers Alliance (HKA), and the American Association of Nutritional Consultants (AANC).

Anne Marie Meshanko, M.A. | 614-354-4245 | Soul Steps | Reiki Master Teacher & Healer

I teach workshops and seminars and also work personally and intuitively with individuals, respecting each person’s space and unique journey. Each of us can heal ourselves as we walk in our own footsteps, creating our own reality, always conscious that other energies do interact with ours (positively or negatively) depending on the degree of our coherence and consciousness.

Twenty-plus years of intuitive experience and exploration: writing, studying, interactive teaching, lecturing, and working with physical, mental, spiritual, and emotional transformation.

M.A. in Theology, University of Dayton; Reiki Master teacher/practitioner; cranial-sacral training.

I teach at wellness centers, write articles for publication, appear on TV, speak at retreats, and work in several states while maintaining a working space for Quantum Transformation and Reiki in Columbus, Ohio. I study life through the eyes of my five children and grandchildren – my greatest teachers.

Karen M. Kiener | 614-214-1791 | kkiener@gmail.com

New Leaf Healthy Lifestyles, LLC | Certified Health Coach

Karen Kiener is a Certified Health Coach who provides encouraging coaching and workshops for people who want to live healthier lives. She “walks the talk” of a healthy lifestyle and can share with you her own story of improved health, energy, and wellbeing.

Prior to health coaching, Karen followed a path many others have: earning a B.A. in Communication and working in marketing and sales. Her experience in work environments ranging from small businesses to Fortune 500 corporations gives her personal knowledge of the work-life balance challenges so many people struggle with today in trying to lead a healthy life.

Karen chose to train for her health coaching certification with the Dr. Sears Wellness Institute because their curriculum addresses all aspects of wellness, not just nutrition, and it’s backed by science.

On her own journey to living a healthier life Karen found she loved sharing in the joy and excitement of friends who’d also made healthy changes. That’s what led her to health coaching – she found a genuine love for helping others discover that living a healthy lifestyle can be enjoyable and life-changing.

Contact Karen today. You absolutely can make lasting changes and enjoy a long, healthy, and happy life.

Amy Buenning is trained and has experience in Swedish relaxation massage, pregnancy massage, Neuro-Muscular therapy, Myofascial Release, and Reflexology. She has worked with people from all walks of life – from professional athletes to children!

Her energy work consists of a combination of Reiki, Therapeutic touch, Qi balance, Intuitive reading, and Acupressure based on the specific needs of the client.

If you have questions or would like to schedule an appointment, please contact Amy’s Place at the Center for Alternative Medicine, 614-537-8438, amys.placeCAM@yahoo.com.

Janine came to energetic medicine through a friend who found it to be the only thing to bring her relief from a chronic and debilitating illness. Having traveled the world and encountered a wide range of medical practices she was curious to learn more. Little did Janine know that she was about to embark upon a learning journey that would change her life.

First as a client, and now as a practitioner Janine Beaudette, CBT, has come to appreciate the deeply transformative nature of energy work. As a consequence, Janine has dedicated her life to becoming the most able, professional and compassionate practitioner that she can be.

It is Janine’s goal to remain at the forefront of this growing discipline, and in so doing to provide the highest standards of care.


Ed Mack, RMT 614-702-7004 http://www.reawakeningsllc.com ReAwakenings.Life@gmail.com

After a heart procedure in 2003, Ed witnessed his heart monitor indicate that his vital signs had flatlined. What followed was a near-death experience and a trip to Heaven. Not permitted to remain on the Other Side as he wished, Ed was escorted back to his hospital bed by two Saints. Thus began his amazing spiritual journey.

Soon after, Ed discovered that he had received talents that he was unaware of prior to his NDE. Exploring Reiki, he became a Reiki Master Teacher in 2005.

Ed began pursuing metaphysical interests, becoming skilled in guided meditations, past life regressions, and empathetic listening.

Having worked as an engineer for 34 years in the underground mining industry, Ed understands the range of emotions resulting from a stressful life and career, and the toll they take on your health. Eventually he learned how to relax, release, and renew. He discovered numerous spiritual exercises that bring about internal peace, and his mission is to share his successes with others.

As part of his practice, Ed offers Reiki, both hands-on and long-distance. His guided meditations relax and refresh. Past life regressions offer insight into what a person is experiencing.

Empathetic listening allows you to talk it out: to get it off your chest. Ed listens without judgement or opinion. You are free to talk about anything with the strictest of confidentiality.

You may choose to experience one modality during your appointment or a combination of those that you wish.

Let’s get together to change your life to a happy and healthy one!

Read more:

Center for Alternative Medicine Ohio


transhumanism | social and philosophical movement | Britannica.com

Posted: March 25, 2016 at 2:44 am

Transhumanism, social and philosophical movement devoted to promoting the research and development of robust human-enhancement technologies. Such technologies would augment or increase human sensory reception, emotive ability, or cognitive capacity as well as radically improve human health and extend human life spans. Such modifications resulting from the addition of biological or physical technologies would be more or less permanent and integrated into the human body.

The term transhumanism was originally coined by English biologist and philosopher Julian Huxley in his 1957 essay of the same name. Huxley referred principally to improving the human condition through social and cultural change, but the essay and the name have been adopted as seminal by the transhumanism movement, which emphasizes material technology. Huxley held that, although humanity had naturally evolved, it was now possible for social institutions to supplant evolution in refining and improving the species. The ethos of Huxley’s essay, if not its letter, can be located in transhumanism’s commitment to assuming the work of evolution, but through technology rather than society.

The movement’s adherents tend to be libertarian and employed in high technology or in academia. Its principal proponents have been prominent technologists like American computer scientist and futurist Ray Kurzweil and scientists like Austrian-born Canadian computer scientist and roboticist Hans Moravec and American nanotechnology researcher Eric Drexler, with the addition of a small but influential contingent of thinkers such as American philosopher James Hughes and Swedish philosopher Nick Bostrom. The movement has evolved since its beginnings as a loose association of groups dedicated to extropianism (a philosophy devoted to the transcendence of human limits). Transhumanism is principally divided between adherents of two visions of post-humanity: one in which technological and genetic improvements have created a distinct species of radically enhanced humans, and the other in which greater-than-human machine intelligence emerges.

The membership of the transhumanist movement tends to split in an additional way. One prominent strain of transhumanism argues that social and cultural institutions, including national and international governmental organizations, will be largely irrelevant to the trajectory of technological development. Market forces and the nature of technological progress will drive humanity to approximately the same end point regardless of social and cultural influences. That end point is often referred to as the singularity, a metaphor drawn from astrophysics and referring to the point of hyperdense material at the centre of a black hole, which generates its intense gravitational pull. Among transhumanists, the singularity is understood as the point at which artificial intelligence surpasses that of humanity, allowing the convergence of human and machine consciousness. That convergence will herald an increase in human consciousness, physical strength, emotional well-being, and overall health, and will greatly extend the length of human lifetimes.

The second strain of transhumanism holds a contrasting view, that social institutions (such as religion, traditional notions of marriage and child rearing, and Western perspectives of freedom) not only can influence the trajectory of technological development but could ultimately retard or halt it. Bostrom and American philosopher David Pearce founded the World Transhumanist Association in 1998 as a nonprofit organization dedicated to working with those social institutions to promote and guide the development of human-enhancement technologies and to combat those social forces seemingly dedicated to halting such technological progress.

See original here:

transhumanism | social and philosophical movement | Britannica.com


Hedonism – Wikipedia, the free encyclopedia

Posted: February 8, 2016 at 9:44 pm

Hedonism is a school of thought that argues that pleasure is the primary or most important intrinsic good.[1]

A hedonist strives to maximize net pleasure (pleasure minus pain).

Ethical hedonism is the idea that all people have the right to do everything in their power to achieve the greatest amount of pleasure possible to them, assuming that their actions do not infringe on the equal rights of others. It is also the idea that every person’s pleasure should far surpass their amount of pain. Ethical hedonism is said to have been started by Aristippus of Cyrene, a student of Socrates. He held the idea that pleasure is the highest good.[2]

The name derives from the Greek hēdonismos, from hēdonē (“pleasure”, cognate with English sweet) plus the suffix -ismos (“-ism”). The Greek word is said to derive from the ancient Assyrian word “adtu”, meaning “delight”.

In the original Old Babylonian version of the Epic of Gilgamesh, which was written soon after the invention of writing, Siduri gave the following advice “Fill your belly. Day and night make merry. Let days be full of joy. Dance and make music day and night […] These things alone are the concern of men”, which may represent the first recorded advocacy of a hedonistic philosophy.[3]

Scenes of a harper entertaining guests at a feast were common in ancient Egyptian tombs (see Harper’s Songs), and sometimes contained hedonistic elements, calling guests to submit to pleasure because they cannot be sure that they will be rewarded for good with a blissful afterlife. The following is a song attributed to the reign of one of the Intef kings before or after the 12th dynasty; the text was used in the eighteenth and nineteenth dynasties.[4][5]

Let thy desire flourish, In order to let thy heart forget the beatifications for thee. Follow thy desire, as long as thou shalt live. Put myrrh upon thy head and clothing of fine linen upon thee, Being anointed with genuine marvels of the gods’ property. Set an increase to thy good things; Let not thy heart flag. Follow thy desire and thy good. Fulfill thy needs upon earth, after the command of thy heart, Until there come for thee that day of mourning.

Cārvāka was an Indian hedonist school of thought that arose around 600 BC and died out in the 14th century CE. The Cārvākas maintained that the Hindu scriptures are false, that the priests are liars, that there is no afterlife, and that pleasure should be the aim of living. Unlike other Indian schools of philosophy, the Cārvākas argued that there is nothing wrong with sensual indulgence. They held a naturalistic worldview and believed that perception is the only source of knowledge.

A famous Cārvāka saying runs: “Yevat jivet sukham jivet, rinam kritva gritam pivet, bhasm bhutasya deham, punara’janmam kutah?” This means: “Live with full pleasure as long as you are alive. Borrow heavily for your worldly pleasures (e.g., drinking clarified and tasty butter); once your body dies, will it ever come back again?”

Democritus seems to be the earliest philosopher on record to have categorically embraced a hedonistic philosophy; he called the supreme goal of life “contentment” or “cheerfulness”, claiming that “joy and sorrow are the distinguishing mark of things beneficial and harmful” (DK 68 B 188).[6]

The Cyrenaics were an ultra-hedonist Greek school of philosophy founded in the 4th century BC, supposedly by Aristippus of Cyrene, although many of the principles of the school are believed to have been formalized by his grandson of the same name, Aristippus the Younger. The school was so called after Cyrene, the birthplace of Aristippus. It was one of the earliest Socratic schools. The Cyrenaics taught that the only intrinsic good is pleasure, which meant not just the absence of pain, but positively enjoyable sensations. Of these, momentary pleasures, especially physical ones, are stronger than those of anticipation or memory. They did, however, recognize the value of social obligation, and that pleasure could be gained from altruism[citation needed]. Theodorus the Atheist, a disciple of the younger Aristippus, was a later exponent of hedonism[7] who became well known for expounding atheism. The school died out within a century and was replaced by Epicureanism.

The Cyrenaics were known for their skeptical theory of knowledge. They reduced logic to a basic doctrine concerning the criterion of truth.[8] They thought that we can know with certainty our immediate sense-experiences (for instance, that I am having a sweet sensation now) but can know nothing about the nature of the objects that cause these sensations (for instance, that the honey is sweet).[9] They also denied that we can have knowledge of what the experiences of other people are like.[10] All knowledge is immediate sensation. These sensations are motions which are purely subjective, and are painful, indifferent or pleasant, according as they are violent, tranquil or gentle.[9][11] Further they are entirely individual, and can in no way be described as constituting absolute objective knowledge. Feeling, therefore, is the only possible criterion of knowledge and of conduct.[9] Our ways of being affected are alone knowable. Thus the sole aim for everyone should be pleasure.

Cyrenaicism deduces a single, universal aim for all people: pleasure. Furthermore, all feeling is momentary and homogeneous. It follows that past and future pleasure have no real existence for us, and that among present pleasures there is no distinction of kind.[11] Socrates had spoken of the higher pleasures of the intellect; the Cyrenaics denied the validity of this distinction and said that bodily pleasures, being more simple and more intense, were preferable.[12] Momentary pleasure, preferably of a physical kind, is the only good for humans. However, some actions which give immediate pleasure can create more than their equivalent of pain. The wise person should be in control of pleasures rather than be enslaved to them, otherwise pain will result, and this requires judgement to evaluate the different pleasures of life.[13] Regard should be paid to law and custom, because even though these things have no intrinsic value on their own, violating them will lead to unpleasant penalties being imposed by others.[12] Likewise, friendship and justice are useful because of the pleasure they provide.[12] Thus the Cyrenaics believed in the hedonistic value of social obligation and altruistic behaviour.

Epicureanism is a system of philosophy based upon the teachings of Epicurus (c. 341–c. 270 BC), founded around 307 BC. Epicurus was an atomic materialist, following in the steps of Democritus and Leucippus. His materialism led him to a general stance against superstition and the idea of divine intervention. Following Aristippus, about whom very little is known, Epicurus believed that the greatest good was to seek modest, sustainable “pleasure” in the form of a state of tranquility and freedom from fear (ataraxia) and absence of bodily pain (aponia) through knowledge of the workings of the world and the limits of our desires. The combination of these two states is supposed to constitute happiness in its highest form. Although Epicureanism is a form of hedonism, insofar as it declares pleasure the sole intrinsic good, its conception of absence of pain as the greatest pleasure and its advocacy of a simple life make it different from “hedonism” as it is commonly understood.

In the Epicurean view, the highest pleasure (tranquility and freedom from fear) was obtained by knowledge, friendship and living a virtuous and temperate life. He lauded the enjoyment of simple pleasures, by which he meant abstaining from bodily desires, such as sex and appetites, verging on asceticism. He argued that when eating, one should not eat too richly, for it could lead to dissatisfaction later, such as the grim realization that one could not afford such delicacies in the future. Likewise, sex could lead to increased lust and dissatisfaction with the sexual partner. Epicurus did not articulate a broad system of social ethics that has survived but had a unique version of the Golden Rule.

It is impossible to live a pleasant life without living wisely and well and justly (agreeing “neither to harm nor be harmed”),[14] and it is impossible to live wisely and well and justly without living a pleasant life.[15]

Epicureanism was originally a challenge to Platonism, though later it became the main opponent of Stoicism. Epicurus and his followers shunned politics. After the death of Epicurus, his school was headed by Hermarchus; later many Epicurean societies flourished in the Late Hellenistic era and during the Roman era (such as those in Antiochia, Alexandria, Rhodes and Ercolano). The poet Lucretius is its best-known Roman proponent. By the end of the Roman Empire, having undergone Christian attack and repression, Epicureanism had all but died out; it would be resurrected in the 17th century by the atomist Pierre Gassendi, who adapted it to Christian doctrine.

Some writings by Epicurus have survived. Some scholars consider the epic poem On the Nature of Things by Lucretius to present in one unified work the core arguments and theories of Epicureanism. Many of the papyrus scrolls unearthed at the Villa of the Papyri at Herculaneum are Epicurean texts. At least some are thought to have belonged to the Epicurean Philodemus.

Mohism was a philosophical school of thought founded by Mozi in the 5th century BC. It paralleled the utilitarianism later developed by English thinkers. As Confucianism became the preferred philosophy of later Chinese dynasties, starting from the Emperor Wu of Han, Mohism and other non-Confucian philosophical schools of thought were suppressed.[citation needed]

Christian hedonism is a controversial Christian doctrine current in some evangelical circles, particularly those of the Reformed tradition.[16] The term was first coined by Reformed Baptist theologian John Piper in his 1986 book Desiring God: “My shortest summary of it is: God is most glorified in us when we are most satisfied in him. Or: The chief end of man is to glorify God by enjoying him forever. Does Christian Hedonism make a god out of pleasure? No. It says that we all make a god out of what we take most pleasure in.”[16] Piper states his term may describe the theology of Jonathan Edwards, who referred to “a future enjoyment of him [God] in heaven”.[17] In the 17th century, the atomist Pierre Gassendi had adapted Epicureanism to Christian doctrine.

Utilitarianism addresses problems with moral motivation neglected by Kantianism by giving a central role to happiness. It is an ethical theory holding that the proper course of action is the one that maximizes the overall “good” of society.[18] It is thus one form of consequentialism, meaning that the moral worth of an action is determined by its resulting outcome. The most influential contributors to this theory are considered to be the 18th- and 19th-century British philosophers Jeremy Bentham and John Stuart Mill. Conjoining hedonism, as a view as to what is good for people, to utilitarianism has the result that all action should be directed toward achieving the greatest total amount of happiness (see Hedonic calculus). Though consistent in their pursuit of happiness, Bentham’s and Mill’s versions of hedonism differ, and there are accordingly two somewhat basic schools of thought on hedonism:[1] quantitative hedonism, associated with Bentham, which holds that pleasures differ only in measurable respects such as intensity and duration; and qualitative hedonism, associated with Mill, which holds that the “higher” pleasures of the intellect are superior in kind to bodily pleasures.
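The hedonic calculus mentioned above lends itself to a toy computation: Bentham proposed scoring each act along dimensions such as the intensity, duration, and certainty of the pleasures and pains it produces, then choosing the act with the greatest net pleasure. A minimal illustrative sketch follows; the scoring function, the acts, and all numeric values are hypothetical, not Bentham's own.

```python
def hedonic_score(pleasures, pains):
    """Net pleasure of an act: total pleasure minus total pain.

    Each sensation is a (intensity, duration, certainty) tuple;
    certainty in [0, 1] discounts a sensation that may not occur.
    """
    def total(sensations):
        return sum(intensity * duration * certainty
                   for intensity, duration, certainty in sensations)
    return total(pleasures) - total(pains)

# Two hypothetical acts: an intense feast likely followed by regret,
# versus a milder but longer, pain-free quiet evening.
feast = hedonic_score(pleasures=[(8, 2, 1.0)], pains=[(4, 6, 0.5)])   # 16 - 12 = 4
quiet_evening = hedonic_score(pleasures=[(3, 5, 1.0)], pains=[])      # 15 - 0 = 15

# Pick the act with the highest net pleasure.
best = max([("feast", feast), ("quiet evening", quiet_evening)],
           key=lambda act: act[1])
```

On these made-up numbers the quiet evening wins, which mirrors the quantitative idea: a long, certain, pain-free pleasure can outscore a brief intense one. Mill's qualitative objection is precisely that no such single scale exists for pleasures of different kinds.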

Contemporary proponents of hedonism include Swedish philosopher Torbjörn Tännsjö,[19] Fred Feldman,[20] and Spanish ethical philosopher Esperanza Guisán, who published a “Hedonist manifesto” in 1990.[21]

A dedicated contemporary hedonist philosopher and writer on the history of hedonistic thought is the Frenchman Michel Onfray. He has written two books directly on the subject (L’invention du plaisir: fragments cyrénaïques[22] and La puissance d’exister: Manifeste hédoniste).[23] He defines hedonism “as an introspective attitude to life based on taking pleasure yourself and pleasuring others, without harming yourself or anyone else.”[24] “Onfray’s philosophical project is to define an ethical hedonism, a joyous utilitarianism, and a generalized aesthetic of sensual materialism that explores how to use the brain’s and the body’s capacities to their fullest extent — while restoring philosophy to a useful role in art, politics, and everyday life and decisions.”[25]

Onfray’s works “have explored the philosophical resonances and components of (and challenges to) science, painting, gastronomy, sex and sensuality, bioethics, wine, and writing. His most ambitious project is his projected six-volume Counter-history of Philosophy,”[25] of which three volumes have been published. For him, “In opposition to the ascetic ideal advocated by the dominant school of thought, hedonism suggests identifying the highest good with your own pleasure and that of others; the one must never be indulged at the expense of sacrificing the other. Obtaining this balance, my pleasure at the same time as the pleasure of others, presumes that we approach the subject from different angles: political, ethical, aesthetic, erotic, bioethical, pedagogical, historiographical.”

For this he has “written books on each of these facets of the same world view.”[26] His philosophy aims for “micro-revolutions,” or revolutions of the individual and small groups of like-minded people who live by his hedonistic, libertarian values.[27]

The Abolitionist Society is a transhumanist group calling for the abolition of suffering in all sentient life through the use of advanced biotechnology. Their core philosophy is negative utilitarianism. David Pearce is a theorist of this perspective and he believes and promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto The Hedonistic Imperative[28] outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being, a project he refers to as “paradise engineering”.[29] A transhumanist and a vegan,[30] Pearce believes that we (or our future posthuman descendants) have a responsibility not only to avoid cruelty to animals within human society but also to alleviate the suffering of animals in the wild.

Critics of hedonism have objected to its exclusive concentration on pleasure as valuable.

In particular, G. E. Moore offered a thought experiment in criticism of pleasure as the sole bearer of value: he imagined two worlds, one of exceeding beauty and the other a heap of filth. Neither of these worlds will be experienced by anyone. The question, then, is whether it is better for the beautiful world to exist than the heap of filth. With this, Moore implied that states of affairs have value beyond conscious pleasure, which he said spoke against the validity of hedonism.[31]

Chisholm, Hugh, ed. (1911). “Hedonism”. Encyclopædia Britannica (11th ed.). Cambridge University Press.

Here is the original post:

Hedonism – Wikipedia, the free encyclopedia


How Laissez-Faire Made Sweden Rich | Libertarianism.org

Posted: August 8, 2015 at 1:40 pm

October 25, 2013 | Essays

Sweden often gets held up as an example of how socialism can work better than markets. But, as Norberg shows, Sweden’s history in fact points to the opposite conclusion.

Once upon a time I got interested in theories of economic development because I had studied a low-income country, poorer than Congo, with life expectancy half as long and infant mortality three times as high as the average developing country.

That country is my own country, Sweden, less than 150 years ago.

At that time Sweden was incredibly poor, and hungry. When there was a crop failure, my ancestors in northern Sweden, in Ångermanland, had to mix bark into the bread because they were short of flour. Life in towns and cities was no easier. Overcrowding and a lack of health services, sanitation, and refuse disposal claimed lives every day. Well into the twentieth century, an ordinary Swedish working-class family with five children might have to live in one room and a kitchen, which doubled as a dining room and bedroom. Many people lodged with other families. Housing statistics from Stockholm show that in 1900, as many as 1,400 people could live in a building consisting of 200 one-room flats. In conditions like these it is little wonder that disease was rife. People had large numbers of children not only for lack of contraception, but also because of the risk that not many would survive for long.

As Vilhelm Moberg, our greatest author, observed when he wrote a history of the Swedish people: “Of all the wondrous adventures of the Swedish people, none is more remarkable and wonderful than this: that it survived all of them.”

But in one century, everything was changed. Sweden had the fastest economic and social development that its people had ever experienced, and one of the fastest the world had ever seen. Between 1850 and 1950 the average Swedish income multiplied eightfold, while population doubled. Infant mortality fell from 15 to 2 per cent, and average life expectancy rose an incredible 28 years. A poor peasant nation had become one of the world’s richest countries.

Many people abroad think that this was the triumph of the Swedish Social Democratic Party, which somehow found the perfect middle way, managing to tax, spend, and regulate Sweden into a more equitable distribution of wealth, without hurting its productive capacity. And so Sweden, a small country of nine million inhabitants in the north of Europe, became a source of inspiration for people around the world who believe in government-led development and distribution.

But there is something wrong with this interpretation. In 1950, when Sweden was known worldwide as the great success story, taxes in Sweden were lower and the public sector smaller than in the rest of Europe and the United States. It was not until then that Swedish politicians started levying taxes and disbursing handouts on a large scale, that is, redistributing the wealth that businesses and workers had already created. Sweden’s biggest social and economic successes took place when Sweden had a laissez-faire economy, and widely distributed wealth preceded the welfare state.

This is the story about how that happened. It is a story that must be learned by countries that want to be where Sweden is today, because if they are to accomplish that feat, they must do what Sweden did back then, not what an already-rich Sweden does now.

Read the rest here:
How Laissez-Faire Made Sweden Rich | Libertarianism.org


NATO and Russia watch one another closely in Eastern …

Posted: May 23, 2015 at 1:43 pm

Sweden scrambled fighter jets to intercept two Russian military planes that flew too close to Swedish airspace.

With Russia flexing its muscles, three of its Baltic neighbors, Estonia, Latvia and Lithuania, have asked NATO to permanently deploy ground troops as a deterrent.

Russian fighter jets are being watched closely by NATO as the country flexes its muscles in the air.

CBS News

On Europe’s eastern frontier, NATO F-16s and Eurofighters drill for something they’re doing more and more: intercepting Russian military aircraft flying too close for comfort to European airspace.

A cockpit video shows NATO jets shadowing Russian planes, which often try to stay invisible by turning off their transponders.


The Royal Air Force scrambled fighter jets to escort Russian bombers away from U.K. airspace, an encounter that one analyst described to Charlie …

We watched the NATO pilots practice from a military transport plane. But last year in the Baltic states, they did this for real more than 150 times, a nearly fourfold increase over 2013.

Follow this link:
NATO and Russia watch one another closely in Eastern …
