Category Archives: Extropy

DotNetNuke Skins – Home

Posted: February 19, 2017 at 11:33 am

DotNetNuke has traditionally been packaged with default skin packages pre-installed. The installation packages for those skins are available here.

Dark Knight Skin (from DNN 06.02.05): Dark Knight Regular Skin package, Dark Knight Mobile Skin package

The Minimal Extropy Skin & Container (from DNN 5.6.3): Minimal Extropy

The following Skins are currently available: DNN-Blue* DNN-Gray*

These are skins from the DNN 3.1.13 package: DNN-Green* DNN-Red* DNN-Yellow*

DNN Extropy Beta skin: This skin is only here for reference; it is currently not supported and not for use in a production environment. If you find any issues with this skin, please log them in the issue tracker (here, not in the DNN tracker). If we get enough feedback, we will release a "stable" version of this skin. DNN-Extropy Beta*

The skin with the vertical menu might not work in DNN versions above 4.4. You can replace the Treeview menu with the NAV skin object and the DNNTreeNavigationProvider. (read the skinning documentation for more info)

Optimized/Clean Default.css (Beta Version): This is a beta release of an optimized, cleaned-up version of default.css (contributors: Cuong Dang, Timo Breumelhof, and Salar Golestanian).

Feel free to report bugs and provide feedback to make this an official version for the next release. Default CSS

See the original post:

DotNetNuke Skins – Home


Exchange 2010 – www.extropy.com

Posted: November 27, 2016 at 9:50 am

In my experience, running Exchange 2010 in a volatile environment opens you up to the Exchange cluster behaving unexpectedly, or just plain failing. If you're running in DAC (Datacenter Activation Coordination) mode, then to gain its benefits you have to manually fail over and fail back sites to prevent services from staying down, or worse, going split-brain.

Because Exchange 2010 relies heavily on Microsoft FCS (Failover Clustering Service) and AD (Active Directory), there are many scenarios where these distributed decision-making functions can fail. When all the servers fail in the primary data center, the secondary data center takes over as it should, and when the primary data center comes back online, it does not automatically fail back; this is by design (per Microsoft). I have found that to fail services back, you must do two crucial things:

The sites seem to recover after a few minutes, but the changes are not immediately apparent, and the databases take a few minutes to re-mount. The reasons for these commands were not readily obvious to me, but I’ve come to the conclusion that the following conditions must be considered:

Also, the Microsoft documentation is decent (not great) on this, and is definitely worth reading: http://technet.microsoft.com/en-us/library/dd351049.aspx

A bit about this environment:

More here:

Exchange 2010 – http://www.extropy.com


Current Issue | Asimov’s Science Fiction

Posted: November 23, 2016 at 10:03 pm

October/November is our traditional slightly spooky issue, and the 2016 edition is no exception. The magazine is jam-packed with stories about ghosts, angels, demons, souls, curses, and a couple of aliens. Alexander Jablokov's bold new novella brings us a tale of death and danger, a woman with a rather unusual occupation, and The Forgotten Taste of Honey.

Sandra McDonald's cheerful tone belies the horror that lurks for The People in the Building; the souls of the damned are captured in Susan Palwick's poignant Lucite; death and another odd job play a part in Michael Libling's amusing and irreverent tale of Wretched the Romantic; Project Extropy uncovers new mysteries in Dominica Phetteplace's ongoing series; S. N. Dyer draws on history and folklore to explain what happens When Grandfather Returns; seeds of hurt and mistrust are sown in Rich Larson's Water Scorpions; new author Octavia Cade invites us to spend some time Eating Science With Ghosts; Will Ludwigsen examines the curse of The Leaning Lincoln; and Michael Blumlein's heartfelt novella asks us to Choose Poison, Choose Life.

Robert Silverberg's Reflections column dabbles in some Magical Thinking; James Patrick Kelly's On the Net prepares to Welcome Our Robot Overlords!; Norman Spinrad's On Books takes on Short Stories in a column that features the Nebula Awards Showcase anthologies as well as The Fredric Brown Megapack and Harlan Ellison's Can & Can'tankerous; plus we'll have an array of poetry and other features you're sure to enjoy. Get your copy now!

by Alexander Jablokov

Tromvi trudged up the hill from the harbor, where she had just packed the last of her trade goods into the hull of a ship heading to the east. What she had received in return already weighed on her horses' backs. She smiled to herself as she remembered the sea captain, caught between a reluctance to say goodbye and the need to be ready for the receding tide, being uncharacteristically sharp with his crew. READ MORE

by Sandra McDonald

At an office building on Tanner Boulevard, two intelligent elevators whisk workers up from the lobby toward their employment destinations. The people headed for the fifth floor greet each other every morning with nods. The people from the fourth floor sip from their brown coffee cups and read their smartphones. READ MORE

by Lisa Bellamy

Today they jostle among us until sundown, listen to our chatter, nudge each other, read the news over our shoulders; they window-shop READ MORE

by Sheila Williams

Welcome to our annual slightly spooky issue. The fall double issue is always long in the making. Throughout the year, we see stories that land a little outside Asimov's admittedly rather soft parameters. While we do publish one or two stories in each issue that could be called fantasy, surreal fiction, or slipstream, our focus is primarily on science fiction. Of course I get a lot of traditional science fiction story submissions, but I see a lot of uncanny submissions, too. READ MORE

by Robert Silverberg

Isaac Asimov, for whom this magazine was named and who was my predecessor as writer of this column, was a totally rational man with no belief whatever in matters supernatural. That didn't stop him from writing the occasional fantasy story or from editing a long series of anthologies with such titles as Devils, Ghosts, Spells, and Magical Wishes. READ MORE

by James Patrick Kelly

My friend John Kessel and I have had a longstanding disagreement about the future of artificial intelligence. Even though we have co-edited a couple of anthologies examining post-cyberpunk… READ MORE

by Norman Spinrad

I have been writing this column for close to four decades now, and, yet, to the best of my recollection, I have never reviewed a book of short stories. During my writing career, I have written and published something like twenty-five novels, but I have also probably written something close to one hundred short stories, if by short stories one means fiction of less than novel length, which is my definition here. READ MORE

by Erwin S. Strauss

October is a busy month. My picks range from coast to coast this time: CONtraflow, Archon, EerieCon, VCon, CapClave (where I'll be), ConClave, ConStellation, ValleyCon, MileHiCon, ICon and NecronomiCon. Whew! Plan now for social weekends with your favorite SF authors, editors, artists, and fellow fans. READ MORE

Link:

Current Issue | Asimov’s Science Fiction


Executive Team | www.extropy.com

Posted: October 1, 2016 at 1:48 am

Nicholas Montera – Chief Executive Officer (CEO) and Managing Partner

Nicholas is one of the founding members of Extropy and is a leading information technology expert with extensive experience involving large-scale IT initiatives. In his role as CEO, Nicholas has developed Extropy's strategic approach and focused its efforts on providing world-class professional services and solutions. His vision is for Extropy to recruit and cultivate the best talent in the industry, expanding Extropy's "tribal knowledge" base to deliver innovative solutions founded in deep experience and due diligence. As Chief Executive Officer of Extropy, he provides leadership to the executive management team, ensuring that the company is focused on the mission and that our actions are in alignment with our core values. Nicholas has been deeply involved throughout his career in information technology planning, sales, and engagement management. He came to form Extropy after successful careers with both British Telecom and Avaya. Nicholas attended the Florida Institute of Technology, studying Mechanical Engineering and Business Administration. He also holds the industry's highest networking certification, Cisco Certified Internetwork Expert (CCIE #11811).

Brett Coover – Chief Technology Officer (CTO) and Managing Partner

Brett is one of the founding members of Extropy and is a thought leader in solutions across many technologies and industries. As CTO, Brett focuses Extropy's "tribal knowledge" to define and refine our technology and business solutions, always staying ahead of the curve. His vision is centered on creating and delivering the most innovative technology solutions available and developing them into long-term growth opportunities. Brett has extensive experience in delivering innovative solutions to Fortune 500 customers, transforming their businesses as a trusted partner. Additionally, Brett has experience in developing and operating service provider organizations. Brett's educational experience spans the sciences, having attended Clarkson and the University at Buffalo, studying physics, information, and computer sciences. He also holds numerous certifications, including the industry's highest networking certification, Cisco Certified Internetwork Expert (CCIE #11918), and the most respected security certification, Certified Information Systems Security Professional (CISSP).

Do you know anyone at Extropy?

Go here to read the rest:

Executive Team | http://www.extropy.com


Extropy Institute Resources

Posted: July 29, 2016 at 3:15 am

Extropy Institute Publications: Extropy: The Journal of Transhumanist Thought, published from 1989 to late 1996 in high-gloss print and distributed at major bookstores.

Cycles

Foundation for the Study of Cycles, Inc.

2600 Michelson Drive, Suite 1570 Irvine CA 92715 714/261-7261 714/261-1708 (fax)

Foresight

Camford Publishing (Colin Blackmon)

Emerald 60/62 Toller Lane Bradford West Yorkshire England BD8 9BY Tel: +44 1274 777700 Fax: +44 1274 785201 http://www.emeraldinsight.com/rpsv/fs.htm

Future Survey: A Monthly Abstract of Books, Articles, and Reports Concerning Forecasts, Trends, and Ideas about the Future

World Futures Society (Michael Marien)

7910 Woodmont Avenue, Suite 450 Bethesda MD 20814 301/656-8274 http://wfs.org/fsurv.htm

Futurecasts

Dan Blatt

http://www.futurecasts.com/Default1.html

Futures: The Journal of Forecasting, Planning and Policy

Elsevier (Zia Sardar)

Elsevier Science 655 Avenue of the Americas New York, NY 10010-5107 http://www.elsevier.nl/inca/publications/store/3/0/4/2/2/

Futures Bulletin

World Futures Studies Federation (Christopher Jones)

WFSF Secretariat PO Box 82488 Phoenix, Arizona 85071-2488 Street Address: 325 East Broadway Tempe, Arizona 85282 1-602-923-3457 phone/fax http://www.worldfutures.org/

Futures Research Quarterly

World Futures Society (Timothy Mack)

7910 Woodmont Avenue, Suite 450 Bethesda MD 20814 301/656-8274 http://www.wfs.org/frq.htm

Futuristics

Minnesota Futurists (Earl Joseph)

Anticipatory Sciences Inc. 365 Summit Ave, St. Paul MN 55102 http://www.mnfuturists.org/PDF/futurics.pdf

The Futurist

World Futures Society (Cynthia Wagner)

7910 Woodmont Avenue, Suite 450 Bethesda MD 20814 301/656-8274 http://www.wfs.org/wfs/futurist.htm

International Journal of Forecasting

International Institute of Forecasters

The Management School Lancaster University Lancaster, LA1 4YX, England Phone: 44 (0)1524.593879 Fax: 44 (0)1524.844885 http://www.ms.ic.ac.uk/iif/index.htm

The Journal of the Evolutionary Study of the Future

The Society for the Evolutionary Study of the Future (Larry Vandervert)

Dr. Larry R. Vandervert P.O. Box 9804 Spokane, WA 99209-9804 http://www.futureandevolution.com/

Journal of Forecasting

Wiley Interscience (Scott Armstrong)

John Wiley & Sons, Inc. 605 Third Avenue New York, New York 10158-0012 Phone: 212-850-6645 Fax: 212-850-6021 http://www.interscience.wiley.com/jpages/0277-6693/

The Manoa Journal of Half-Baked Ideas

Hawaii Research Center for Futures Studies (Jim Dator)

Porteus #720 University of Hawaii 2424 Maile Way Honolulu, HI 96822 808/965-2888 808/956-2889 (fax) http://www.soc.hawaii.edu/~future/titlepage.html

On the Horizon The Environmental Scanning Newsletter for Leaders in Education

Emerald Publishing (Tom Abeles)

Emerald 60/62 Toller Lane Bradford West Yorkshire England BD8 9BY Tel: +44 1274 777700 Fax: +44 1274 785201 http://www.emeraldinsight.com/rpsv/oth.htm

Papers de Prospectiva

Centre UNESCO de Catalunya (Felix Marti)

Mollorca, 285 08037 Barcelona SPAIN (34) 3/201-1716 (34) 3/547-5851 http://www.unescocat.org/ccp/pp/indexang.html

Technological Forecasting and Social Change

Elsevier Science, Inc. (Harold Linstone)

655 Avenue of the Americas New York NY 10010 http://www.elsevier.nl/inca/publications/store/5/0/5/7/4/0/

World Futures: The Journal of General Evolution

Gordon and Breach Science Publishers (Ervin Laszlo)

P.O. Box 90 Reading, Berkshire RG1 8JL UNITED KINGDOM http://www.gbhap.com/journals/153/153-top.htm

YES A Journal of Positive Futures

Thanks to the University of Houston’s Studies of the Future Program.

Positive Futures Network

P.O. Box 11470 Bainbridge Island WA 98110 206/842-0216 http://www.futurenet.org/

Follow this link:

Extropy Institute Resources


Extropy Institute Mission

Posted: June 17, 2016 at 4:59 am

Philosophies of life rooted in centuries-old traditions contain much wisdom concerning personal, organizational, and social living. Many of us also find shortcomings in those traditions. How could they not reach some mistaken conclusions when they arose in pre-scientific times? At the same time, ancient philosophies of life have little or nothing to say about fundamental issues confronting us as advanced technologies begin to enable us to change our identity as individuals and as humans and as economic, cultural, and political forces change global relationships.

The Principles of Extropy first took shape in the late 1980s to outline an alternative lens through which to view the emerging and unprecedented opportunities, challenges, and dangers. The goal was and is to use current scientific understanding along with critical and creative thinking to define a small set of principles or values that could help make sense of the confusing but potentially liberating and existentially enriching capabilities opening up to humanity.

The Principles of Extropy do not specify particular beliefs, technologies, or policies. The Principles do not pretend to be a complete philosophy of life. The world does not need another totalistic dogma. The Principles of Extropy do consist of a handful of principles (or values or perspectives) that codify proactive, life-affirming and life-promoting ideals. Individuals who cannot comfortably adopt traditional value systems often find the Principles of Extropy useful as postulates to guide, inspire, and generate innovative thinking about existing and emerging fundamental personal, organizational, and social issues.

The Principles are intended to be enduring, underlying ideals and standards. At the same time, both in content and by being revised, the Principles do not claim to be eternal truths or certain truths. I invite other independent thinkers who share the agenda of acting as change agents for fostering better futures to consider the Principles of Extropy as an evolving framework of attitudes, values, and standards and as a shared vocabulary to make sense of our unconventional, secular, and life-promoting responses to the changing human condition. I also invite feedback to further refine these Principles.

Extropy: The extent of a living or organizational system's intelligence, functional order, vitality, and capacity and drive for improvement.

Extropic: Actions, qualities, or outcomes that embody or further extropy.

A Note on the Use of “Extropy”

For the sake of brevity, I will often write something like "extropy seeks…" or "extropy questions…" You can take this to mean "in so far as we act in accordance with these principles, we seek/question/study…" Extropy is not meant as a real entity or force, but only as a metaphor representing all that contributes to our flourishing. Similarly, when I use "we," you should take this to refer not to any group but to anyone who agrees with what they are reading. Rather than assuming any reader to be in full agreement with every one of these principles, this usage instead imagines a hypothetical person who has integrated the principles into their life and actions. Each reader is, of course, at liberty to reject, modify, or affirm each principle separately. What this tentative, conjectural approach to the Principles of Extropy loses in terms of compelling emotive power, it gains in terms of reasonableness and openness to innovation and improvement.

Read more here:

Extropy Institute Mission


Transhumanism's Extropy Institute – Transhumanism for a …

Posted: June 15, 2016 at 3:28 pm

Extropy Institute continues to support critical research and development of sciences and technologies of human enhancement. For further information on our 2004 Vital Progress Summit please follow this link: About the VP Summit

In late 2006, Extropy Institute closed. ExI’s Strategic Plan explains the details of this decision and the potential for the future of ideas that were generated during ExI’s lifetime.

The philosophy of Extropy continues on into the future.

This website is the "Library of Transhumanism, Extropy and the Future". The Extropy e-mail list continues to be very active and is the main venue for transhumanists and one of the best places on the Internet to meet transhumanists for challenging and creative discussions about the future.

Welcome to the website of Extropy Institute, the original force behind the philosophy and global cultural movement of transhumanism. We welcome you to join our efforts in promoting The Proactionary Principle.

The world needs critical thinkers now! What is Extropy Institute? Extropy Institute is a think tank and ideas market for the future of social change brought about by consequential technologies. Our Board of Directors, Advisors, and Proactive Supporters bring together diverse ideas about the future. Our approach is proactive, our focus critical, and our ideas principled in addressing the social concerns and questions that will make or break the future of humanity. Extropy Institute has been pioneering critical and creative thinking about the future for the past 17 years.

The Mission of ExI has been to serve its members by ensuring a reputable, open environment for discussing the impacts of emerging technologies and for collaborating with diversely-skilled experts in exploring the future of humanity.

As a philosophical and cultural organization, our goals include being an international resource for strategic thinking about the future. Specific outcomes of our vision over the years have been recognized through publications, conferences, virtual summits, university courses, the extropy-chat email list, and members' projects, working toward designing our future. The outcomes are located on our resources page.

Support the ideas vital to our future by participating in the global community, becoming proactive, and supporting the Proactionary Principle.

The current project: ExI Project No. 1 – PROACTIONARY PRINCIPLE. As human lives and the global environment become ever more interconnected with technology, we become increasingly responsible for making wise decisions about how to use it. We need a balanced opinion on how to apply technology to human needs. We should not reject the products of applied science; neither should we implement powerful new technologies without foresight and proactive preparation. Above all, we must not tackle the decisions of the future with the cognitive habits of the past. We need new, smarter ways to evaluate the opportunities and dangers issuing from nanotechnology, genetics, machine intelligence, climate engineering, or neurological modification. The Proactionary Principle (ProP) is designed explicitly for this purpose.

The Mission of ExI in its transformational change is to serve its members by developing a core group to encourage and support the furtherance of the Proactionary Principle.

Vision: Our core group uses the most advanced decision-making and forecasting methods to promote critical and creative thinking about emerging technologies. We advise the public and private sectors on policies and initiatives to better manage risks and maximize benefits and opportunities arising from emerging technologies. Our passion is helping others to improve decision-making about these technologies, especially those presenting challenges without precedent, sometimes even affecting the human condition itself.

Read more from the original source:

Transhumanism’s Extropy Institute – Transhumanism for a …


God Is the Machine | WIRED

Posted: June 12, 2016 at 12:40 am


IN THE BEGINNING THERE WAS 0. AND THEN THERE WAS 1. A MIND-BENDING MEDITATION ON THE TRANSCENDENT POWER OF DIGITAL COMPUTATION.

At today's rates of compression, you could download the entire 3 billion digits of your DNA onto about four CDs. That 3-gigabyte genome sequence represents the prime coding information of a human body: your life as numbers. Biology, that pulsating mass of plant and animal flesh, is conceived by science today as an information process. As computers keep shrinking, we can imagine our complex bodies being numerically condensed to the size of two tiny cells. These micro-memory devices are called the egg and sperm. They are packed with information.


That life might be information, as biologists propose, is far more intuitive than the corresponding idea that hard matter is information as well. When we bang a knee against a table leg, it sure doesn’t feel like we knocked into information. But that’s the idea many physicists are formulating.

The spooky nature of material things is not new. Once science examined matter below the level of fleeting quarks and muons, it knew the world was incorporeal. What could be less substantial than a realm built out of waves of quantum probabilities? And what could be weirder? Digital physics is both. It suggests that those strange and insubstantial quantum wavicles, along with everything else in the universe, are themselves made of nothing but 1s and 0s. The physical world itself is digital.

The scientist John Archibald Wheeler (coiner of the term "black hole") was onto this in the '80s. He claimed that, fundamentally, atoms are made up of bits of information. As he put it in a 1989 lecture, "Its are from bits." He elaborated: "Every it (every particle, every field of force, even the space-time continuum itself) derives its function, its meaning, its very existence entirely from binary choices, bits. What we call reality arises in the last analysis from the posing of yes/no questions."

To get a sense of the challenge of describing physics as a software program, picture three atoms: two hydrogen and one oxygen. Put on the magic glasses of digital physics and watch as the three atoms bind together to form a water molecule. As they merge, each seems to be calculating the optimal angle and distance at which to attach itself to the others. The oxygen atom uses yes/no decisions to evaluate all possible courses toward the hydrogen atom, then usually selects the optimal 104.45 degrees by moving toward the other hydrogen at that very angle. Every chemical bond is thus calculated.

If this sounds like a simulation of physics, then you understand perfectly, because in a world made up of bits, physics is exactly the same as a simulation of physics. There’s no difference in kind, just in degree of exactness. In the movie The Matrix, simulations are so good you can’t tell if you’re in one. In a universe run on bits, everything is a simulation.

An ultimate simulation needs an ultimate computer, and the new science of digitalism says that the universe itself is the ultimate computer, in fact the only computer. Further, it says, all the computation of the human world, especially our puny little PCs, merely piggybacks on cycles of the great computer. Weaving together the esoteric teachings of quantum physics with the latest theories in computer science, pioneering digital thinkers are outlining a way of understanding all of physics as a form of computation.

From this perspective, computation seems almost a theological process. It takes as its fodder the primeval choice between yes or no, the fundamental state of 1 or 0. After stripping away all externalities, all material embellishments, what remains is the purest state of existence: here/not here. Am/not am. In the Old Testament, when Moses asks the Creator, “Who are you?” the being says, in effect, “Am.” One bit. One almighty bit. Yes. One. Exist. It is the simplest statement possible.

All creation, from this perch, is made from this irreducible foundation. Every mountain, every star, the smallest salamander or woodland tick, each thought in our mind, each flight of a ball is but a web of elemental yes/nos woven together. If the theory of digital physics holds up, movement (f = ma), energy (E = mc²), gravity, dark matter, and antimatter can all be explained by elaborate programs of 1/0 decisions. Bits can be seen as a digital version of the "atoms" of classical Greece: the tiniest constituent of existence. But these new digital atoms are the basis not only of matter, as the Greeks thought, but of energy, motion, mind, and life.

From this perspective, computation, which juggles and manipulates these primal bits, is a silent reckoning that uses a small amount of energy to rearrange symbols. And its result is a signal that makes a difference, a difference that can be felt as a bruised knee. The input of computation is energy and information; the output is order, structure, extropy.

Our awakening to the true power of computation rests on two suspicions. The first is that computation can describe all things. To date, computer scientists have been able to encapsulate every logical argument, scientific equation, and literary work that we know about into the basic notation of computation. Now, with the advent of digital signal processing, we can capture video, music, and art in the same form. Even emotion is not immune. Researchers Cynthia Breazeal at MIT and Charles Guerin and Albert Mehrabian in Quebec have built Kismet and EMIR (Emotional Model for Intelligent Response), two systems that exhibit primitive feelings.

The second supposition is that all things can compute. We have begun to see that almost any kind of material can serve as a computer. Human brains, which are mostly water, compute fairly well. (The first “calculators” were clerical workers figuring mathematical tables by hand.) So can sticks and strings. In 1975, as an undergraduate student, engineer Danny Hillis constructed a digital computer out of skinny Tinkertoys. In 2000, Hillis designed a digital computer made of only steel and tungsten that is indirectly powered by human muscle. This slow-moving device turns a clock intended to tick for 10,000 years. He hasn’t made a computer with pipes and pumps, but, he says, he could. Recently, scientists have used both quantum particles and minute strands of DNA to perform computations.

A third postulate ties the first two together into a remarkable new view: All computation is one.

In 1937, Alan Turing, Alonzo Church, and Emil Post worked out the logical underpinnings of useful computers. They called the most basic loop, which has become the foundation of all working computers, a finite-state machine. Based on their analysis of the finite-state machine, Turing and Church proved a theorem now bearing their names. Their conjecture states that any computation executed by one finite-state machine writing on an infinite tape (known later as a Turing machine) can be done by any other finite-state machine on an infinite tape, no matter what its configuration. In other words, all computation is equivalent. They called this universal computation.
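The finite-state-machine-plus-tape model is simple enough to sketch directly. The following Python toy is my own illustration, not anything from Turing's papers: a finite-state controller reads and writes a tape, and this particular (hypothetical) rule table increments a binary number.

```python
# A minimal Turing machine: a finite-state controller plus an unbounded tape.
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=10_000):
    """rules maps (state, symbol) -> (write, move, next_state); halts on 'halt'."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")  # '_' is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example rule table: increment a binary number.
# Walk right to the end of the input, then carry leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),   # 0 + carry -> 1, stop carrying
    ("carry", "_"): ("1", "L", "done"),   # overflow: write a new leading 1
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(rules, "1011"))  # 1011 (11) + 1 -> 1100 (12)
```

The universality claim is that any other rule table, however elaborate, could be run by this same loop; the loop itself never changes, only the table does.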

When John von Neumann and others jump-started the first electronic computers in the 1950s, they immediately began extending the laws of computation away from math proofs and into the natural world. They tentatively applied the laws of loops and cybernetics to ecology, culture, families, weather, and biological systems. Evolution and learning, they declared, were types of computation. Nature computed.

If nature computed, why not the entire universe? The first to put down on paper the outrageous idea of a universe-wide computer was science fiction writer Isaac Asimov. In his 1956 short story “The Last Question,” humans create a computer smart enough to bootstrap new computers smarter than itself. These analytical engines recursively grow super smarter and super bigger until they act as a single giant computer filling the universe. At each stage of development, humans ask the mighty machine if it knows how to reverse entropy. Each time it answers: “Insufficient data for a meaningful reply.” The story ends when human minds merge into the ultimate computer mind, which takes over the entire mass and energy of the universe. Then the universal computer figures out how to reverse entropy and create a universe.

Such a wacky idea was primed to be spoofed, and that’s what Douglas Adams did when he wrote The Hitchhiker’s Guide to the Galaxy. In Adams’ story the earth is a computer, and to the world’s last question it gives the answer: 42.

Few ideas are so preposterous that no one at all takes them seriously, and this idea that God, or at least the universe, might be the ultimate large-scale computer is actually less preposterous than most. The first scientist to consider it, minus the whimsy or irony, was Konrad Zuse, a little-known German who conceived of programmable digital computers 10 years before von Neumann and friends. In 1967, Zuse outlined his idea that the universe ran on a grid of cellular automata, or CA. Simultaneously, Ed Fredkin was considering the same idea. Self-educated, opinionated, and independently wealthy, Fredkin hung around early computer scientists exploring CAs. In the 1960s, he began to wonder if he could use computation as the basis for an understanding of physics.

Fredkin didn't make much headway until 1970, when mathematician John Conway unveiled the Game of Life, a particularly robust version of cellular automata. The Game of Life, as its name suggests, was a simple computational model that mimicked the growth and evolution of living things. Fredkin began to play with other CAs to see if they could mimic physics. You needed very large ones, but they seemed to scale up nicely, so he was soon fantasizing huge, really huge, CAs that would extend to include everything. Maybe the universe itself was nothing but a great CA.
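Conway's rules are compact enough to state in full: a live cell with two or three live neighbors survives, and a dead cell with exactly three live neighbors is born. A minimal Python sketch (my illustration, not Conway's or Fredkin's code) shows how little machinery the Game of Life needs:

```python
from collections import Counter

# Conway's Game of Life on an unbounded grid, stored as a set of live (x, y) cells.
def step(live_cells):
    """Advance the automaton by one generation."""
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# The classic "glider": after four generations the same five-cell shape
# reappears shifted one cell diagonally, so it travels across the grid forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
print(gen == {(x + 1, y + 1) for x, y in glider})  # True
```

That a rule this small supports self-propagating patterns (and, as was later proved, universal computation) is exactly what made the Game of Life so suggestive as a physics metaphor.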

The more Fredkin investigated the metaphor, the more real it looked to him. By the mid-’80s, he was saying things like, “I’ve come to the conclusion that the most concrete thing in the world is information.”

Many of his colleagues felt that if Fredkin had left his observations at the level of metaphor “the universe behaves as if it was a computer” he would have been more famous. As it is, Fredkin is not as well known as his colleague Marvin Minsky, who shares some of his views. Fredkin insisted, flouting moderation, that the universe is a large field of cellular automata, not merely like one, and that everything we see and feel is information.

Many others besides Fredkin recognized the beauty of CAs as a model for investigating the real world. One of the early explorers was the prodigy Stephen Wolfram. Wolfram took the lead in systematically investigating possible CA structures in the early 1980s. By programmatically tweaking the rules in tens of thousands of alterations, then running them out and visually inspecting them, he acquired a sense of what was possible. He was able to generate patterns identical to those seen in seashells, animal skins, leaves, and sea creatures. His simple rules could generate a wildly complicated beauty, just as life could. Wolfram was working from the same inspiration that Fredkin did: The universe seems to behave like a vast cellular automaton.

Even the infinitesimally small and nutty realm of the quantum can’t escape this sort of binary logic. We describe a quantum-level particle’s existence as a continuous field of probabilities, which seems to blur the sharp distinction of is/isn’t. Yet this uncertainty resolves as soon as information makes a difference (as in, as soon as it’s measured). At that moment, all other possibilities collapse to leave only the single yes/no state. Indeed, the very term “quantum” suggests an indefinite realm constantly resolving into discrete increments, precise yes/no states.

For years, Wolfram explored the notion of universal computation in earnest (and in secret) while he built a business selling his popular software Mathematica. So convinced was he of the benefits of looking at the world as a gigantic Turing machine that he penned a 1,200-page magnum opus he modestly calls A New Kind of Science. Self-published in 2002, the book reinterprets nearly every field of science in terms of computation: “All processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computation.” (See “The Man Who Cracked the Code to Everything,” Wired 10.6.)

Wolfram’s key advance, however, is more subtly brilliant, and depends on the old Turing-Church hypothesis: All finite-state machines are equivalent. One computer can do anything another can do. This is why your Mac can, with proper software, pretend to be a PC or, with sufficient memory, a slow supercomputer. Wolfram demonstrates that the outputs of this universal computation are also computationally equivalent. Your brain and the physics of a cup being filled with water are equivalent, he says: for your mind to compute a thought and the universe to compute water particles falling, both require the same universal process.

If, as Fredkin and Wolfram suggest, all movement, all actions, all nouns, all functions, all states, all we see, hear, measure, and feel are various elaborate cathedrals built out of this single ubiquitous process, then the foundations of our knowledge are in for a galactic-scale revisioning in the coming decades. Already, the dream of devising a computational explanation for gravity, the speed of light, muons, Higgs bosons, momentum, and molecules has become the holy grail of theoretical physics. It would be a unified explanation of physics (digital physics), relativity (digital relativity), evolution (digital evolution and life), quantum mechanics, and computation itself, and at the bottom of it all would be squirming piles of the universal elements: loops of yes/no bits. Ed Fredkin has been busy honing his idea of digital physics and is completing a book called Digital Mechanics. Others, including Oxford theoretical physicist David Deutsch, are working on the same problem. Deutsch wants to go beyond physics and weave together four golden threads epistemology, physics, evolutionary theory, and quantum computing to produce what is unashamedly referred to by researchers as the Theory of Everything. Based on the primitives of quantum computation, it would swallow all other theories.

Any large computer these days can emulate a computer of some other design. You have Dell computers running Amigas. The Amigas, could, if anyone wanted them to, run Commodores. There is no end to how many nested worlds can be built. So imagine what a universal computer might do. If you had a universally equivalent engine, you could pop it in anywhere, including inside the inside of something else. And if you had a universe-sized computer, it could run all kinds of recursive worlds; it could, for instance, simulate an entire galaxy.

If smaller worlds have smaller worlds running within them, howev
er, there has to be a platform that runs the first among them. If the universe is a computer, where is it running? Fredkin says that all this work happens on the “Other.” The Other, he says, could be another universe, another dimension, another something. It’s just not in this universe, and so he doesn’t care too much about it. In other words, he punts. David Deutsch has a different theory. “The universality of computation is the most profound thing in the universe,” he says. Since computation is absolutely independent of the “hardware” it runs on, studying it can tell us nothing about the nature or existence of that platform. Deutsch concludes it does not exist: “The universe is not a program running somewhere else. It is a universal computer, and there is nothing outside of it.”

Strangely, nearly every mapper of this new digitalism foresees human-made computers taking over the natural universal computer. This is in part because they see nothing to stop the rapid expansion of computation, and in part because well why not? But if the entire universe is computing, why build our own expensive machines, especially when chip fabs cost several billion dollars to construct? Tommaso Toffoli, a quantum computer researcher, puts it best: “In a sense, nature has been continually computing the ‘next state’ of the universe for billions of years; all we have to do and, actually, all we can do is ‘hitch a ride’ on this huge, ongoing Great Computation.”

In a June 2002 article published in the Physical Review Letters, MIT professor Seth Lloyd posed this question: If the universe was a computer, how powerful would it be? By analyzing the computing potential of quantum particles, he calculated the upper limit of how much computing power the entire universe (as we know it) has contained since the beginning of time. It’s a large number: 10^120 logical operations. There are two interpretations of this number. One is that it represents the performance “specs” of the ultimate computer. The other is that it’s the amount required to simulate the universe on a quantum computer. Both statements illustrate the tautological nature of a digital universe: Every computer is the computer.

Continuing in this vein, Lloyd estimated the total amount of computation that has been accomplished by all human-made computers that have ever run. He came up with 10^31 ops. (Because of the fantastic doubling of Moore’s law, over half of this total was produced in the past two years!) He then tallied up the total energy-matter available in the known universe and divided that by the total energy-matter of human computers expanding at the rate of Moore’s law. “We need 300 Moore’s law doublings, or 600 years at one doubling every two years,” he figures, “before all the available energy in the universe is taken up in computing. Of course, if one takes the perspective that the universe is already essentially performing a computation, then we don’t have to wait at all. In this case, we may just have to wait for 600 years until the universe is running Windows or Linux.”

The relative nearness of 600 years says more about exponential increases than it does about computers. Neither Lloyd nor any other scientist mentioned here realistically expects a second universal computer in 600 years. But what Lloyd’s calculation proves is that over the long term, there is nothing theoretical to stop the expansion of computers. “In the end, the whole of space and its contents will be the computer. The universe will in the end consist, literally, of intelligent thought processes,” David Deutsch proclaims in Fabric of Reality. These assertions echo those of the physicist Freeman Dyson, who also sees minds amplified by computers expanding into the cosmos “infinite in all directions.”

Yet while there is no theoretical hitch to an ever-expanding computer matrix that may in the end resemble Asimov’s universal machine, no one wants to see themselves as someone else’s program running on someone else’s computer. Put that way, life seems a bit secondhand.

Yet the notion that our existence is derived, like a string of bits, is an old and familiar one. Central to the evolution of Western civilization from its early Hellenistic roots has been the notion of logic, abstraction, and disembodied information. The saintly Christian guru John writes from Greece in the first century: “In the beginning was the Word, and the Word was with God, and the Word was God.” Charles Babbage, credited with constructing the first computer in 1832, saw the world as one gigantic instantiation of a calculating machine, hammered out of brass by God. He argued that in this heavenly computer universe, miracles were accomplished by divinely altering the rules of computation. Even miracles were logical bits, manipulated by God.

There’s still confusion. Is God the Word itself, the Ultimate Software and Source Code, or is God the Ultimate Programmer? Or is God the necessary Other, the off-universe platform where this universe is computed?

But each of these three possibilities has at its root the mystical doctrine of universal computation. Somehow, according to digitalism, we are linked to one another, all beings alive and inert, because we share, as John Wheeler said, “at the bottom at a very deep bottom, in most instances an immaterial source.” This commonality, spoken of by mystics of many beliefs in different terms, also has a scientific name: computation. Bits minute logical atoms, spiritual in form amass into quantum quarks and gravity waves, raw thoughts and rapid motions.

The computation of these bits is a precise, definable, yet invisible process that is immaterial yet produces matter.

“Computation is a process that is perhaps the process,” says Danny Hillis, whose new book, The Pattern on the Stone, explains the formidable nature of computation. “It has an almost mystical character because it seems to have some deep relationship to the underlying order of the universe. Exactly what that relationship is, we cannot say. At least for now.”

Probably the trippiest science book ever written is The Physics of Immortality, by Frank Tipler. If this book were labeled standard science fiction, no one would notice, but Tipler is a reputable physicist and Tulane University professor who writes papers for the International Journal of Theoretical Physics. In Immortality, he uses current understandings of cosmology and computation to declare that all living beings will be bodily resurrected after the universe dies. His argument runs roughly as follows: As the universe collapses upon itself in the last minutes of time, the final space-time singularity creates (just once) infinite energy and computing capacity. In other words, as the giant universal computer keeps shrinking in size, its power increases to the point at which it can simulate precisely the entire historical universe, past, present, and possible. He calls this state the Omega Point. It is a computational space that can resurrect “from the dead” all the minds and bodies that have ever lived. The weird thing is that Tipler was an atheist when he developed this theory and discounted as mere “coincidence” the parallels between his ideas and the Christian doctrine of Heavenly Resurrection. Since then, he says, science has convinced him that the two may be identical.

While not everyone goes along with Tipler’s eschatological speculations, theorists like Deutsch endorse his physics. An Omega Computer is possible and probably likely, they say.

I asked Tipler which side of the Fredkin gap he is on. Does he go along with the weak version of the ultimate computer, the metaphorical one that says the universe only seems like a computer? Or does he embrace Fredkin’s strong version, that the universe is a 12-billion-year-old computer and we are the killer app? “I regard the two statements as equivalent,” he answered. “If the universe in all ways acts as if it was a computer, then what meaning could there be in saying that it is not a computer?”

Only hubris.


God Is the Machine | WIRED

Posted: June 10, 2016 at 12:46 pm

IN THE BEGINNING THERE WAS 0. AND THEN THERE WAS 1. A MIND-BENDING MEDITATION ON THE TRANSCENDENT POWER OF DIGITAL COMPUTATION.

At today’s rates of compression, you could download the entire 3 billion digits of your DNA onto about four CDs. That 3-gigabyte genome sequence represents the prime coding information of a human body: your life as numbers. Biology, that pulsating mass of plant and animal flesh, is conceived by science today as an information process. As computers keep shrinking, we can imagine our complex bodies being numerically condensed to the size of two tiny cells. These micro-memory devices are called the egg and sperm. They are packed with information.

That life might be information, as biologists propose, is far more intuitive than the corresponding idea that hard matter is information as well. When we bang a knee against a table leg, it sure doesn’t feel like we knocked into information. But that’s the idea many physicists are formulating.

The spooky nature of material things is not new. Once science examined matter below the level of fleeting quarks and muons, it knew the world was incorporeal. What could be less substantial than a realm built out of waves of quantum probabilities? And what could be weirder? Digital physics is both. It suggests that those strange and insubstantial quantum wavicles, along with everything else in the universe, are themselves made of nothing but 1s and 0s. The physical world itself is digital.

The scientist John Archibald Wheeler (coiner of the term “black hole”) was onto this in the ’80s. He claimed that, fundamentally, atoms are made up of bits of information. As he put it in a 1989 lecture, “Its are from bits.” He elaborated: “Every it (every particle, every field of force, even the space-time continuum itself) derives its function, its meaning, its very existence entirely from binary choices, bits. What we call reality arises in the last analysis from the posing of yes/no questions.”

To get a sense of the challenge of describing physics as a software program, picture three atoms: two hydrogen and one oxygen. Put on the magic glasses of digital physics and watch as the three atoms bind together to form a water molecule. As they merge, each seems to be calculating the optimal angle and distance at which to attach itself to the others. The oxygen atom uses yes/no decisions to evaluate all possible courses toward the hydrogen atom, then usually selects the optimal 104.45 degrees by moving toward the other hydrogen at that very angle. Every chemical bond is thus calculated.

If this sounds like a simulation of physics, then you understand perfectly, because in a world made up of bits, physics is exactly the same as a simulation of physics. There’s no difference in kind, just in degree of exactness. In the movie The Matrix, simulations are so good you can’t tell if you’re in one. In a universe run on bits, everything is a simulation.

An ultimate simulation needs an ultimate computer, and the new science of digitalism says that the universe itself is the ultimate computer, actually the only computer. Further, it says, all the computation of the human world, especially our puny little PCs, merely piggybacks on cycles of the great computer. Weaving together the esoteric teachings of quantum physics with the latest theories in computer science, pioneering digital thinkers are outlining a way of understanding all of physics as a form of computation.

From this perspective, computation seems almost a theological process. It takes as its fodder the primeval choice between yes or no, the fundamental state of 1 or 0. After stripping away all externalities, all material embellishments, what remains is the purest state of existence: here/not here. Am/not am. In the Old Testament, when Moses asks the Creator, “Who are you?” the being says, in effect, “Am.” One bit. One almighty bit. Yes. One. Exist. It is the simplest statement possible.

All creation, from this perch, is made from this irreducible foundation. Every mountain, every star, the smallest salamander or woodland tick, each thought in our mind, each flight of a ball is but a web of elemental yes/nos woven together. If the theory of digital physics holds up, movement (F = ma), energy (E = mc²), gravity, dark matter, and antimatter can all be explained by elaborate programs of 1/0 decisions. Bits can be seen as a digital version of the “atoms” of classical Greece: the tiniest constituent of existence. But these new digital atoms are the basis not only of matter, as the Greeks thought, but of energy, motion, mind, and life.

From this perspective, computation, which juggles and manipulates these primal bits, is a silent reckoning that uses a small amount of energy to rearrange symbols. And its result is a signal that makes a difference, a difference that can be felt as a bruised knee. The input of computation is energy and information; the output is order, structure, extropy.

Our awakening to the true power of computation rests on two suspicions. The first is that computation can describe all things. To date, computer scientists have been able to encapsulate every logical argument, scientific equation, and literary work that we know about into the basic notation of computation. Now, with the advent of digital signal processing, we can capture video, music, and art in the same form. Even emotion is not immune. Researchers Cynthia Breazeal at MIT and Charles Guerin and Albert Mehrabian in Quebec have built Kismet and EMIR (Emotional Model for Intelligent Response), two systems that exhibit primitive feelings.

The second supposition is that all things can compute. We have begun to see that almost any kind of material can serve as a computer. Human brains, which are mostly water, compute fairly well. (The first “calculators” were clerical workers figuring mathematical tables by hand.) So can sticks and strings. In 1975, as an undergraduate student, engineer Danny Hillis constructed a digital computer out of skinny Tinkertoys. In 2000, Hillis designed a digital computer made of only steel and tungsten that is indirectly powered by human muscle. This slow-moving device turns a clock intended to tick for 10,000 years. He hasn’t made a computer with pipes and pumps, but, he says, he could. Recently, scientists have used both quantum particles and minute strands of DNA to perform computations.

A third postulate ties the first two together into a remarkable new view: All computation is one.

In 1937, Alan Turing, Alonzo Church, and Emil Post worked out the logical underpinnings of useful computers. They called the most basic loop, which has become the foundation of all working computers, a finite-state machine. Based on their analysis of the finite-state machine, Turing and Church proved a theorem now bearing their names. Their conjecture states that any computation executed by one finite-state machine, writing on an infinite tape (known later as a Turing machine), can be done by any other finite-state machine on an infinite tape, no matter what its configuration. In other words, all computation is equivalent. They called this universal computation.
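The machine Turing described can be sketched in a few lines: a finite rule table driving a read/write head over an unbounded tape. The simulator below is a minimal illustrative sketch, and the bit-flipping rule table is a hypothetical example, not any machine from the literature.

```python
# A minimal Turing machine: a finite-state controller plus an unbounded
# tape (stored sparsely in a dict, with '_' as the blank symbol). The
# Turing-Church thesis says any such machine can emulate any other,
# given a suitable rule table.

def run_turing_machine(rules, tape, state="start", steps=100):
    """rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Hypothetical example rule table: flip every bit until the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "1011"))  # -> 0100
```

Universality enters because a rule table can itself be written onto the tape and interpreted by another, fixed rule table, which is exactly the sense in which "one computer can do anything another can do."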

When John von Neumann and others jump-started the first electronic computers in the 1950s, they immediately began extending the laws of computation away from math proofs and into the natural world. They tentatively applied the laws of loops and cybernetics to ecology, culture, families, weather, and biological systems. Evolution and learning, they declared, were types of computation. Nature computed.

If nature computed, why not the entire universe? The first to put down on paper the outrageous idea of a universe-wide computer was science fiction writer Isaac Asimov. In his 1956 short story “The Last Question,” humans create a computer smart enough to bootstrap new computers smarter than itself. These analytical engines recursively grow super smarter and super bigger until they act as a single giant computer filling the universe. At each stage of development, humans ask the mighty machine if it knows how to reverse entropy. Each time it answers: “Insufficient data for a meaningful reply.” The story ends when human minds merge into the ultimate computer mind, which takes over the entire mass and energy of the universe. Then the universal computer figures out how to reverse entropy and create a universe.

Such a wacky idea was primed to be spoofed, and that’s what Douglas Adams did when he wrote The Hitchhiker’s Guide to the Galaxy. In Adams’ story the earth is a computer, and to the world’s last question it gives the answer: 42.

Few ideas are so preposterous that no one at all takes them seriously, and this idea that God, or at least the universe, might be the ultimate large-scale computer is actually less preposterous than most. The first scientist to consider it, minus the whimsy or irony, was Konrad Zuse, a little-known German who conceived of programmable digital computers 10 years before von Neumann and friends. In 1967, Zuse outlined his idea that the universe ran on a grid of cellular automata, or CA. Simultaneously, Ed Fredkin was considering the same idea. Self-educated, opinionated, and independently wealthy, Fredkin hung around early computer scientists exploring CAs. In the 1960s, he began to wonder if he could use computation as the basis for an understanding of physics.

Fredkin didn’t make much headway until 1970, when mathematician John Conway unveiled the Game of Life, a particularly robust version of cellular automata. The Game of Life, as its name suggests, was a simple computational model that mimicked the growth and evolution of living things. Fredkin began to play with other CAs to see if they could mimic physics. You needed very large ones, but they seemed to scale up nicely, so he was soon fantasizing huge (really huge) CAs that would extend to include everything. Maybe the universe itself was nothing but a great CA.
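The Game of Life that caught Fredkin's attention fits in a dozen lines, which is part of its appeal: each cell's fate is a pure yes/no census of its eight neighbors. This is a minimal sketch using a set of live coordinates.

```python
# Conway's Game of Life: a live cell survives with 2 or 3 live
# neighbors; a dead cell is born with exactly 3. We tally, for every
# position adjacent to a live cell, how many live neighbors it has.
from collections import Counter

def life_step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker) == {(1, 0), (1, 1), (1, 2)})  # -> True
print(life_step(life_step(blinker)) == blinker)        # -> True
```

Despite rules this small, Life patterns can build logic gates and full computers, which is why it made "the universe as a great CA" feel plausible rather than merely poetic.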

The more Fredkin investigated the metaphor, the more real it looked to him. By the mid-’80s, he was saying things like, “I’ve come to the conclusion that the most concrete thing in the world is information.”

Many of his colleagues felt that if Fredkin had left his observations at the level of metaphor (“the universe behaves as if it was a computer”) he would have been more famous. As it is, Fredkin is not as well known as his colleague Marvin Minsky, who shares some of his views. Fredkin insisted, flouting moderation, that the universe is a large field of cellular automata, not merely like one, and that everything we see and feel is information.

Many others besides Fredkin recognized the beauty of CAs as a model for investigating the real world. One of the early explorers was the prodigy Stephen Wolfram. Wolfram took the lead in systematically investigating possible CA structures in the early 1980s. By programmatically tweaking the rules in tens of thousands of alterations, then running them out and visually inspecting them, he acquired a sense of what was possible. He was able to generate patterns identical to those seen in seashells, animal skins, leaves, and sea creatures. His simple rules could generate a wildly complicated beauty, just as life could. Wolfram was working from the same inspiration that Fredkin did: The universe seems to behave like a vast cellular automaton.
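Wolfram's survey covered the one-dimensional "elementary" CAs, where a rule is just an 8-bit number: each bit gives the next state for one of the eight possible 3-cell neighborhoods. The sketch below runs Rule 30, one of the rules Wolfram matched to the pigment patterns on cone shells (a minimal illustration, not Wolfram's own code).

```python
# Elementary cellular automaton in Wolfram's rule-numbering scheme:
# the neighborhood (left, center, right) is read as a 3-bit index into
# the rule number, and that bit is the cell's next state.

def ca_step(cells, rule=30):
    """One generation of a 1-D binary CA; the row grows by a zero pad."""
    cells = [0, 0] + cells + [0, 0]
    return [
        (rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1])) & 1
        for i in range(1, len(cells) - 1)
    ]

# Grow Rule 30 from a single live cell and print a few generations.
row = [1]
for _ in range(4):
    print("".join(".#"[c] for c in row).center(13))
    row = ca_step(row)
```

Sweeping `rule` from 0 to 255 and eyeballing the output is, in miniature, the "tweak the rules, run them out, inspect them" survey the paragraph describes.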

Even the infinitesimally small and nutty realm of the quantum can’t escape this sort of binary logic. We describe a quantum-level particle’s existence as a continuous field of probabilities, which seems to blur the sharp distinction of is/isn’t. Yet this uncertainty resolves as soon as information makes a difference (as in, as soon as it’s measured). At that moment, all other possibilities collapse to leave only the single yes/no state. Indeed, the very term “quantum” suggests an indefinite realm constantly resolving into discrete increments, precise yes/no states.

For years, Wolfram explored the notion of universal computation in earnest (and in secret) while he built a business selling his popular software Mathematica. So convinced was he of the benefits of looking at the world as a gigantic Turing machine that he penned a 1,200-page magnum opus he modestly calls A New Kind of Science. Self-published in 2002, the book reinterprets nearly every field of science in terms of computation: “All processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computation.” (See “The Man Who Cracked the Code to Everything,” Wired 10.6.)

Wolfram’s key advance, however, is more subtly brilliant, and depends on the old Turing-Church hypothesis: All finite-state machines are equivalent. One computer can do anything another can do. This is why your Mac can, with proper software, pretend to be a PC or, with sufficient memory, a slow supercomputer. Wolfram demonstrates that the outputs of this universal computation are also computationally equivalent. Your brain and the physics of a cup being filled with water are equivalent, he says: for your mind to compute a thought and the universe to compute water particles falling, both require the same universal process.

If, as Fredkin and Wolfram suggest, all movement, all actions, all nouns, all functions, all states, all we see, hear, measure, and feel are various elaborate cathedrals built out of this single ubiquitous process, then the foundations of our knowledge are in for a galactic-scale revisioning in the coming decades. Already, the dream of devising a computational explanation for gravity, the speed of light, muons, Higgs bosons, momentum, and molecules has become the holy grail of theoretical physics. It would be a unified explanation of physics (digital physics), relativity (digital relativity), evolution (digital evolution and life), quantum mechanics, and computation itself, and at the bottom of it all would be squirming piles of the universal elements: loops of yes/no bits. Ed Fredkin has been busy honing his idea of digital physics and is completing a book called Digital Mechanics. Others, including Oxford theoretical physicist David Deutsch, are working on the same problem. Deutsch wants to go beyond physics and weave together four golden threads (epistemology, physics, evolutionary theory, and quantum computing) to produce what is unashamedly referred to by researchers as the Theory of Everything. Based on the primitives of quantum computation, it would swallow all other theories.

Any large computer these days can emulate a computer of some other design. You have Dell computers running Amigas. The Amigas could, if anyone wanted them to, run Commodores. There is no end to how many nested worlds can be built. So imagine what a universal computer might do. If you had a universally equivalent engine, you could pop it in anywhere, including inside the inside of something else. And if you had a universe-sized computer, it could run all kinds of recursive worlds; it could, for instance, simulate an entire galaxy.

If smaller worlds have smaller worlds running within them, however, there has to be a platform that runs the first among them. If the universe is a computer, where is it running? Fredkin says that all this work happens on the “Other.” The Other, he says, could be another universe, another dimension, another something. It’s just not in this universe, and so he doesn’t care too much about it. In other words, he punts. David Deutsch has a different theory. “The universality of computation is the most profound thing in the universe,” he says. Since computation is absolutely independent of the “hardware” it runs on, studying it can tell us nothing about the nature or existence of that platform. Deutsch concludes it does not exist: “The universe is not a program running somewhere else. It is a universal computer, and there is nothing outside of it.”

Strangely, nearly every mapper of this new digitalism foresees human-made computers taking over the natural universal computer. This is in part because they see nothing to stop the rapid expansion of computation, and in part because, well, why not? But if the entire universe is computing, why build our own expensive machines, especially when chip fabs cost several billion dollars to construct? Tommaso Toffoli, a quantum computer researcher, puts it best: “In a sense, nature has been continually computing the ‘next state’ of the universe for billions of years; all we have to do (and, actually, all we can do) is ‘hitch a ride’ on this huge, ongoing Great Computation.”

In a June 2002 article published in the Physical Review Letters, MIT professor Seth Lloyd posed this question: If the universe was a computer, how powerful would it be? By analyzing the computing potential of quantum particles, he calculated the upper limit of how much computing power the entire universe (as we know it) has contained since the beginning of time. It’s a large number: 10^120 logical operations. There are two interpretations of this number. One is that it represents the performance “specs” of the ultimate computer. The other is that it’s the amount required to simulate the universe on a quantum computer. Both statements illustrate the tautological nature of a digital universe: Every computer is the computer.

Continuing in this vein, Lloyd estimated the total amount of computation that has been accomplished by all human-made computers that have ever run. He came up with 10^31 ops. (Because of the fantastic doubling of Moore’s law, over half of this total was produced in the past two years!) He then tallied up the total energy-matter available in the known universe and divided that by the total energy-matter of human computers expanding at the rate of Moore’s law. “We need 300 Moore’s law doublings, or 600 years at one doubling every two years,” he figures, “before all the available energy in the universe is taken up in computing. Of course, if one takes the perspective that the universe is already essentially performing a computation, then we don’t have to wait at all. In this case, we may just have to wait for 600 years until the universe is running Windows or Linux.”
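Lloyd's arithmetic is easy to check: divide the universe's ~10^120-op budget by humanity's ~10^31 ops to date, and count how many doublings close the gap at one doubling every two years. The figures come out near the 300 doublings and 600 years he quotes.

```python
# Back-of-envelope check of Lloyd's estimate: how many Moore's-law
# doublings separate all human computation so far (~10^31 ops) from
# the universe's total budget (~10^120 ops)?
import math

universe_ops = 10**120   # Lloyd's upper bound on ops since the Big Bang
human_ops = 10**31       # Lloyd's tally for all human-made computers
doublings = math.log2(universe_ops / human_ops)
years = doublings * 2    # one doubling every two years

print(round(doublings), round(years))  # -> 296 591
```

The exact result, about 296 doublings or 591 years, rounds to Lloyd's "300 doublings, or 600 years."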

The relative nearness of 600 years says more about exponential increases than it does about computers. Neither Lloyd nor any other scientist mentioned here realistically expects a second universal computer in 600 years. But what Lloyd’s calculation proves is that over the long term, there is nothing theoretical to stop the expansion of computers. “In the end, the whole of space and its contents will be the computer. The universe will in the end consist, literally, of intelligent thought processes,” David Deutsch proclaims in Fabric of Reality. These assertions echo those of the physicist Freeman Dyson, who also sees minds amplified by computers expanding into the cosmos “infinite in all directions.”

Yet while there is no theoretical hitch to an ever-expanding computer matrix that may in the end resemble Asimov’s universal machine, no one wants to see themselves as someone else’s program running on someone else’s computer. Put that way, life seems a bit secondhand.

Yet the notion that our existence is derived, like a string of bits, is an old and familiar one. Central to the evolution of Western civilization from its early Hellenistic roots has been the notion of logic, abstraction, and disembodied information. The saintly Christian guru John writes from Greece in the first century: “In the beginning was the Word, and the Word was with God, and the Word was God.” Charles Babbage, credited with constructing the first computer in 1832, saw the world as one gigantic instantiation of a calculating machine, hammered out of brass by God. He argued that in this heavenly computer universe, miracles were accomplished by divinely altering the rules of computation. Even miracles were logical bits, manipulated by God.

There’s still confusion. Is God the Word itself, the Ultimate Software and Source Code, or is God the Ultimate Programmer? Or is God the necessary Other, the off-universe platform where this universe is computed?

But each of these three possibilities has at its root the mystical doctrine of universal computation. Somehow, according to digitalism, we are linked to one another, all beings alive and inert, because we share, as John Wheeler said, “at the bottom, at a very deep bottom, in most instances, an immaterial source.” This commonality, spoken of by mystics of many beliefs in different terms, also has a scientific name: computation. Bits, minute logical atoms, spiritual in form, amass into quantum quarks and gravity waves, raw thoughts and rapid motions.

The computation of these bits is a precise, definable, yet invisible process that is immaterial yet produces matter.

“Computation is a process that is perhaps the process,” says Danny Hillis, whose new book, The Pattern on the Stone, explains the formidable nature of computation. “It has an almost mystical character because it seems to have some deep relationship to the underlying order of the universe. Exactly what that relationship is, we cannot say. At least for now.”

Probably the trippiest science book ever written is The Physics of Immortality, by Frank Tipler. If this book were labeled standard science fiction, no one would notice, but Tipler is a reputable physicist and Tulane University professor who writes papers for the International Journal of Theoretical Physics. In Immortality, he uses current understandings of cosmology and computation to declare that all living beings will be bodily resurrected after the universe dies. His argument runs roughly as follows: As the universe collapses upon itself in the last minutes of time, the final space-time singularity creates (just once) infinite energy and computing capacity. In other words, as the giant universal computer keeps shrinking in size, its power increases to the point at which it can simulate precisely the entire historical universe, past and present and possible. He calls this state the Omega Point. It is a computational space that can resurrect “from the dead” all the minds and bodies that have ever lived. The weird thing is that Tipler was an atheist when he developed this theory and discounted as mere “coincidence” the parallels between his ideas and the Christian doctrine of Heavenly Resurrection. Since then, he says, science has convinced him that the two may be identical.

While not everyone goes along with Tipler’s eschatological speculations, theorists like Deutsch endorse his physics. An Omega Computer is possible and probably likely, they say.

I asked Tipler which side of the Fredkin gap he is on. Does he go along with the weak version of the ultimate computer, the metaphorical one, that says the universe only seems like a computer? Or does he embrace Fredkin’s strong version, that the universe is a 12-billion-year-old computer and we are the killer app? “I regard the two statements as equivalent,” he answered. “If the universe in all ways acts as if it was a computer, then what meaning could there be in saying that it is not a computer?”

Only hubris.

Continued here:

God Is the Machine | WIRED