Tag Archives: japan

Oceania Map – Maps of World

Posted: July 23, 2016 at 4:22 am

About Map: Though there is much debate over the definition of Oceania, the Pacific Islands, including Australia and New Zealand, are consistently included. This map of Australia and Oceania shows the many islands that dot the Pacific Ocean, such as Vanuatu, Fiji, Tuvalu, Samoa, the Marshall Islands, Nauru, and the Solomon Islands.

Political Map of Countries in Oceania

Top Viewed Australia Continent Map

Cities in Oceania’s Countries Map

Oceania also includes the thousands upon thousands of coral reef islands off the coasts of these countries. Some definitions of Oceania include all the nations and territories in the Pacific Ocean between the Americas and Asia, which would make Taiwan and Japan part of Oceania rather than Asia. Oceania is not just a geographic region and ecozone; it is also a geopolitical region, defined by the United Nations to include Australia, New Zealand, and other island nations that are not generally considered part of the Asian continent.

Last Updated On: January 09, 2013


TDV Offshore

Posted: at 4:20 am

I'm a long-term US expat who has spent the majority of my adult life living, working, and investing in numerous countries around the world, on every inhabited continent. Like so many of our clients, I long ago became aware of an alarming trend: rapidly declining levels of personal freedom, and of our basic human right to personal privacy, in the declining industrialized world I have happily left behind.

This quest for personal freedom, and for our basic human right to personal privacy, is my life's passion. I work to help others achieve those same goals, and to help their children avoid a lifetime of declining standards of living. With an apparently irreversible downward economic spiral in motion in North America and the European Union, not to mention Japan, the only way to avoid the negative effects of this degradation is through wide diversification: across asset classes, financial institutions, and jurisdictions.

I help our clients accomplish this by sharing the value of my personal experience on a number of fronts, gained through years of work in this arena. Recently, the changes have been rapid, and mostly detrimental to our freedom. I work hard to stay ahead of this curve, and to help our clients do so as well.

What we do at TDV Offshore

Formation of offshore legal umbrellas in the form of an LLC or IBC

This is typically the first step in your diversification plan. We know that it takes a great deal of contemplation and soul-searching to take that first step, but history shows that once it is taken, our clients realize the benefits and go on to increase their offshore foothold. The reasons to establish a company in a privacy-respecting jurisdiction are many. To name a few: 1) no recognition of foreign judgments; 2) names of owners are not public record; 3) creditors receive only charging-order status in the event that a local judgment has been obtained.

Bank, Brokerage and Precious Metals purchasing and storage

Now that you have a properly established structure to hold your assets and act as a legal umbrella for them, you need to move those assets into a similarly protective environment. We will therefore also help you establish a bank or brokerage account in the name of the newly established company, in one of these same jurisdictions, where sharing the name of the Ultimate Beneficial Owner (you) is likewise restricted by law. The account will be held in your company's name; that company is registered in a different jurisdiction, and the disclosure of the company's UBOs in that separate jurisdiction is also protected by law.

This will protect you from such threats as:

The fee to establish both an LLC, and an account in the name of the LLC, is $2,600 ($2,400 for TDV Premium subscribers). This includes apostille of company documents, delivery of the original documents to you, and our assistance in every aspect of both processes.

I remain available to you in the future to answer any questions on which my experience may be of value. Our network of clients is a group of like-minded people living in every corner of the globe. I'm continuously connecting those who have joined our growing club, to achieve a bit of synergy in our experiences across a growing number of topics.

Self-Directed IRA

For our US-citizen clients only. SD IRAs are self-directed, tax-deferred retirement accounts. In fact, all IRAs are self-directed, but the IRS allows each administrator to decide what types of investments to offer. As a result, banks and brokerage houses offer only the products they benefit from, like US stocks, CDs, and mutual funds on which they can earn commissions from you. Under the plan we offer, no investment options are imposed, and there is therefore no bias toward, or restriction to, USD-based investments. Outside of a very few prohibited transactions, you can legally diversify your tax-deferred IRA assets out of the USD and into foreign real estate, precious metals, foreign stocks, etc., while maintaining qualified status: you stay tax-deferred and protected from the imminent conversion to worthless government bonds.

We will assist you from start to finish to:

The fee to perform all of these steps is $3,200 ($2,900 for TDV Premium subscribers). You can be certain that your Senators and Representatives have already made this move. What are you waiting for?

Trusts and Foundations

Trusts and foundations are the best of all asset-protection umbrellas for your life's savings. With a trust or foundation, you're not only protected by the same privacy and anonymity laws as with companies, but you gain the added benefit of transferring ownership of assets to the legal vehicle while maintaining 100% control of those assets.

As I prefer the foundation, let me use it as an example: although the benefits of the trust and the foundation are identical, the foundation has a bit more flexibility and lower annual fees. Very briefly, the Council Members of the foundation have a fiduciary responsibility to protect the assets for the future ownership of your named beneficiaries, but no control over those assets once we have correctly structured the ownership under an LLC with the foundation as sole member; you would also be sole signatory on all accounts.

If you would like to further discuss your specific situation, and which structure and jurisdiction(s) would best suit your needs, please complete the following contact form. I will then contact you by email to arrange an appointment for a free consultation via either Skype or telephone.

For those who become clients, we can also discuss some ways to obtain citizenship in a new country, and a more reliable travel document. There are also some ways to disengage from the US system legally if you're forced by circumstances to remain there.


Articles about Germ Warfare – latimes

Posted: July 14, 2016 at 4:37 pm

NEWS

August 6, 1997 | From Reuters

Iraq could reassemble its germ warfare program within six months with a still-intact scientific team working with freeze-dried organisms, a former U.N. investigator said in a report published Tuesday. “The work force of more than 200 persons who staffed Iraq’s biological warfare program is intact,” Raymond Zilinskas said. “Iraq’s civilian biotechnological infrastructure, comprising more than 80 research, development and production facilities, is whole and well equipped,” he added.

NEWS

December 8, 2001 | From Times Wire Services

An international conference on germ warfare disbanded in chaos and anger Friday after the United States sought to cut off discussions about enforcing the 1972 Biological Weapons Convention. The treaty, ratified by the U.S. and 143 other governments, bans the development, stockpiling and production of germ warfare agents–but it has no enforcement mechanism. The purpose of the conference was to discuss the progress of a six-year effort to negotiate measures to enforce compliance.

NEWS

May 4, 1988 | JOHN M. BRODER, Times Staff Writer

Ten nations, many of them hostile to the United States, currently are producing biological weapons, making it crucial that the Army pursue its controversial plan to build a germ warfare facility in Utah, a senior Defense Department official told Congress Tuesday.

CALIFORNIA | LOCAL

January 22, 2008 | DANA PARSONS

They say war is hell, but getting sick is no picnic either. Here’s my briefing: Two weeks ago I was bivouacked on the sofa around 2200 hours, eating Jell-O pudding, when I detected the first sign of hostile troop movement. Unfortunately, the invaders’ advance party was small and stealthy, and my sentries paid little heed. I finished the pudding, watched more TV and went to bed around midnight. As I slept, the enemy massed. By daybreak, I was surrounded.

CALIFORNIA | LOCAL

September 6, 2002 | REBECCA TROUNSON, TIMES STAFF WRITER

Sheldon H. Harris, a Cal State Northridge historian whose groundbreaking work helped establish that Japan conducted biological warfare experiments on Chinese civilians and military prisoners during World War II, has died. He was 74. Harris died of a blood infection Aug. 31 at UCLA Medical Center, but lived long enough to experience a moment of particular gratification, his son, David, said.

CALIFORNIA | LOCAL

October 10, 2001 | ARIANNA HUFFINGTON, Arianna Huffington is a syndicated columnist. E-mail: arianna@ariannaonline.com

When it comes to matters of the heart, we’ve been sold the premise that men are from Mars, women are from Venus. Maybe, maybe not. But when it comes to thinking the unthinkable, the sexes are most definitely from different planets. At a dinner party in Los Angeles last week, six men and six women sat around a beautifully laid-out table. While the setting evoked an escapist fantasy, the conversation dwelt on the inescapable realities of the moment.


Ultima Online: High Seas – UOGuide, the Ultima Online …

Posted: July 3, 2016 at 12:21 pm

Overview

Ultima Online: High Seas is the most recent expansion (aka "booster") and launched on October 12, 2010 with Publish 68. It was first announced during a UO Town Hall Meeting held on August 28, 2010 at EA Mythic/BioWare's division headquarters in Fairfax, Virginia. It was introduced under the title Adventures on the High Seas, later trimmed to just "High Seas". It was formally billed as a "booster" as opposed to a full-fledged expansion and debuted at a retail price of $14.99 USD.

On September 28, 2010 the release date was announced as October 12, 2010, and the High Seas test shard opened for testers the next day, remaining open until October 7, 2010.

During the first public demonstration, an NPC orc ship was attacked, but the orc crew killed the character of lead engineer Derek Brinkmann.

From the UO Japan website.

EA Mythic Lead Engineer Derek Brinkmann demonstrates the new ships (equipped with cannons that actually inflict damage) and goes off to attack some orcs at sea. A major feature worth noting is the navigation/command system: no more commands to the tillerman, and ships can be driven with the mouse!

Though the video is of marginal quality, due to the video stream provided, it does offer a substantial glimpse of what awaits on the Sosarian high seas.


History of biological warfare – Wikipedia, the free …

Posted: June 28, 2016 at 2:57 am

Various types of biological warfare (BW) have been practiced repeatedly throughout history. This has included the use of biological agents (microbes and plants) as well as the biotoxins, including venoms, derived from them.

Before the 20th century, the use of biological agents took three major forms:

In the 20th century, sophisticated bacteriological and virological techniques allowed the production of significant stockpiles of weaponized bio-agents:

The earliest documented incident of the intention to use biological weapons is recorded in Hittite texts of 1500–1200 BC, in which victims of tularemia were driven into enemy lands, causing an epidemic.[1] Although the Assyrians knew of ergot, a parasitic fungus of rye which produces ergotism when ingested, there is no evidence that they poisoned enemy wells with the fungus, as has been claimed.

According to Homer's epic poems about the legendary Trojan War, the Iliad and the Odyssey, spears and arrows were tipped with poison. During the First Sacred War in Greece, in about 590 BC, Athens and the Amphictionic League poisoned the water supply of the besieged town of Kirrha (near Delphi) with the toxic plant hellebore.[2] During the 4th century BC, Scythian archers tipped their arrows with snake venom, human blood, and animal feces to cause wounds to become infected.

In a naval battle against King Eumenes of Pergamon in 184 BC, Hannibal of Carthage had clay pots filled with venomous snakes and instructed his sailors to throw them onto the decks of enemy ships.[3] The Roman commander Manius Aquillius poisoned the wells of besieged enemy cities in about 130 BC. In about AD 198, the Parthian city of Hatra (near Mosul, Iraq) repulsed the Roman army led by Septimius Severus by hurling clay pots filled with live scorpions at them.[4]

There are numerous other instances of the use of plant toxins, venoms, and other poisonous substances to create biological weapons in antiquity.[5]

The Mongol Empire established commercial and political connections between the eastern and western areas of the world through the most mobile army ever seen. Its armies, composed of the most rapidly moving travelers who had ever crossed the steppes of East Asia (where bubonic plague was, and remains, endemic among small rodents), managed to keep the chain of infection unbroken until they reached, and infected, peoples and rodents who had never encountered it. The ensuing Black Death may have killed up to 25 million in China and roughly a third of the population of Europe over the following decades, changing the course of Asian and European history.

During the Middle Ages, victims of the bubonic plague were used in biological attacks, often by flinging fomites such as infected corpses and excrement over castle walls with catapults. In 1346, during the siege of Kafa (now Feodossia, Crimea), the attacking Tartar forces, subjugated by the Mongol Empire, used as weapons the bodies of warriors of the Golden Horde who had died of plague. An outbreak of plague followed, the defending forces retreated, and the Mongols conquered the city. It has been speculated that this operation may have been responsible for the advent of the Black Death in Europe. At the time, the attackers thought the stench alone was enough to kill the defenders, though it was the disease that was deadly.[6][7]

At the siege of Thun-l'Évêque in 1340, during the Hundred Years' War, the attackers catapulted decomposing animals into the besieged area.[8]

In 1422, during the siege of Karlstein Castle in Bohemia, Hussite attackers used catapults to throw dead (but not plague-infected) bodies and 2000 carriage-loads of dung over the walls.[9]

The last known incident of using plague corpses for biological warfare occurred in 1710, when Russian forces attacked the Swedes by flinging plague-infected corpses over the city walls of Reval (Tallinn).[10] Later, during the 1785 siege of La Calle, Tunisian forces flung diseased clothing into the city.[9]

English Longbowmen usually did not draw their arrows from a quiver; rather, they stuck their arrows into the ground in front of them. This allowed them to nock the arrows faster and the dirt and soil was likely to stick to the arrowheads, thus making the wounds much more likely to become infected.

The Native American population was devastated after contact with the Old World due to the introduction of several fatal infectious diseases, notably smallpox.[11] These diseases can be traced to Eurasia where people had long lived with them and developed some immunological ability to survive their presence. Without similarly long ancestral exposure, indigenous Americans were immunologically naive and therefore extremely vulnerable.[12][13]

There are two documented instances of biological warfare by the British against North American Indians during Pontiac's Rebellion (1763–66). In the first, during a parley at Fort Pitt on June 24, 1763, Captain Simeon Ecuyer gave representatives of the besieging Delawares two blankets and a handkerchief, enclosed in small metal boxes, that had been exposed to smallpox, hoping to spread the disease to the Natives in order to end the siege. The British soldiers lied to the Natives, telling them the blankets contained special powers.[14] William Trent, the militia commander, left records that clearly indicated that the purpose of giving the blankets was "to Convey the Smallpox to the Indians."[15]

British commander Lord Jeffrey Amherst and Swiss-British officer Colonel Henry Bouquet discussed the topic separately in the course of the same conflict; correspondence survives referencing the idea of giving smallpox-infected blankets to enemy Indians, comprising four letters of June 29, July 13, July 16, and July 26, 1763. Excerpts: Amherst wrote on July 16, 1763, "P.S. You will Do well to try to Inocculate the Indians by means of Blankets, as well as to try Every other method that can serve to Extirpate this Execrable Race. I should be very glad your Scheme for Hunting them Down by Dogs could take Effect,…" Bouquet replied on July 26, 1763, "I received yesterday your Excellency's letters of 16th with their Inclosures. The signal for Indian Messengers, and all your directions will be observed." Smallpox is highly infectious and does not require contaminated blankets to spread uncontrollably; together with measles, influenza, chicken pox, and other diseases, it had been doing so since the arrival of Europeans and their animals. Trade and combat also provided ample opportunity for transmission of the disease. See also: Smallpox during Pontiac's Rebellion. It is unclear whether the blanket attempt succeeded. An estimated 400,000–500,000 Native Americans died from smallpox during and after the war.[13][16][17]

Australian Aborigines (Kooris) have always maintained that the British deliberately spread smallpox in 1789,[18] but this possibility was not raised by historians until the 1980s, when Dr Noel Butlin suggested that "there are some possibilities that … disease could have been used deliberately as an exterminating agent."[19]

In 1997, David Day claimed that "there remains considerable circumstantial evidence to suggest that officers other than Phillip, or perhaps convicts or soldiers deliberately spread smallpox among aborigines",[20] and in 2000 Dr John Lambert argued that "strong circumstantial evidence suggests the smallpox epidemic which ravaged Aborigines in 1789 may have resulted from deliberate infection."[21]

Judy Campbell argued in 2002 that it is highly improbable that the First Fleet was the source of the epidemic, as "smallpox had not occurred in any members of the First Fleet"; the only possible source of infection from the Fleet would have been exposure to variolous matter imported for the purposes of inoculation against smallpox. Campbell argued that, while there has been considerable speculation about a hypothetical exposure to the First Fleet's variolous matter, there was no evidence that Aboriginal people were ever actually exposed to it. She pointed to regular contact between fishing fleets from the Indonesian archipelago, where smallpox was endemic, and Aboriginal people in Australia's north as a more likely source for the introduction of smallpox. She noted that while these fishermen are generally referred to as Macassans, after the port of Macassar on the island of Sulawesi from which most of them originated, some travelled from islands as distant as New Guinea. She also noted that there is little disagreement that the smallpox epidemic of the 1860s was contracted from Macassan fishermen and spread through the Aboriginal population by Aborigines fleeing outbreaks, as well as via their traditional social, kinship, and trading networks, and she argued that the 1789–90 epidemic followed the same pattern.[22]

These claims are controversial, as it is argued that any smallpox virus brought to New South Wales would probably have been sterilised by the heat and humidity encountered during the First Fleet's voyage from England, rendering it incapable of use in biological warfare. However, in 2007 Christopher Warren demonstrated that the British smallpox may still have been viable.[23] Since then, some scholars have argued that the British committed biological warfare in 1789 near their new convict settlement at Port Jackson.[24][25]

In 2013 Warren reviewed the issue and argued that smallpox did not spread across Australia before 1824 and showed that there was no smallpox at Macassar that could have caused the outbreak at Sydney. Warren, however, did not address the issue of persons who joined the Macassan fleet from other islands and from parts of Sulawesi other than the port of Macassar. Warren concluded that the British were “the most likely candidates to have released smallpox” near Sydney Cove in 1789. Warren proposed that the British had no choice as they were confronted with dire circumstances when, among other factors, they ran out of ammunition for their muskets. Warren also uses native oral tradition and the archaeology of native graves to analyse the cause and effect of the spread of smallpox in 1789.[26]

Prior to the publication of Warren's article (2013), John Carmody argued that the epidemic was an outbreak of chickenpox which took a drastic toll on an Aboriginal population without immunological resistance. With regard to smallpox, Dr Carmody said: "There is absolutely no evidence to support any of the theories and some of them are fanciful and far-fetched."[27][28] Warren addressed the chickenpox theory at endnote 3 of "Smallpox at Sydney Cove – Who, When, Why?".[29]

By the turn of the 20th century, advances in microbiology had made thinking about "germ warfare" part of the zeitgeist. Jack London, in his short story "Yah! Yah! Yah!" (1909), described a punitive European expedition to a South Pacific island deliberately exposing the Polynesian population to measles, of which many of them died. London wrote another science fiction tale the following year, "The Unparalleled Invasion" (1910), in which the Western nations wipe out all of China with a biological attack.

During the First World War (1914–1918), the Empire of Germany made some early attempts at biological warfare. Those attempts were made by a special sabotage group headed by Rudolf Nadolny. Using diplomatic pouches and couriers, the German General Staff supplied small teams of saboteurs in the Russian Grand Duchy of Finland and in the then-neutral countries of Romania, the United States, and Argentina.[citation needed] In Finland, saboteurs mounted on reindeer placed ampoules of anthrax in stables of Russian horses in 1916.[30] Anthrax was also supplied to the German military attaché in Bucharest, as was glanders, which was employed against livestock destined for Allied service. German intelligence officer and US citizen Dr. Anton Casimir Dilger established a secret lab in the basement of his sister's home in Chevy Chase, Maryland, that produced glanders, which was used to infect livestock in ports and inland collection points including, at least, Newport News, Norfolk, Baltimore, and New York, and probably St. Louis and Covington, Kentucky. In Argentina, German agents also employed glanders in the port of Buenos Aires and tried to ruin wheat harvests with a destructive fungus.

The Geneva Protocol of 1925 prohibited the use of chemical weapons and biological weapons, but said nothing about experimentation, production, storage, or transfer; later treaties did cover these aspects. Twentieth-century advances in microbiology enabled the first pure-culture biological agents to be developed by World War II.

In the interwar period, little biological-warfare research was done in either Britain or the United States at first. In the United Kingdom the preoccupation was mainly with withstanding the anticipated conventional bombing attacks that would be unleashed in the event of war with Germany. As tensions increased, Sir Frederick Banting began lobbying the British government to establish a program for the research and development of biological weapons, to effectively deter the Germans from launching a biological attack. Banting proposed a number of innovative schemes for the dissemination of pathogens, including aerial-spray attacks and germs distributed through the mail system.

With the onset of hostilities, the Ministry of Supply finally established a biological weapons programme at Porton Down, headed by the microbiologist Paul Fildes. The research was championed by Winston Churchill, and soon tularemia, anthrax, brucellosis, and botulinum toxins had been effectively weaponized. Notably, Gruinard Island in Scotland was contaminated with anthrax during a series of extensive tests and remained so for the next 48 years. Although Britain never offensively used the biological weapons it developed, its program was the first to successfully weaponize a variety of deadly pathogens and bring them into industrial production.[31]

When the United States entered the war, mounting British pressure for a similar research program and for an Allied pooling of resources led to the creation of a large industrial complex at Fort Detrick, Maryland, in 1942 under the direction of George W. Merck.[32] The biological and chemical weapons developed during that period were tested at the Dugway Proving Ground in Utah. Soon there were facilities for the mass production of anthrax spores, brucellosis, and botulinum toxins, although the war was over before these weapons could be of much operational use.[33]

However, the most notorious program of the period was run by the secret Imperial Japanese Army Unit 731 during the war, based at Pingfan in Manchuria and commanded by Lieutenant General Shirō Ishii. This unit did research on BW, conducted often-fatal human experiments on prisoners, and produced biological weapons for combat use.[34] Although the Japanese effort lacked the technological sophistication of the American or British programs, it far outstripped them in its widespread application and indiscriminate brutality. Biological weapons were used against both Chinese soldiers and civilians in several military campaigns. Three veterans of Unit 731 testified in a 1989 interview with the Asahi Shimbun that they contaminated the Horustein river with typhoid near the Soviet troops during the Battle of Khalkhin Gol.[35] In 1940, the Imperial Japanese Army Air Force bombed Ningbo with ceramic bombs full of fleas carrying the bubonic plague.[36] A film showing this operation was seen by the imperial princes Tsuneyoshi Takeda and Takahito Mikasa during a screening arranged by its mastermind, Shirō Ishii.[37] During the Khabarovsk war crime trials, the accused, such as Major General Kiyoshi Kawashima, testified that as early as 1941 some 40 members of Unit 731 air-dropped plague-contaminated fleas on Changde. These operations caused epidemic plague outbreaks.[38]

Many of these operations were ineffective due to inefficient delivery systems, using disease-bearing insects rather than dispersing the agent as a bioaerosol cloud.[34] Nevertheless, some modern Chinese historians estimate that 400,000 Chinese died as a direct result of Japanese field testing and operational use of biological weapons.[39]

Ban Shigeo, a technician at the Japanese Army’s 9th Technical Research Institute, left an account of the activities at the Institute which was published in “The Truth About the Army Nororito Institute”.[40] Ban included an account of his trip to Nanking in 1941 to participate in the testing of poisons on Chinese prisoners.[40] His testimony tied the Noborito Institute to the infamous Unit 731, which participated in biomedical research.[40]

During the final months of World War II, Japan planned to use plague as a biological weapon against U.S. civilians in San Diego, California, in Operation Cherry Blossoms at Night. The hope was that it would kill tens of thousands of U.S. civilians, thereby dissuading America from attacking Japan. The plan was set to launch on the night of September 22, 1945, but it never came to fruition due to Japan's surrender on August 15, 1945.[41][42][43][44]

When the war ended, the US Army quietly enlisted certain members of Noborito in its efforts against the communist camp in the early years of the Cold War.[40] The head of Unit 731, Shiro Ishii, was granted immunity from war crimes prosecution in exchange for providing information to the United States on the Unit’s activities.[45] Allegations were made that a “chemical section” of a US clandestine unit hidden within Yokosuka naval base was operational during the Korean War, and then worked on unspecified projects inside the United States from 1955 to 1959, before returning to Japan to enter the private sector.[40][46]

Some of the Unit 731 personnel were imprisoned by the Soviets[citation needed], and may have been a potential source of information on Japanese weaponization.

Considerable research into BW was undertaken throughout the Cold War era by the US, UK and USSR, and probably other major nations as well, although it is generally believed that such weapons were never used.

In Britain, the 1950s saw the weaponization of plague, brucellosis, tularemia and later equine encephalomyelitis and vaccinia viruses. Trial tests at sea were carried out including Operation Cauldron off Stornoway in 1952. The programme was cancelled in 1956, when the British government unilaterally renounced the use of biological and chemical weapons.

The United States initiated its weaponization efforts with disease vectors in 1953, focused on plague (fleas), EEE (mosquitoes), and yellow fever (mosquitoes; OJ-AP).[citation needed] However, US medical scientists in occupied Japan undertook extensive research on insect vectors, with the assistance of former Unit 731 staff, as early as 1946.[45]

The United States Army Chemical Corps then initiated a crash program to weaponize anthrax (N) in the E61 1/2-lb hour-glass bomblet. Though the program was successful in meeting its development goals, the lack of validation on the infectivity of anthrax stalled standardization.[citation needed] The United States Air Force was also unsatisfied with the operational qualities of the M114/US bursting bomblet and labeled it an interim item until the Chemical Corps could deliver a superior weapon.[citation needed]

Around 1950 the Chemical Corps also initiated a program to weaponize tularemia (UL). Shortly after the E61/N failed to make standardization, tularemia was standardized in the 3.4-inch M143 bursting spherical bomblet. This was intended for delivery by the MGM-29 Sergeant missile warhead and could produce 50% infection over a 7-square-mile (18 km²) area.[47] Although tularemia is treatable with antibiotics, treatment does not shorten the course of the disease. US conscientious objectors were used as consenting test subjects for tularemia in a program known as Operation Whitecoat.[48] There were also many unpublicized tests carried out in public places with bio-agent simulants during the Cold War.[49]

In addition to the use of bursting bomblets for creating biological aerosols, the Chemical Corps started investigating aerosol-generating bomblets in the 1950s. The E99 was the first workable design, but was too complex to be manufactured. By the late 1950s the 4.5" E120 spraying spherical bomblet was developed; a B-47 bomber with a SUU-24/A dispenser could infect 50% or more of the population of a 16-square-mile (41 km²) area with tularemia with the E120.[50] The E120 was later superseded by dry-type agents.

Dry-type biologicals resemble talcum powder, and can be disseminated as aerosols using gas expulsion devices instead of a burster or complex sprayer.[citation needed] The Chemical Corps developed Flettner rotor bomblets and later triangular bomblets for wider coverage due to improved glide angles over Magnus-lift spherical bomblets.[51] Weapons of this type were in advanced development by the time the program ended.[51]

From January 1962, Rocky Mountain Arsenal grew, purified and biodemilitarized the plant pathogen wheat stem rust (Agent TX), Puccinia graminis var. tritici, for the Air Force biological anti-crop program. TX-treated grain was grown at the Arsenal from 1962 to 1968 in Sections 23-26. Unprocessed TX was also transported from Beale AFB for purification, storage, and disposal.[52] Trichothecene mycotoxins, which can be extracted from wheat stem rust and rice blast, can kill or incapacitate depending on the concentration used. The red mold disease of wheat and barley in Japan is prevalent in the region that faces the Pacific Ocean. Toxic trichothecenes, including nivalenol, deoxynivalenol, and monoacetylnivalenol (fusarenon-X) from Fusarium nivale, can be isolated from moldy grains. In the suburbs of Tokyo, an illness similar to red mold disease was described in an outbreak of foodborne disease resulting from the consumption of Fusarium-infected rice. Ingestion of moldy grains contaminated with trichothecenes has been associated with mycotoxicosis.[53]

Although there is no evidence that biological weapons were used by the United States, China and North Korea accused the US of large-scale field testing of BW against them during the Korean War (1950–1953). At the time of the Korean War the United States had weaponized only one agent, brucellosis ("Agent US"), which is caused by Brucella suis. The original weaponized form used the M114 bursting bomblet in M33 cluster bombs. While the specific form of the biological bomb was classified until some years after the Korean War, nothing in the various exhibits of biological weapons that Korea alleged were dropped on the country resembled an M114 bomblet. There were ceramic containers that bore some similarity to the Japanese weapons developed by Unit 731 and used against the Chinese in World War II.[34][54]

Cuba also accused the United States of spreading human and animal disease on their island nation.[55][56]

During the 1948 Israel War of Independence, International Red Cross reports raised suspicion that the Israeli Haganah militia had released Salmonella typhi bacteria into the water supply for the city of Acre, causing an outbreak of typhoid among the inhabitants. Egyptian troops later claimed to have captured disguised Haganah soldiers near wells in Gaza, whom they executed for allegedly attempting another attack. Israel denies these allegations.[57][58]

In mid-1969, the UK and the Warsaw Pact separately introduced proposals to the UN to ban biological weapons, which would lead to the signing of the Biological and Toxin Weapons Convention in 1972. United States President Richard Nixon signed an executive order in November 1969 which stopped production of biological weapons in the United States and allowed only scientific research into lethal biological agents and defensive measures such as immunization and biosafety. The biological munition stockpiles were destroyed, and approximately 2,200 researchers became redundant.[59]

Special munitions for the United States Special Forces and the CIA, and the "Big Five" weapons for the military, were destroyed in accordance with Nixon's executive order ending the offensive program. The CIA maintained its collection of biologicals well into 1975, when it became the subject of the Senate Church Committee.

The Biological and Toxin Weapons Convention was signed by the US, UK, USSR and other nations in 1972 as a ban on the "development, production and stockpiling of microbes or their poisonous products except in amounts necessary for protective and peaceful research". The convention bound its signatories to a far more stringent set of regulations than had been envisioned by the 1925 Geneva Protocol. By 1996, 137 countries had signed the treaty; however, it is believed that since the signing of the Convention the number of countries capable of producing such weapons has increased.

The Soviet Union continued research and production of offensive biological weapons in a program called Biopreparat, despite having signed the convention. The United States had no solid proof of this program until Dr. Vladimir Pasechnik defected in 1989, and Dr. Kanatjan Alibekov, the first deputy director of Biopreparat, defected in 1992. Pathogens developed by the organization were used in open-air trials. It is known that Vozrozhdeniye Island, located in the Aral Sea, was used as a testing site.[60] In 1971, such testing led to the accidental aerosol release of smallpox over the Aral Sea and a subsequent smallpox epidemic.[61]

During the closing stages of the Rhodesian Bush War, the Rhodesian government resorted to biological warfare. Watercourses at several sites close to the Mozambique border were deliberately contaminated with cholera and with the anticoagulant warfarin sodium (Coumadin), commonly used as the active ingredient in rat poison. Food stocks in the area were contaminated with anthrax spores. These biological attacks had little impact on the fighting capability of ZANLA, but caused considerable distress to the local population. Over 10,000 people contracted anthrax in the period 1978 to 1980, of whom 200 died. The facts about this episode became known during the hearings of the South African Truth and Reconciliation Commission in the late 1990s.[62]

After the 1991 Persian Gulf War, Iraq admitted to the United Nations inspection team to having produced 19,000 liters of concentrated botulinum toxin, of which approximately 10,000 liters were loaded into military weapons; the 19,000 liters have never been fully accounted for. This is approximately three times the amount needed to kill the entire current human population by inhalation,[63] although in practice it would be impossible to distribute it so efficiently, and, unless it is protected from oxygen, it deteriorates in storage.[64]
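As a rough sanity check on that claim, one can compute the per-person lethal quantity it implies. This is only a back-of-envelope sketch; the world population figure (~7.4 billion, roughly the 2016 estimate) is an assumption not stated in the text:

```python
# Back-of-envelope check: if 19,000 L is three times the amount needed to kill
# everyone by inhalation, how much concentrated toxin is that per person?
# The world population figure is an assumption (~7.4 billion, circa 2016).
total_liters = 19_000
population = 7.4e9

per_person_liters = total_liters / (3 * population)
print(f"{per_person_liters * 1e6:.2f} microliters per person")  # ≈ 0.86 µL
```

The result, under a microliter of concentrated toxin per person, illustrates why the claim is about theoretical potency rather than a practical delivery scenario.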

According to the U.S. Congress Office of Technology Assessment, eight countries were generally reported as having undeclared offensive biological warfare programs in 1995: China, Iran, Iraq, Israel, Libya, North Korea, Syria and Taiwan. Five countries had admitted to having had offensive weapon or development programs in the past: the United States, Russia, France, the United Kingdom, and Canada.[65] Offensive BW programs in Iraq were dismantled by Coalition Forces and the UN after the first Gulf War (1990–91), although an Iraqi military BW program was covertly maintained in defiance of international agreements until it was apparently abandoned during 1995 and 1996.[66]

On September 18, 2001 and for a few days thereafter, several letters were received by members of the U.S. Congress and American media outlets which contained intentionally prepared anthrax spores; the attack sickened at least 22 people of whom five died. The identity of the bioterrorist remained unknown until 2008, when an official suspect, who had committed suicide, was named. (See 2001 anthrax attacks.)

Suspicions of an ongoing Iraqi biological warfare program were not substantiated in the wake of the March 2003 invasion of that country. Later that year, however, Muammar Gaddafi was persuaded to terminate Libya's biological warfare program. In 2008, a U.S. Congressional Research Service report considered China, Cuba, Egypt, Iran, Israel, North Korea, Russia, Syria and Taiwan, with varying degrees of certainty, to have some BW capability.[67] By 2011, 165 countries had officially joined the BWC and pledged to disavow biological weapons.[68]

Posted in Germ Warfare | Comments Off on History of biological warfare – Wikipedia, the free …

Caribbean – Wikitravel

Posted: June 22, 2016 at 11:43 pm

Caribbean

The islands of the Caribbean Sea or West Indies are an extensive archipelago in the far west of the Atlantic Ocean, mostly strung between North and South America. They’ve long been known as a resort vacation destination for honeymooners and retirees, but a small movement toward eco-tourism and backpacking has started to open up the Caribbean to more independent travel. With year-round good weather (with the occasional but sometimes serious exception of hurricane season in the late summer and early fall), promotional air fares from Europe and North America, and hundreds of islands to explore, the Caribbean offers something for almost everyone.

The Caribbean islands were first inhabited by the Arawak Indians, then were invaded by a more aggressive tribe, the Caribs. Unfortunately, neither could enjoy their victory forever, although the Arawaks may have had a quiet reign of nearly two millennia. Then the Spanish, Portuguese, Dutch, French, Danish, and British arrived, after which the Carib population steeply declined due to various factors. The islands have known many historic battles and more than a few pirate stories.

Cuba, Dominican Republic, Haiti, Jamaica, Puerto Rico and the Cayman Islands, often grouped as Greater Antilles, are by far the largest countries in the area and the most visited by travellers. In the north is the Lucayan Archipelago, which includes The Bahamas and the Turks and Caicos Islands. The Caribbean also includes the Lesser Antilles, a group of much smaller islands to the east. Further to the west and south, there are various less frequently visited islands that belong to Central and South American countries.

The Lesser Antilles can be further divided into three groups:

These countries are not part of the Greater or Lesser Antilles but are variously close to it, and are commonly associated with the Caribbean (e.g. members of CARICOM, the Caribbean Community).

Numerous companies offer cruises, charters, and boat tours in the Caribbean.

All of the Americas (with 16.3 intentional homicides per 100,000 population) suffer from homicide rates far above those in most of Asia (3.0), Europe (3.0) and Oceania (2.9), and some Caribbean countries have among the highest murder rates in the world.

Most visitors are aware of the high rates of gun crime in the United States Virgin Islands (with 52.6) or Jamaica (39.3), but you might be unaware that even sleepy little Saint Kitts and Nevis (33.6) had a murder rate seven times greater than the scary old mainland USA in 2010!

The well-policed Bahamas rang up a rate of 29.8, Trinidad and Tobago 28.3, Puerto Rico 26.5, Saint Vincent and the Grenadines (whose Latin state motto "Pax et Justitia" means "Peace and Justice") 25.6, the Dominican Republic 22.1, Saint Lucia 21.6 and Dominica 21.1.

To put this in perspective, rates in more placid countries like Japan, Singapore, Indonesia, Hong Kong, Switzerland, Germany, Spain and New Zealand average well under a single person intentionally killed per 100,000 of their population each year.

Those of a nervous disposition when confronted by these kinds of statistics may want to start researching a holiday in Martinique (2.7) or Cuba (4.2), since it's rather uncomfortable to wear stab- or bullet-proof vests in these warm and humid climates, of course, not to mention that they make you look a bit of a prat…
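The "seven times" comparison above is simple arithmetic on the quoted per-100,000 rates. A quick sketch, noting that the 4.8 baseline for the mainland USA in 2010 is an assumption (roughly the UNODC figure) and is not given in the text:

```python
# Intentional-homicide rates per 100,000 population, as quoted above.
rates = {
    "US Virgin Islands": 52.6,
    "Jamaica": 39.3,
    "Saint Kitts and Nevis": 33.6,
    "Bahamas": 29.8,
    "Martinique": 2.7,
    "Cuba": 4.2,
}
usa_2010 = 4.8  # assumed mainland US baseline for 2010, per 100,000

for place, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{place}: {rate / usa_2010:.1f}x the US rate")
```

Under that assumed baseline, Saint Kitts and Nevis works out to exactly 7.0x, matching the "seven times" figure in the text.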


Posted in Caribbean | Comments Off on Caribbean – Wikitravel

Cyberpunk – Wikipedia, the free encyclopedia

Posted: June 19, 2016 at 3:43 am

Cyberpunk is a subgenre of science fiction in a future setting that tends to focus on the society of the proverbial “high tech low life”;[1][2] featuring advanced technological and scientific achievements, such as information technology and cybernetics, juxtaposed with a degree of breakdown or radical change in the social order.[3]

Cyberpunk plots often center on conflict among artificial intelligences and among megacorporations, and tend to be set in a future Earth, rather than in the far-future settings or galactic vistas found in novels such as Isaac Asimov’s Foundation or Frank Herbert’s Dune.[4] The settings are usually post-industrial dystopias but tend to feature extraordinary cultural ferment and the use of technology in ways never anticipated by its original inventors (“the street finds its own uses for things”).[5] Much of the genre’s atmosphere echoes film noir, and written works in the genre often use techniques from detective fiction.[6]

Classic cyberpunk characters were marginalized, alienated loners who lived on the edge of society in generally dystopic futures where daily life was impacted by rapid technological change, a ubiquitous datasphere of computerized information, and invasive modification of the human body.

Primary exponents of the cyberpunk field include William Gibson, Neal Stephenson, Bruce Sterling, Bruce Bethke, Pat Cadigan, Rudy Rucker, John Shirley and Philip K. Dick (author of Do Androids Dream of Electric Sheep?, from which the film Blade Runner was adapted).[8]

Blade Runner can be seen as a quintessential example of the cyberpunk style and theme.[4] Video games, board games, and tabletop role-playing games, such as Cyberpunk 2020 and Shadowrun, often feature storylines that are heavily influenced by cyberpunk writing and movies. Beginning in the early 1990s, some trends in fashion and music were also labeled as cyberpunk. Cyberpunk is also featured prominently in anime and manga:[9] Akira, Gunnm, Ghost in the Shell, Serial Experiments Lain, Dennou Coil, Ergo Proxy and Psycho-Pass being among the most notable.[9][10]

Cyberpunk writers tend to use elements from hardboiled detective fiction, film noir, and postmodernist prose to describe the often nihilistic underground side of an electronic society. The genre’s vision of a troubled future is often called the antithesis of the generally utopian visions of the future popular in the 1940s and 1950s. Gibson defined cyberpunk’s antipathy towards utopian SF in his 1981 short story “The Gernsback Continuum,” which pokes fun at and, to a certain extent, condemns utopian science fiction.[13][14][15]

In some cyberpunk writing, much of the action takes place online, in cyberspace, blurring the border between actual and virtual reality.[16] A typical trope in such work is a direct connection between the human brain and computer systems. Cyberpunk settings are dystopias with corruption, computers and internet connectivity. Giant, multinational corporations have for the most part replaced governments as centers of political, economic, and even military power.

The economic and technological state of Japan in the 1980s influenced cyberpunk literature at the time. Of Japan's influence on the genre, William Gibson said, "Modern Japan simply was cyberpunk."[12] Cyberpunk is often set in urbanized, artificial landscapes, and "city lights, receding" was used by Gibson as one of the genre's first metaphors for cyberspace and virtual reality.[17]

One of the cyberpunk genre’s prototype characters is Case, from Gibson’s Neuromancer.[18] Case is a “console cowboy,” a brilliant hacker who has betrayed his organized criminal partners. Robbed of his talent through a crippling injury inflicted by the vengeful partners, Case unexpectedly receives a once-in-a-lifetime opportunity to be healed by expert medical care but only if he participates in another criminal enterprise with a new crew.

Like Case, many cyberpunk protagonists are manipulated, placed in situations where they have little or no choice, and although they might see things through, they do not necessarily come out any further ahead than they previously were. These anti-heroes ("criminals, outcasts, visionaries, dissenters and misfits"[19]) call to mind the private eye of detective fiction. This emphasis on the misfits and the malcontents is the "punk" component of cyberpunk.

Cyberpunk can be intended to disquiet readers and call them to action. It often expresses a sense of rebellion, suggesting that one could describe it as a type of culture revolution in science fiction. In the words of author and critic David Brin:

…a closer look [at cyberpunk authors] reveals that they nearly always portray future societies in which governments have become wimpy and pathetic … Popular science fiction tales by Gibson, Williams, Cadigan and others do depict Orwellian accumulations of power in the next century, but nearly always clutched in the secretive hands of a wealthy or corporate elite.[20]

Cyberpunk stories have also been seen as fictional forecasts of the evolution of the Internet. The earliest descriptions of a global communications network came long before the World Wide Web entered popular awareness, though not before traditional science-fiction writers such as Arthur C. Clarke and some social commentators such as James Burke began predicting that such networks would eventually form.[21]

The science-fiction editor Gardner Dozois is generally acknowledged as the person who popularized the use of the term "cyberpunk" as a kind of literature,[according to whom?] although Minnesota writer Bruce Bethke coined the term in 1980 for his short story "Cyberpunk," which was published in the November 1983 issue of Amazing Science Fiction Stories.[22] The term was quickly appropriated as a label to be applied to the works of William Gibson, Bruce Sterling, Pat Cadigan and others. Of these, Sterling became the movement's chief ideologue, thanks to his fanzine Cheap Truth. John Shirley wrote articles on Sterling and Rucker's significance.[23] John Brunner's 1975 novel The Shockwave Rider is considered by many[who?] to be the first cyberpunk novel with many of the tropes commonly associated with the genre, some five years before the term was popularized by Dozois.[24]

William Gibson, with his novel Neuromancer (1984), is likely[according to whom?] the most famous writer connected with the term cyberpunk. He emphasized style, a fascination with surfaces, and atmosphere over traditional science-fiction tropes. Regarded as ground-breaking and sometimes as "the archetypal cyberpunk work,"[7] Neuromancer was awarded the Hugo, Nebula, and Philip K. Dick Awards. Count Zero (1986) and Mona Lisa Overdrive (1988) followed Gibson's popular debut novel. According to the Jargon File, "Gibson's near-total ignorance of computers and the present-day hacker culture enabled him to speculate about the role of computers and hackers in the future in ways hackers have since found both irritatingly naïve and tremendously stimulating."[25]

Early on, cyberpunk was hailed as a radical departure from science-fiction standards and a new manifestation of vitality.[26] Shortly thereafter, however, some critics arose to challenge its status as a revolutionary movement. These critics said that the SF New Wave of the 1960s was much more innovative as far as narrative techniques and styles were concerned.[27] Furthermore, while Neuromancer's narrator may have had an unusual "voice" for science fiction, much older examples can be found: Gibson's narrative voice, for example, resembles that of an updated Raymond Chandler, as in his novel The Big Sleep (1939).[26] Others noted that almost all traits claimed to be uniquely cyberpunk could in fact be found in older writers' works, often citing J. G. Ballard, Philip K. Dick, Harlan Ellison, Stanisław Lem, Samuel R. Delany, and even William S. Burroughs.[26] For example, Philip K. Dick's works contain recurring themes of social decay, artificial intelligence, paranoia, and blurred lines between objective and subjective realities, and the influential cyberpunk movie Blade Runner (1982) is based on his book Do Androids Dream of Electric Sheep?. Humans linked to machines are found in Pohl and Kornbluth's Wolfbane (1959) and Roger Zelazny's Creatures of Light and Darkness (1968).[citation needed]

In 1994, scholar Brian Stonehill suggested that Thomas Pynchon’s 1973 novel Gravity’s Rainbow “not only curses but precurses what we now glibly dub cyberspace.”[28] Other important[according to whom?] predecessors include Alfred Bester’s two most celebrated novels, The Demolished Man and The Stars My Destination,[29] as well as Vernor Vinge’s novella True Names.[30]

Science-fiction writer David Brin describes cyberpunk as “the finest free promotion campaign ever waged on behalf of science fiction.” It may not have attracted the “real punks,” but it did ensnare many new readers, and it provided the sort of movement that postmodern literary critics found alluring. Cyberpunk made science fiction more attractive to academics, argues Brin; in addition, it made science fiction more profitable to Hollywood and to the visual arts generally. Although the “self-important rhetoric and whines of persecution” on the part of cyberpunk fans were irritating at worst and humorous at best, Brin declares that the “rebels did shake things up. We owe them a debt.”[31]

Fredric Jameson considers cyberpunk the “supreme literary expression if not of postmodernism, then of late capitalism itself”.[32]

Cyberpunk further inspired many professional writers who were not among the “original” cyberpunks to incorporate cyberpunk ideas into their own works,[citation needed] such as George Alec Effinger’s When Gravity Fails. Wired magazine, created by Louis Rossetto and Jane Metcalfe, mixes new technology, art, literature, and current topics in order to interest today’s cyberpunk fans, which Paula Yoo claims “proves that hardcore hackers, multimedia junkies, cyberpunks and cellular freaks are poised to take over the world.”[33]

The film Blade Runner (1982), adapted from Philip K. Dick's Do Androids Dream of Electric Sheep?, is set in 2019 in a dystopian future in which manufactured beings called replicants are slaves used on space colonies and are legal prey on Earth to various bounty hunters who "retire" (kill) them. Although Blade Runner was largely unsuccessful in its first theatrical release, it found a viewership in the home video market and became a cult film.[34] Since the movie omits the religious and mythical elements of Dick's original novel (e.g. empathy boxes and Wilbur Mercer), it falls more strictly within the cyberpunk genre than the novel does. William Gibson would later reveal that upon first viewing the film, he was surprised at how the look of this film matched his vision when he was working on Neuromancer. The film's tone has since been the staple of many cyberpunk movies, such as The Matrix (1999), which uses a wide variety of cyberpunk elements.

The number of films in the genre or at least using a few genre elements has grown steadily since Blade Runner. Several of Philip K. Dick’s works have been adapted to the silver screen. The films Johnny Mnemonic[35] and New Rose Hotel,[36][37] both based upon short stories by William Gibson, flopped commercially and critically.

In addition, “tech-noir” film as a hybrid genre, means a work of combining neo-noir and science fiction or cyberpunk. It includes many cyberpunk films such as Blade Runner, Burst City,[38]The Terminator, Robocop, 12 Monkeys, The Lawnmower Man, Hackers, Hardware, and Strange Days.

Cyberpunk themes are widely visible in anime and manga. In Japan, where cosplay is popular and not only teenagers display such fashion styles, cyberpunk has been accepted and its influence is widespread. William Gibson's Neuromancer, whose influence dominated the early cyberpunk movement, was set in Chiba, one of Japan's largest industrial areas, although at the time of writing the novel Gibson did not know the location of Chiba and had no idea how well it fit his vision in some ways. Exposure to cyberpunk ideas and fiction in the mid-1980s allowed them to seep into Japanese culture.

Cyberpunk anime and manga draw upon a futuristic vision which has elements in common with western science fiction and therefore have received wide international acceptance outside Japan. “The conceptualization involved in cyberpunk is more of forging ahead, looking at the new global culture. It is a culture that does not exist right now, so the Japanese concept of a cyberpunk future, seems just as valid as a Western one, especially as Western cyberpunk often incorporates many Japanese elements.”[39] William Gibson is now a frequent visitor to Japan, and he came to see that many of his visions of Japan have become a reality:

Modern Japan simply was cyberpunk. The Japanese themselves knew it and delighted in it. I remember my first glimpse of Shibuya, when one of the young Tokyo journalists who had taken me there, his face drenched with the light of a thousand media-suns – all that towering, animated crawl of commercial information – said, "You see? You see? It is Blade Runner town." And it was. It so evidently was.[40]

Cyberpunk has influenced many anime and manga including the ground-breaking Akira, Ghost in the Shell, Ergo Proxy, Battle Angel Alita, Megazone 23, Neo Tokyo, Goku Midnight Eye, Cyber City Oedo 808, Bubblegum Crisis, A.D. Police: Dead End City, Angel Cop, Extra, Blame!, Armitage III, Texhnolyze, Neon Genesis Evangelion and Psycho-Pass.

There are many cyberpunk video games. Popular series include the Metal Gear series, Megami Tensei series, Deus Ex series, Syndicate series, and System Shock and its sequel. Other games, like Blade Runner, Ghost in the Shell, and the Matrix series, are based upon genre movies, or role-playing games (for instance the various Shadowrun games). CD Projekt RED are currently developing a cyberpunk game, Cyberpunk 2077.[41]

Several role-playing games (RPGs) called Cyberpunk exist: Cyberpunk, Cyberpunk 2020 and Cyberpunk v3, by R. Talsorian Games, and GURPS Cyberpunk, published by Steve Jackson Games as a module of the GURPS family of RPGs. Cyberpunk 2020 was designed with the settings of William Gibson’s writings in mind, and to some extent with his approval[citation needed], unlike the approach taken by FASA in producing the transgenre Shadowrun game. Both are set in the near future, in a world where cybernetics are prominent. In addition, Iron Crown Enterprises released an RPG named Cyberspace, which was out of print for several years until recently being re-released in online PDF form.

In 1990, in a convergence of cyberpunk art and reality, the United States Secret Service raided Steve Jackson Games’s headquarters and confiscated all their computers. This was allegedly because the GURPS Cyberpunk sourcebook could be used to perpetrate computer crime. That was, in fact, not the main reason for the raid, but after the event it was too late to correct the public’s impression.[42] Steve Jackson Games later won a lawsuit against the Secret Service, aided by the new Electronic Frontier Foundation. This event has achieved a sort of notoriety, which has extended to the book itself as well. All published editions of GURPS Cyberpunk have a tagline on the front cover, which reads “The book that was seized by the U.S. Secret Service!” Inside, the book provides a summary of the raid and its aftermath.

Cyberpunk has also inspired several tabletop, miniature and board games such as Necromunda by Games Workshop. Netrunner is a collectible card game introduced in 1996, based on the Cyberpunk 2020 role-playing game. Tokyo NOVA, debuting in 1993, is a cyberpunk role-playing game that uses playing cards instead of dice.

“Much of the industrial/dance heavy ‘Cyberpunk’recorded in Billy Idol’s Macintosh-run studiorevolves around Idol’s theme of the common man rising up to fight against a faceless, soulless, corporate world.”

Some musicians and acts have been classified as cyberpunk due to their aesthetic style and musical content. Often dealing with dystopian visions of the future or biomechanical themes, some fit more squarely in the category than others. Bands whose music has been classified as cyberpunk include Psydoll, Front Line Assembly, Clock DVA and Sigue Sigue Sputnik. Some musicians not normally associated with cyberpunk have at times been inspired to create concept albums exploring such themes. Albums such as Gary Numan's Replicas, The Pleasure Principle and Telekon were heavily inspired by the works of Philip K. Dick. Kraftwerk's The Man-Machine and Computer World albums both explored the theme of humanity becoming dependent on technology. Nine Inch Nails' concept album Year Zero also fits into this category. Billy Idol's Cyberpunk drew heavily from cyberpunk literature and the cyberdelic counterculture in its creation. 1. Outside, a concept album by David Bowie fueled by a cyberpunk narrative, was warmly met by critics upon its release in 1995. Many musicians have also taken inspiration from specific cyberpunk works or authors, including Sonic Youth, whose albums Sister and Daydream Nation take influence from the works of Philip K. Dick and William Gibson respectively.

Vaporwave and Synthwave are also influenced by cyberpunk. The former has been interpreted as a dystopian[44] critique of capitalism[45] in the vein of cyberpunk and the latter as a nostalgic retrofuturistic revival of aspects of cyberpunk’s origins.

Furthermore, many dubstep producers, such as Machine Man and Ghosthack, have found inspiration in cyberpunk themes for their works.

Some Neo-Futurism artworks and cityscapes have been influenced by cyberpunk, such as the Sony Center in the Potsdamer Platz public square of Berlin, Germany,[46] Hong Kong, and Shanghai.[47]

Several subcultures have been inspired by cyberpunk fiction. These include the cyberdelic counterculture of the late 1980s and early 1990s. Cyberdelic, whose adherents referred to themselves as "cyberpunks", attempted to blend the psychedelic art and drug movement with the technology of cyberculture. Early adherents included Timothy Leary, Mark Frauenfelder and R. U. Sirius. The movement largely faded following the dot-com bubble implosion of 2000.

Cybergoth is a fashion and dance subculture which draws its inspiration from cyberpunk fiction, as well as rave and Gothic subcultures. In addition, a distinct cyberpunk fashion of its own has emerged in recent years[when?] which rejects the raver and goth influences of cybergoth, and draws inspiration from urban street fashion, “post apocalypse”, functional clothing, high tech sports wear, tactical uniform and multifunction. This fashion goes by names like “tech wear”, “goth ninja” or “tech ninja”. Important designers in this type of fashion[according to whom?] are ACRONYM, Demobaza, Boris Bidjan Saberi, Rick Owens and Alexander Wang.

The Kowloon Walled City in Hong Kong (demolished in 1994) is often referenced as the model cyberpunk/dystopian slum: its poor living conditions at the time, coupled with the city's political, physical, and economic isolation, have led many in academia to be fascinated by the ingenuity of its emergence.[48]

As a wider variety of writers began to work with cyberpunk concepts, new subgenres of science fiction emerged, some of which could be considered as playing off the cyberpunk label, others which could be considered as legitimate explorations into newer territory. These focused on technology and its social effects in different ways. One prominent subgenre is “steampunk,” which is set in an alternate history Victorian era that combines anachronistic technology with cyberpunk’s bleak film noir world view. The term was originally coined around 1987 as a joke to describe some of the novels of Tim Powers, James P. Blaylock, and K.W. Jeter, but by the time Gibson and Sterling entered the subgenre with their collaborative novel The Difference Engine the term was being used earnestly as well.[49]

Another subgenre is "biopunk" (cyberpunk themes dominated by biotechnology) from the early 1990s, a derivative style building on biotechnology rather than information technology. In these stories, people are changed in some way not by mechanical means, but by genetic manipulation. Paul Di Filippo is seen as the most prominent biopunk writer, notably for his half-serious "ribofunk" label. Bruce Sterling's Shaper/Mechanist cycle is also seen as a major influence. In addition, some people consider works such as Neal Stephenson's The Diamond Age to be postcyberpunk.

Cyberpunk works have been described as well-situated within postmodern literature.[50]

More:

Cyberpunk – Wikipedia, the free encyclopedia

Posted in Cyberpunk | Comments Off on Cyberpunk – Wikipedia, the free encyclopedia

Seychelles – Republic of Seychelles – Country Profile …

Posted: June 16, 2016 at 5:53 pm

Official Name: Seychelles Creole: Repiblik Sesel English: Republic of Seychelles French: République des Seychelles

ISO Country Code: SC

Actual Time: Fri-June-17 01:53 Time Zone: SCT – Seychelles Times Local Time = UTC +4h

Country Calling Code: +248

Capital City: Victoria (pop. 24,500)

Government: Type: Multiple-party republic. Independence: June 29, 1976 (from UK).

Geography: Location: Eastern Africa; a group of about 115 islands scattered over 1.3 million square kilometers of the western Indian Ocean, northeast of Madagascar. Area: 455 sq km (176 sq mi). Major Islands: Mahe, Praslin and La Digue. Terrain: About half of the islands are of granitic origin, with narrow coastal strips and central ranges of hills rising to more than 900 m; highest point: Morne Seychellois at 905 m. The other half are coral atolls, many uninhabitable.

Climate: Tropical marine; humid; cooler season during southeast monsoon (late May to September); warmer season during northwest monsoon (March to May).

People: Nationality: Noun and adjective: Seychellois. Population: 91,000 (2010 census). Ethnic groups: Creole (European, Asian, and African). Religions: Catholic 86.6%, Anglican 6.8%, other Christian 2.5%, other 4.1%. Languages: Official languages are Seychelles Creole (kreol seselwa), English, and French. Literacy: between 60% and 80%.

Natural resources: Fish, copra, cinnamon trees.

Agriculture products: Coconuts, cinnamon, vanilla, sweet potatoes, cassava (tapioca), bananas; broiler chickens; tuna fish.

Industries: Fishing; tourism; processing of coconuts and vanilla, coir (coconut fiber) rope, boat building, printing, furniture; beverages.

Exports – commodities: canned tuna, frozen fish, cinnamon bark, copra, petroleum products (reexports)

Exports partners: France 27.7%, UK 17.6%, Japan 15.2%, Italy 10.6% (2012)

Imports – partners: Saudi Arabia 24%, Spain 12.1%, France 5.9% (2012)

Currency: Seychelles Rupee (SCR)

Continued here:

Seychelles – Republic of Seychelles – Country Profile …

Posted in Seychelles | Comments Off on Seychelles – Republic of Seychelles – Country Profile …

Cryptocurrency a Response to Financial Crisis, Says CEO

Posted: at 5:41 pm


Read the rest here:
Cryptocurrency a Response to Financial Crisis, Says CEO

Posted in Cryptocurrency | Comments Off on Cryptocurrency a Response to Financial Crisis, Says CEO

Space exploration – Wikipedia, the free encyclopedia

Posted: June 10, 2016 at 12:45 pm

Space exploration is the ongoing discovery and exploration of celestial structures in outer space by means of continuously evolving and growing space technology. While the study of space is carried out mainly by astronomers with telescopes, the physical exploration of space is conducted both by unmanned robotic probes and human spaceflight.

While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the early 20th century that allowed physical space exploration to become a reality. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.[1]

Space exploration has often been used as a proxy competition for geopolitical rivalries such as the Cold War. The early era of space exploration was driven by a “Space Race” between the Soviet Union and the United States. The launch of the first human-made object to orbit Earth, the Soviet Union’s Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet space program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard Vostok 1) in 1961, the first spacewalk (by Aleksei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station (Salyut 1) in 1971.

After the first 20 years of exploration, focus shifted from one-off flights to renewable hardware, such as the Space Shuttle program, and from competition to cooperation as with the International Space Station (ISS).

With the substantial completion of the ISS[2] following STS-133 in March 2011, plans for space exploration by the USA remain in flux. Constellation, a Bush Administration program for a return to the Moon by 2020,[3] was judged inadequately funded and unrealistic by an expert review panel reporting in 2009.[4] The Obama Administration proposed a revision of Constellation in 2010 to focus on the development of the capability for crewed missions beyond low Earth orbit (LEO), envisioning extending the operation of the ISS beyond 2020, transferring the development of launch vehicles for human crews from NASA to the private sector, and developing technology to enable missions beyond LEO, such as Earth-Moon L1, the Moon, Earth-Sun L2, near-Earth asteroids, and Phobos or Mars orbit.[5]

In the 2000s, the People's Republic of China initiated a successful manned spaceflight program, while the European Union, Japan, and India have also planned future manned space missions. China, Russia, Japan, and India have advocated manned missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars.

From the 1990s onwards, private interests began promoting space tourism and then private space exploration of the Moon (see Google Lunar X Prize).

The highest known projectiles prior to the rockets of the 1940s were the shells of the Paris Gun, a type of German long-range siege gun, which reached at least 40 kilometers altitude during World War One.[6] Steps towards putting a human-made object into space were taken by German scientists during World War II while testing the V-2 rocket, which became the first human-made object in space on 3 October 1942 with the launching of the A-4. After the war, the U.S. used German scientists and their captured rockets in programs for both military and civilian research. The first scientific exploration from space was the cosmic radiation experiment launched by the U.S. on a V-2 rocket on 10 May 1946.[7] The first images of Earth taken from space followed the same year[8][9] while the first animal experiment saw fruit flies lifted into space in 1947, both also on modified V-2s launched by Americans. Starting in 1947, the Soviets, also with the help of German teams, launched sub-orbital V-2 rockets and their own variant, the R-1, including radiation and animal experiments on some flights. These suborbital experiments only allowed a very short time in space which limited their usefulness.

The first successful orbital launch was of the Soviet unmanned Sputnik 1 ("Satellite 1") mission on 4 October 1957. The satellite weighed about 83 kg (183 lb), and is believed to have orbited Earth at a height of about 250 km (160 mi). It had two radio transmitters (20 and 40 MHz), which emitted "beeps" that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data was encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.

The second satellite, Sputnik 2, launched by the USSR in November 1957, carried the dog Laika, the first animal in orbit.

This success led to an escalation of the American space program, which unsuccessfully attempted to launch a Vanguard satellite into orbit two months later. On 31 January 1958, the U.S. successfully orbited Explorer 1 on a Juno rocket.

The first successful human spaceflight was Vostok 1 (“East 1”), carrying 27-year-old Russian cosmonaut Yuri Gagarin on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin’s flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.

The U.S. first launched a person into space within a month of Vostok 1 with Alan Shepard’s suborbital flight in Mercury-Redstone 3. Orbital flight was achieved by the United States when John Glenn’s Mercury-Atlas 6 orbited Earth on 20 February 1962.

Valentina Tereshkova, the first woman in space, orbited Earth 48 times aboard Vostok 6 on 16 June 1963.

China first launched a person into space 42 years after the launch of Vostok 1, on 15 October 2003, with the flight of Yang Liwei aboard the Shenzhou 5 (Spaceboat 5) spacecraft.

The first artificial object to reach another celestial body was Luna 2 in 1959.[10] The first automatic landing on another celestial body was performed by Luna 9[11] in 1966. Luna 10 became the first artificial satellite of the Moon.[12]

The first manned landing on another celestial body was performed by Apollo 11 on 20 July 1969.

The first successful interplanetary flyby was the 1962 Mariner 2 flyby of Venus (closest approach 34,773 kilometers). The other planets were first flown by as follows: Mars in 1965 by Mariner 4, Jupiter in 1973 by Pioneer 10, Mercury in 1974 by Mariner 10, Saturn in 1979 by Pioneer 11, Uranus in 1986 by Voyager 2, and Neptune in 1989 by Voyager 2. In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively.

The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7 on Venus, which returned data to Earth for 23 minutes. In 1971 the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Later, much longer-duration surface missions were achieved, including over 6 years of Mars surface operation by Viking 1 from 1975 to 1982 and over 2 hours of transmission from the surface of Venus by Venera 13 in 1982, the longest-ever Soviet planetary surface mission.

The dream of stepping into the outer reaches of Earth's atmosphere was driven by the fiction of Jules Verne[13][14][15] and H. G. Wells,[16] and rocket technology was developed to try to realise this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets, as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961, when the Soviet Union launched the first man into space, the United States declared itself to be in a "Space Race" with the Soviets.

Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.

Wernher von Braun was the lead rocket engineer for Nazi Germany’s World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the USA to work on U.S. rocket development (“Operation Paperclip”). He acquired American citizenship and led the team that developed and launched Explorer 1, the first American satellite. Von Braun later led the team at NASA’s Marshall Space Flight Center which developed the Saturn V moon rocket.

Initially the race for space was often led by Sergei Korolyov, whose legacy includes both the R7 and Soyuz, which remain in service to this day. Korolyov was the mastermind behind the first satellite, the first man (and first woman) in orbit, and the first spacewalk. Until his death his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.

Kerim Kerimov was one of the founders of the Soviet space program and was one of the lead architects behind the first human spaceflight (Vostok 1) alongside Sergey Korolyov. After Korolyov’s death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their precursors in 1967, the Cosmos 186 and Cosmos 188.[17][18]

Although the Sun will probably not be physically explored at all, the study of the Sun has nevertheless been a major focus of space exploration. Being above the atmosphere and, in particular, above Earth's magnetic field gives access to the solar wind and to infrared and ultraviolet radiation that cannot reach Earth's surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun have been launched, and still others have had solar observation as a secondary objective. Solar Probe Plus, planned for a 2018 launch, will approach the Sun to within about one-eighth the orbit of Mercury.

Mercury remains the least explored of the inner planets. As of May 2013, the Mariner 10 and MESSENGER missions have been the only missions that have made close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b).

A third mission to Mercury, BepiColombo, scheduled to arrive in 2020, is to include two probes. BepiColombo is a joint mission between Japan and the European Space Agency. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10's flybys.

Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v to reach Mercury and its proximity to the Sun, it is difficult to explore and orbits around it are rather unstable.

Venus was the first target of interplanetary flyby and lander missions and, despite one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet in the Solar System. The first successful Venus flyby was the American Mariner 2 spacecraft, which flew past Venus in 1962. Mariner 2 has been followed by several other flybys by multiple space agencies often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967 Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9 some ten successful orbiter missions have been sent to Venus, including later missions which were able to map the surface of Venus using radar to pierce the obscuring atmosphere.

Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.

For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States' first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth's magnetic fields, which currently renders construction of habitable space stations above 1,000 km impractical. Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space-based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth's atmosphere, and satellites have allowed for the discovery of archeological sites or geological formations that were difficult or impossible to otherwise identify.

The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.

In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon’s surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet unmanned missions culminated in the Lunokhod program in the early 1970s, which included the first unmanned rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Unmanned exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.

Manned exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Manned exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the most recent human visit there, and the next, Exploration Mission 2, is due to orbit the Moon in 2021. Robotic missions are still pursued vigorously.

The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.

The exploration of Mars has come at a considerable financial cost, with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, and some failing before they even began. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of the "Great Galactic Ghoul",[19] which subsists on a diet of Mars probes. This phenomenon is also informally known as the "Mars Curse".[20] In contrast to the overall high failure rate in the exploration of Mars, India became the first country to succeed on its maiden attempt. India's Mars Orbiter Mission (MOM)[21][22][23] is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of ₹450 crore (US$73 million).[24][25]

The Russian space mission Fobos-Grunt, launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit.[26] It was to explore Phobos and Martian orbit, and to study whether the moons of Mars, or at least Phobos, could be a "trans-shipment point" for spaceships travelling to Mars.[27]

The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been “flybys”, in which detailed observations are taken without the probe landing or entering orbit; such as in Pioneer and Voyager programs. The Galileo spacecraft is the only one to have orbited the planet. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is nearly impossible.

Reaching Jupiter from Earth requires a delta-v of 9.2 km/s,[28] which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit.[29] Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.[28]
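A rough sense of where such delta-v figures come from can be had from the vis-viva equation. The sketch below is a simplification (it ignores Earth escape, gravity and drag losses, and gravity assists, and uses approximate constants), so its numbers only approximate the quoted ones:

```python
import math

MU_SUN = 1.327e20          # m^3/s^2, Sun's gravitational parameter (approx.)
MU_EARTH = 3.986e14        # m^3/s^2, Earth's gravitational parameter (approx.)
R_EARTH_ORBIT = 1.496e11   # m, mean Earth-Sun distance (~1 AU)
R_JUPITER_ORBIT = 7.785e11 # m, mean Jupiter-Sun distance (~5.2 AU)

def vis_viva(mu, r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

# Heliocentric Hohmann transfer: a single burn at Earth's orbital distance
# onto an ellipse whose aphelion touches Jupiter's orbit.
a_transfer = (R_EARTH_ORBIT + R_JUPITER_ORBIT) / 2
v_earth = vis_viva(MU_SUN, R_EARTH_ORBIT, R_EARTH_ORBIT)  # circular speed
v_depart = vis_viva(MU_SUN, R_EARTH_ORBIT, a_transfer)
dv_heliocentric = (v_depart - v_earth) / 1000.0  # km/s, roughly 8.8

# Circular speed in a 200 km low Earth orbit. This omits gravity and drag
# losses during ascent, which is why the practical launch figure is higher.
r_leo = 6.371e6 + 200e3
v_leo = vis_viva(MU_EARTH, r_leo, r_leo) / 1000.0  # km/s, roughly 7.8

print(f"Hohmann departure burn toward Jupiter: {dv_heliocentric:.1f} km/s")
print(f"LEO circular speed: {v_leo:.1f} km/s")
```

The heliocentric burn comes out near 8.8 km/s and the LEO speed near 7.8 km/s, the same order as the 9.2 and 9.7 km/s quoted above once the neglected losses are added back.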

Jupiter has 67 known moons, about many of which relatively little is known.

Saturn has been explored only through unmanned spacecraft launched by NASA, including one mission (CassiniHuygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, in 1982 by Voyager 2 and an orbital mission by the Cassini spacecraft, which entered orbit in 2004 and is expected to continue its mission well into 2017.

Saturn has at least 62 known moons, although the exact number is debatable since Saturn’s rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan. Titan holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. As a result of the deployment from the Cassini spacecraft of the Huygens probe and its successful landing on Titan, Titan also holds the distinction of being the only object in the outer Solar System that has been explored with a lander.

The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet's unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and the moons of Uranus, including all five of the previously known moons, while discovering an additional ten previously unknown moons.

Images of Uranus proved to have a very uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be completely unique, profoundly affected by the planet's unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the moons of Uranus, including evidence that Miranda had been unusually geologically active.

The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, the sole visit to the system as of 2014. The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.

Although the extremely uniform appearance of Uranus during Voyager 2's visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclone storm system rivaled in size only by Jupiter's Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h.[30] Voyager 2 also examined Neptune's ring and moon system, finding faint complete rings as well as partial ring "arcs". In addition to examining Neptune's three previously known moons, Voyager 2 also discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune's largest moon, Triton, is a captured Kuiper belt object.[31]

The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit very difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn’s moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.[32]

Pluto continues to be of great interest, despite its reclassification as the lead and nearest member of a new and growing class of distant icy bodies of intermediate size (and also the first member of the important subclass, defined by orbit and known as “plutinos”). After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003.[33] New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and will continue for at least a month after the encounter.

Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery. Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo’s planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA’s Dawn spacecraft, launched in 2007.

Although many comets have been studied from Earth, sometimes with centuries' worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet fly-by (21P/Giacobini-Zinner) before joining the Halley Armada studying the famous comet. The Deep Impact probe smashed into 9P/Tempel to learn more about its structure and composition, and the Stardust mission returned samples of another comet's tail. The Philae lander successfully landed on Comet Churyumov-Gerasimenko in 2014 as part of the broader Rosetta mission.

Hayabusa was an unmanned spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid’s shape, spin, topography, colour, composition, density, and history. In November 2005, it landed on the asteroid to collect samples. The spacecraft returned to Earth on 13 June 2010.

Deep space exploration is the exploration of regions of outer space far from Earth, whether within or beyond the Solar System. It is the branch of astronomy, astronautics and space technology concerned with exploring these distant regions.[34] Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.

Some of the best candidates for future deep space engine technologies include anti-matter, nuclear power and beamed propulsion.[35] The latter, beamed propulsion, appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.[36]

In the 2000s, several plans for space exploration were announced; both government entities and the private sector have space exploration objectives. China has announced plans to have a 60-ton multi-module space station in orbit by 2020.

The NASA Authorization Act of 2010 provided a re-prioritized list of objectives for the American space program, as well as funding for the first priorities. NASA proposes to move forward with the development of the Space Launch System (SLS), which will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment, and science experiments to Earth's orbit and destinations beyond. Additionally, the SLS will serve as a backup for commercial and international partner transportation services to the International Space Station. The SLS rocket will incorporate technological investments from the Space Shuttle program and the Constellation program in order to take advantage of proven hardware and reduce development and operations costs. The first developmental flight is targeted for the end of 2017.[37]

The use of highly automated systems for space missions has become a desirable goal for space agencies around the world. Such systems are believed to yield benefits such as lower cost, reduced human oversight, and the ability to explore deeper into space, where long communication delays restrict direct control by human operators.[38]

Autonomy is defined by three requirements:[38]

Autonomous technologies would be able to perform beyond predetermined actions: they would analyze the possible states and events around them and come up with a safe response. Such technologies can also reduce launch cost and ground involvement, and improve performance. An autonomous spacecraft can respond quickly to an unforeseen event, which matters especially in deep space exploration, where communication back to Earth takes too long.[38]

NASA began its Autonomous Sciencecraft Experiment (ASE) on Earth Observing-1 (EO-1), NASA's first satellite in the New Millennium Program Earth-observing series, launched on 21 November 2000. The autonomy of ASE is capable of on-board science analysis, replanning, robust execution, and, added later, model-based diagnosis. Images obtained by EO-1 are analyzed on board and downlinked when a change or an interesting event occurs. The ASE software has successfully provided over 10,000 science images.[38]
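The "analyze on board, downlink only what matters" loop described above can be sketched in a few lines. This is a hypothetical simplification for illustration only, not the actual EO-1 flight software; the change-detection rule, threshold, and function names are all assumptions.

```python
# Hypothetical sketch of an on-board "detect and downlink" loop in the
# spirit of ASE. Scenes are modeled as flat lists of pixel values; a scene
# is queued for downlink only when enough pixels changed since the last one.

def is_interesting(previous, current, threshold=0.2):
    """Flag a scene when the fraction of changed pixels exceeds a threshold."""
    changed = sum(1 for a, b in zip(previous, current) if a != b)
    return changed / len(current) > threshold

def onboard_pipeline(scenes):
    """Analyze scenes on board; return only the ones worth downlinking."""
    downlink_queue = []
    previous = scenes[0]
    for current in scenes[1:]:
        if is_interesting(previous, current):
            downlink_queue.append(current)
        previous = current
    return downlink_queue

# Three 10-pixel scenes: only the first transition changes enough pixels,
# so only the middle scene is queued for downlink.
scenes = [[0] * 10, [1] * 5 + [0] * 5, [1] * 5 + [0] * 4 + [1]]
queued = onboard_pipeline(scenes)
```

The payoff of this design is in the return value: the ground only ever sees `downlink_queue`, not every scene, which is exactly the bandwidth saving the paragraph describes.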

The research conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of the NASA programs have often shown ongoing economic benefits (such as NASA spin-offs), generating many times the revenue of the cost of the program.[39] It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars worth of minerals and metals; such expeditions could generate substantial revenue.[40] It has also been argued that space exploration programs help inspire youth to study science and engineering.[41]

Another claim is that space exploration is a necessity for mankind and that staying on Earth will lead to extinction. Among the cited risks are depletion of natural resources, comet impacts, nuclear war, and worldwide epidemics. Stephen Hawking, the renowned British theoretical physicist, said: “I don’t think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I’m an optimist. We will reach out to the stars.”[42]

NASA has produced a series of public service announcement videos supporting the concept of space exploration.[43]

Overall, the public remains largely supportive of both manned and unmanned space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is “a good investment”, compared to 21% who did not.[44]

Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight.[45] He argued that humanity’s choice is essentially between expansion off Earth into space, versus cultural (and eventually biological) stagnation and death.

Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.

Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft, both when unpropelled and when under propulsion, is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.
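A basic astrodynamics result makes the launch problem concrete: for a circular orbit, the required speed is v = sqrt(mu / r), where mu is the central body's gravitational parameter and r the orbital radius. A minimal sketch, using standard values for Earth and an ISS-like 400 km altitude as an illustrative example:

```python
import math

# Standard constants for Earth.
MU_EARTH = 3.986004418e14   # gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6           # mean radius, m

def circular_orbital_speed(altitude_m):
    """Speed (m/s) of a circular orbit at a given altitude above Earth."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

v_leo = circular_orbital_speed(400e3)   # roughly 7.7 km/s
```

The answer, roughly 7.7 km/s for low Earth orbit, is why rockets spend most of their effort gaining horizontal speed rather than altitude.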

Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.

Current examples of the commercial use of space include satellite navigation systems, satellite television and satellite radio. Space tourism is the recent phenomenon of space travel by individuals for the purpose of personal pleasure.

Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology.[46] It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek ἔξω, exō, “outside”).[47][48][49] The term “xenobiology” has been used as well, but this is technically incorrect, because that term means “biology of the foreigners”.[50] Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth.[51] In the Solar System some of the prime locations for current or past astrobiology are Enceladus, Europa, Mars, and Titan.

Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.

To date, the longest continuous human presence in space is aboard the International Space Station, which has been continuously occupied for 15 years, 221 days. Valeri Polyakov’s record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure.
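The 15-year, 221-day figure can be checked with simple date arithmetic, assuming the conventional start of continuous ISS occupation (the Expedition 1 arrival on 2 November 2000) and an illustrative mid-June 2016 snapshot date; both dates are assumptions for this sketch.

```python
from datetime import date

# Assumed dates: continuous ISS crewing is conventionally counted from the
# Expedition 1 arrival; the snapshot is an illustrative "as of" date shortly
# before this page's July 2016 posting.
expedition_1_arrival = date(2000, 11, 2)
snapshot = date(2016, 6, 10)

# Count full years elapsed, then the days past the last anniversary.
full_years = snapshot.year - expedition_1_arrival.year
anniversary = expedition_1_arrival.replace(year=expedition_1_arrival.year + full_years)
if anniversary > snapshot:
    full_years -= 1
    anniversary = expedition_1_arrival.replace(year=expedition_1_arrival.year + full_years)
extra_days = (snapshot - anniversary).days
```

With these dates the computation yields 15 full years plus 221 days, consistent with the figure in the text.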

Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a “stepping stone” to the other planets, especially Mars. At the end of 2006 NASA announced they were planning to build a permanent Moon base with continual presence by 2024.[53]

Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, the inability or difficulty in establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which had been ratified, as of 2012, by all spacefaring nations.[54]


See the article here:

Space exploration – Wikipedia, the free encyclopedia

Posted in Space Exploration | Comments Off on Space exploration – Wikipedia, the free encyclopedia