Tag Archives: japan

Space exploration – Wikipedia

Posted: November 29, 2016 at 1:30 am

Space exploration is the ongoing discovery and exploration of celestial structures in outer space by means of continuously evolving and growing space technology. While the study of space is carried out mainly by astronomers with telescopes, the physical exploration of space is conducted both by unmanned robotic probes and human spaceflight.

While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the early 20th century that allowed physical space exploration to become a reality. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.[1]

Space exploration has often been used as a proxy competition for geopolitical rivalries such as the Cold War. The early era of space exploration was driven by a “Space Race” between the Soviet Union and the United States. The launch of the first human-made object to orbit Earth, the Soviet Union’s Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet space program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard Vostok 1) in 1961, the first spacewalk (by Aleksei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station (Salyut 1) in 1971.

After the first 20 years of exploration, focus shifted from one-off flights to renewable hardware, such as the Space Shuttle program, and from competition to cooperation as with the International Space Station (ISS).

With the substantial completion of the ISS[2] following STS-133 in March 2011, plans for space exploration by the USA remain in flux. Constellation, a Bush Administration program for a return to the Moon by 2020[3] was judged inadequately funded and unrealistic by an expert review panel reporting in 2009.[4] The Obama Administration proposed a revision of Constellation in 2010 to focus on the development of the capability for crewed missions beyond low Earth orbit (LEO), envisioning extending the operation of the ISS beyond 2020, transferring the development of launch vehicles for human crews from NASA to the private sector, and developing technology to enable missions to beyond LEO, such as Earth–Moon L1, the Moon, Earth–Sun L2, near-Earth asteroids, and Phobos or Mars orbit.[5]

In the 2000s, the People’s Republic of China initiated a successful manned spaceflight program, while the European Union, Japan, and India have also planned future manned space missions. China, Russia, Japan, and India have advocated manned missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars during the 20th/21st century.

From the 1990s onwards, private interests began promoting space tourism and then private space exploration of the Moon (see Google Lunar X Prize).

The highest known projectiles prior to the rockets of the 1940s were the shells of the Paris Gun, a type of German long-range siege gun, which reached at least 40 kilometers altitude during World War One.[6] Steps towards putting a human-made object into space were taken by German scientists during World War II while testing the V-2 rocket, which became the first human-made object in space on 3 October 1942 with the launching of the A-4. After the war, the U.S. used German scientists and their captured rockets in programs for both military and civilian research. The first scientific exploration from space was the cosmic radiation experiment launched by the U.S. on a V-2 rocket on 10 May 1946.[7] The first images of Earth taken from space followed the same year[8][9] while the first animal experiment saw fruit flies lifted into space in 1947, both also on modified V-2s launched by Americans. Starting in 1947, the Soviets, also with the help of German teams, launched sub-orbital V-2 rockets and their own variant, the R-1, including radiation and animal experiments on some flights. These suborbital experiments only allowed a very short time in space which limited their usefulness.

The first successful orbital launch was of the Soviet unmanned Sputnik 1 (“Satellite 1”) mission on 4 October 1957. The satellite weighed about 83 kg (183 lb), and is believed to have orbited Earth at a height of about 250 km (160 mi). It had two radio transmitters (20 and 40 MHz), which emitted “beeps” that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data was encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.

The second satellite to orbit Earth was Sputnik 2. Launched by the USSR on 3 November 1957, it carried the dog Laika, the first animal in orbit.

This success led to an escalation of the American space program, which unsuccessfully attempted to launch a Vanguard satellite into orbit two months later. On 31 January 1958, the U.S. successfully orbited Explorer 1 on a Juno rocket.

The first successful human spaceflight was Vostok 1 (“East 1”), carrying 27-year-old Russian cosmonaut Yuri Gagarin on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin’s flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.

The U.S. first launched a person into space within a month of Vostok 1 with Alan Shepard’s suborbital flight in Mercury-Redstone 3 on 5 May 1961. Orbital flight was achieved by the United States when John Glenn’s Mercury-Atlas 6 orbited Earth on 20 February 1962.

Valentina Tereshkova, the first woman in space, orbited Earth 48 times aboard Vostok 6 on 16 June 1963.

China first launched a person into space 42 years after the launch of Vostok 1, on 15 October 2003, with the flight of Yang Liwei aboard the Shenzhou 5 (Spaceboat 5) spacecraft.

The first artificial object to reach another celestial body was Luna 2 in 1959.[10] The first automatic landing on another celestial body was performed by Luna 9[11] in 1966. Luna 10 became the first artificial satellite of the Moon.[12]

The first manned landing on another celestial body was performed by Apollo 11 on 20 July 1969.

The first successful interplanetary flyby was the 1962 Mariner 2 flyby of Venus (closest approach 34,773 kilometers). The other planets were first flown by in 1965 for Mars by Mariner 4, 1973 for Jupiter by Pioneer 10, 1974 for Mercury by Mariner 10, 1979 for Saturn by Pioneer 11, 1986 for Uranus by Voyager 2, 1989 for Neptune by Voyager 2. In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively.

The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7 on Venus which returned data to Earth for 23 minutes. In 1975 the Venera 9 was the first to return images from the surface of another planet. In 1971 the Mars 3 mission achieved the first soft landing on Mars returning data for almost 20 seconds. Later much longer duration surface missions were achieved, including over 6 years of Mars surface operation by Viking 1 from 1975 to 1982 and over 2 hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission.

The dream of stepping into the outer reaches of Earth’s atmosphere was driven by the fiction of Jules Verne[13][14][15] and H. G. Wells,[16] and rocket technology was developed to try to realize this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets, as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961 when the Soviet Union launched the first man into space, the United States declared itself to be in a “Space Race” with the Soviets.

Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.

Wernher von Braun was the lead rocket engineer for Nazi Germany’s World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the USA to work on U.S. rocket development (“Operation Paperclip”). He acquired American citizenship and led the team that developed and launched Explorer 1, the first American satellite. Von Braun later led the team at NASA’s Marshall Space Flight Center which developed the Saturn V moon rocket.

Initially the race for space was often led by Sergei Korolyov, whose legacy includes both the R-7 and Soyuz, which remain in service to this day. Korolyov was the mastermind behind the first satellite, the first man (and first woman) in orbit, and the first spacewalk. Until his death his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.

Kerim Kerimov was one of the founders of the Soviet space program and was one of the lead architects behind the first human spaceflight (Vostok 1) alongside Sergey Korolyov. After Korolyov’s death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their precursors in 1967, the Cosmos 186 and Cosmos 188.[17][18]

Although the Sun will probably not be physically explored at all, the study of the Sun has nevertheless been a major focus of space exploration. Being above the atmosphere and, in particular, outside Earth’s magnetic field gives access to the solar wind and to the infrared and ultraviolet radiation that cannot reach Earth’s surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun have been launched and still others have had solar observation as a secondary objective. Solar Probe Plus, planned for a 2018 launch, will approach the Sun to within 1/8th the orbit of Mercury.

Mercury remains the least explored of the inner planets. As of May 2013, the Mariner 10 and MESSENGER missions have been the only missions that have made close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b).

A third mission to Mercury, BepiColombo, which is scheduled to arrive in 2020, is to include two probes. BepiColombo is a joint mission between Japan and the European Space Agency. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10’s flybys.

Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v to reach Mercury and its proximity to the Sun, it is difficult to explore and orbits around it are rather unstable.
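As a rough illustration of the delta-v idea, the sketch below (in Python, using an idealized Hohmann transfer between circular, coplanar heliocentric orbits and assumed physical constants) compares the cost of reaching Mercury’s orbit with that of reaching Mars’s orbit. Real mission budgets also include launch, capture, plane changes, and gravity assists, so the numbers are only indicative.

```python
import math

MU_SUN = 1.32712440018e20  # Sun's standard gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def hohmann_delta_v(r1: float, r2: float, mu: float = MU_SUN) -> float:
    """Total delta-v (m/s) for an idealized Hohmann transfer between circular orbits r1 and r2."""
    a_transfer = (r1 + r2) / 2.0                           # semi-major axis of the transfer ellipse
    v_circ1 = math.sqrt(mu / r1)                           # circular orbital speed at r1
    v_circ2 = math.sqrt(mu / r2)                           # circular orbital speed at r2
    v_t1 = math.sqrt(mu * (2.0 / r1 - 1.0 / a_transfer))   # transfer-orbit speed at r1 (vis-viva)
    v_t2 = math.sqrt(mu * (2.0 / r2 - 1.0 / a_transfer))   # transfer-orbit speed at r2 (vis-viva)
    return abs(v_t1 - v_circ1) + abs(v_circ2 - v_t2)

# Heliocentric comparison: reaching Mercury's orbit takes far more delta-v than
# reaching Mars's orbit, which is part of why Mercury is so difficult to explore.
print(f"Earth orbit -> Mercury orbit: {hohmann_delta_v(1.000 * AU, 0.387 * AU) / 1000:.1f} km/s")
print(f"Earth orbit -> Mars orbit:    {hohmann_delta_v(1.000 * AU, 1.524 * AU) / 1000:.1f} km/s")
```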

Venus was the first target of interplanetary flyby and lander missions and, despite one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet in the Solar System. The first successful Venus flyby was the American Mariner 2 spacecraft, which flew past Venus in 1962. Mariner 2 has been followed by several other flybys by multiple space agencies often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967 Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9 some ten successful orbiter missions have been sent to Venus, including later missions which were able to map the surface of Venus using radar to pierce the obscuring atmosphere.

Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.

For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States’ first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth’s magnetic fields, which currently renders construction of habitable space stations above 1,000 km impractical.

Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth’s atmosphere, and satellites have allowed for the discovery of archeological sites or geological formations that were difficult or impossible to otherwise identify.

The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.

In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon’s surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet unmanned missions culminated in the Lunokhod program in the early 1970s, which included the first unmanned rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Unmanned exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.

Manned exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Manned exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the most recent human visit there, and the next, Exploration Mission 2, is due to orbit the Moon in 2021. Robotic missions are still pursued vigorously.

The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.

The exploration of Mars has come at a considerable financial cost, with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, and some failing before they even began. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of The Great Galactic Ghoul,[19] which subsists on a diet of Mars probes. This phenomenon is also informally known as the Mars Curse.[20] In contrast to the overall high failure rate in the exploration of Mars, India has become the first country to achieve success on its maiden attempt. India’s Mars Orbiter Mission (MOM)[21][22][23] is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of ₹450 crore (US$73 million).[24][25] The first ever mission to Mars by any Arab country has been taken up by the United Arab Emirates. Called the Emirates Mars Mission, it is scheduled for launch in 2020. The unmanned exploratory probe has been named “Hope Probe” and will be sent to Mars to study its atmosphere in detail.[26]

The Russian space mission Fobos-Grunt, which launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit.[27] It was to begin exploration of Phobos and of the Martian orbital environment, and to study whether the moons of Mars, or at least Phobos, could be a “trans-shipment point” for spaceships traveling to Mars.[28]

The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been “flybys”, in which detailed observations are taken without the probe landing or entering orbit, as in the Pioneer and Voyager programs. The Galileo spacecraft is the only one to have orbited the planet. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is nearly impossible.

Reaching Jupiter from Earth requires a delta-v of 9.2 km/s,[29] which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit.[30] Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.[29]

Jupiter has 67 known moons, many of which remain relatively little studied.

Saturn has been explored only through unmanned spacecraft launched by NASA, including one mission (Cassini–Huygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, in 1981 by Voyager 2, and an orbital mission by the Cassini spacecraft, which entered orbit in 2004 and is expected to continue its mission well into 2017.

Saturn has at least 62 known moons, although the exact number is debatable since Saturn’s rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan. Titan holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. As a result of the deployment from the Cassini spacecraft of the Huygens probe and its successful landing on Titan, Titan also holds the distinction of being the only object in the outer Solar System that has been explored with a lander.

The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet’s unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and the moons of Uranus including all five of the previously known moons, while discovering an additional ten previously unknown moons.

Images of Uranus proved to have a very uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be completely unique and proved to be profoundly affected by the planet’s unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the Moons of Uranus, including evidence that Miranda had been unusually geologically active.

The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, the sole visit to the system as of 2014. The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.

Although the extremely uniform appearance of Uranus during Voyager 2’s visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclone storm system, the Great Dark Spot, comparable in relative size to Jupiter’s Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h.[31] Voyager 2 also examined Neptune’s ring and moon system. It discovered several complete rings and additional partial ring “arcs” around Neptune. In addition to examining Neptune’s three previously known moons, Voyager 2 also discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune’s largest moon, Triton, is a captured Kuiper belt object.[32]

The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit very difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn’s moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.[33]

Pluto continues to be of great interest, despite its reclassification as the lead and nearest member of a new and growing class of distant icy bodies of intermediate size (and also the first member of the important subclass, defined by orbit and known as “plutinos”). After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003.[34] New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and will continue for at least a month after the encounter.

Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery. Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo’s planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA’s Dawn spacecraft, launched in 2007.

Although many comets have been studied from Earth sometimes with centuries-worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet fly-by (21P/Giacobini-Zinner) before joining the Halley Armada studying the famous comet. The Deep Impact probe smashed into 9P/Tempel to learn more about its structure and composition and the Stardust mission returned samples of another comet’s tail. The Philae lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader Rosetta mission.

Hayabusa was an unmanned spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid’s shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid to collect samples. The spacecraft returned to Earth on 13 June 2010.

Deep space exploration is the exploration of distant regions of outer space, usually described as lying far from Earth, whether within or beyond the Solar System. It is the branch of astronomy, astronautics and space technology concerned with exploring those distant regions.[35] Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.

Some of the best candidates for future deep space engine technologies include anti-matter, nuclear power and beamed propulsion.[36] The latter, beamed propulsion, appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.[37]

In the 2000s, several plans for space exploration were announced; both government entities and the private sector have space exploration objectives. China has announced plans to have a 60-ton multi-module space station in orbit by 2020.

The NASA Authorization Act of 2010 provided a re-prioritized list of objectives for the American space program, as well as funding for the first priorities. NASA proposes to move forward with the development of the Space Launch System (SLS), which will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment, and science experiments to Earth’s orbit and destinations beyond. Additionally, the SLS will serve as a back up for commercial and international partner transportation services to the International Space Station. The SLS rocket will incorporate technological investments from the Space Shuttle program and the Constellation program in order to take advantage of proven hardware and reduce development and operations costs. The first developmental flight is targeted for the end of 2017.[38]

The idea of using highly automated systems for space missions has become a desirable goal for space agencies around the world. Such systems are believed to yield benefits such as lower cost, less human oversight, and the ability to explore deeper into space, which is usually restricted by long communication delays with human controllers.[39]

Autonomy is defined by 3 requirements:[39]

Autonomous technologies would be able to perform beyond predetermined actions. They would analyze all possible states and events happening around them and come up with a safe response. In addition, such technologies can reduce launch cost and ground involvement, and performance would increase as well. Autonomous systems would be able to respond quickly to an unforeseen event, which is especially valuable in deep space exploration, where communication back to Earth would take too long.[39]

NASA began its autonomous science experiment (ASE) on Earth Observing-1 (EO-1), NASA’s first satellite in the New Millennium Program Earth-observing series, launched on 21 November 2000. The autonomy software of ASE is capable of on-board science analysis, replanning, robust execution, and the later addition of model-based diagnostics. Images obtained by EO-1 are analyzed on-board and downlinked when a change or an interesting event occurs. The ASE software has successfully provided over 10,000 science images.[39]
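As a purely illustrative sketch of the “analyze on-board, downlink only what is interesting” pattern described above; the thresholds and function names below are hypothetical and are not taken from the actual ASE software.

```python
import numpy as np

CHANGE_FRACTION = 0.15   # hypothetical: fraction of pixels that must change
INTENSITY_STEP = 25.0    # hypothetical: per-pixel brightness change that counts as a change

def is_interesting(previous: np.ndarray, current: np.ndarray) -> bool:
    """Flag an image as interesting if enough pixels changed markedly since the last one."""
    changed = np.abs(current.astype(float) - previous.astype(float)) > INTENSITY_STEP
    return changed.mean() > CHANGE_FRACTION

def onboard_filter(image_stream):
    """Yield only the images worth downlinking, instead of every image captured."""
    previous = None
    for image in image_stream:
        if previous is not None and is_interesting(previous, image):
            yield image          # queue this image for downlink
        previous = image         # everything else stays on board
```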

The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of the NASA programs often showed ongoing economic benefits (such as NASA spin-offs), generating revenue many times the cost of the program.[40] It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars worth of minerals and metals. Such expeditions could generate a lot of revenue.[41] As well, it has been argued that space exploration programs help inspire youth to study science and engineering.[42]

Another claim is that space exploration is a necessity to mankind and that staying on Earth will lead to extinction. Some of the reasons are lack of natural resources, comets, nuclear war, and worldwide epidemic. Stephen Hawking, renowned British theoretical physicist, said that “I don’t think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I’m an optimist. We will reach out to the stars.”[43]

NASA has produced a series of public service announcement videos supporting the concept of space exploration.[44]

Overall, the public remains largely supportive of both manned and unmanned space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is “a good investment”, compared to 21% who did not.[45]

Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight.[46] He argued that humanity’s choice is essentially between expansion off Earth into space, versus cultural (and eventually biological) stagnation and death.

Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.

Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft, both when unpropelled and when under propulsion, is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.

Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.

Current examples of the commercial use of space include satellite navigation systems, satellite television and satellite radio. Space tourism is the recent phenomenon of space travel by individuals for the purpose of personal pleasure.

Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology.[47] It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from the Greek exo, “outside”).[48][49][50] The term “xenobiology” has been used as well, but this is technically incorrect because its terminology means “biology of the foreigners”.[51] Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth.[52] In the Solar System some of the prime locations for current or past astrobiology are on Enceladus, Europa, Mars, and Titan.

Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.

To date, the longest human occupation of space is the International Space Station, which has been in continuous use for 16 years, 26 days. Valeri Polyakov’s record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure.

Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a “stepping stone” to the other planets, especially Mars. At the end of 2006 NASA announced they were planning to build a permanent Moon base with continual presence by 2024.[54]

Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, the inability or difficulty in establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which had been, as of 2012, ratified by all spacefaring nations.[55]


Goa trance – Wikipedia

Posted: at 1:29 am

Goa trance is an electronic music style that originated during the late 1980s in Goa, India.[1][2] Goa trance often has funky, drone-like basslines, in contrast to the techno minimalism of 21st-century psytrance.[3]

Psychedelic trance music and culture (“psyculture”) has been explored as a culture of exodus, rooted in the seasonal dance-party scene that evolved in Goa, India, over the 1970s and 1980s, and shaped by a heterogeneous exile sensibility that informed Goa trance and psyculture from the 1990s and 2000s. Diverse transgressive and transcendent expatriate experiences shaped the music and aesthetics of Goa/psytrance, so Goa trance and its progeny resist any single formula and are internally diverse. This freak mosaic was seasoned by expatriates and bohemians in exile from many countries, experienced in cosmopolitan cities around the world, with the seasonal DJ-led trance dance culture of Goa absorbing innovations in EDM production, performance and aesthetics throughout the 1980s before the Goa sound and its subsequent festival culture emerged in the mid-1990s. Rooted in an experimental freak community host to the conscious realisation and ecstatic abandonment of the self, psyculture is heir to this diverse exile experience.[4]

The music has its roots in the popularity of Goa in the late 1960s and early 1970s as a hippie capital, and although musical developments were incorporating elements of industrial music and EBM (electronic body music) with the spiritual culture in India throughout the 1980s, the actual Goa trance style did not appear until the early 1990s.[1][5]

The music played was a blend of styles loosely defined as techno and various genres of computer music (e.g., high-energy disco without vocals, acid house, electro, industrial gothic, various styles of house, electronic rock hybrids). The music arrived on cassette tapes brought by fanatical traveler-collectors and DJs, and was shared (copied) tape to tape among Goa DJs; this was an underground scene, not driven by labels or the music industry.[citation needed]

The artists producing this ‘special Goa music’ had no idea that their music was being played on the beaches of Goa by “cyber hippies”.[citation needed] The first techno played in Goa was Kraftwerk, in the late 1970s, on the tape of a visiting DJ.[citation needed] At the time the music at parties was played by live bands, with tapes played in between sets. In the early 1980s, sampling synths and MIDI music appeared globally, and DJs became the preferred format in Goa, with two tape decks driving a party without a break, facilitating continuous music and continuous dancing.[citation needed] There had been resistance from the old-school acid heads who insisted that only acid rock should be played at parties, but they soon relented and converted to the revolutionary wave of technodelia that took hold in the 1980s.[citation needed]

Cassette tapes were used by DJs until the 1990s, when DAT tapes came into use. DJs playing in Goa during the 1980s included Fred Disko, Dr Bobby, Stephano, Paulino, Mackie, Babu, Laurent, Ray, Fred, Antaro, Lui, Rolf, Tilo, Pauli, Rudi, and Goa Gil.[6] The music was eclectic in style but centred on instrumental/dub, spacey versions of tracks that evoked mystical, cosmic, psychedelic, political and existential themes. Special mixes were made by DJs in Goa by editing together various versions of a track to make it longer, taking the stretch-mix concept to another level: trip music for journeying outdoors.[7]

Goa trance as a music-industry and collective party fashion tag did not gain global traction until 1994, when Paul Oakenfold began to champion the genre[8] via his own Perfecto label and in the media, most notably with the release of his 1994 Essential Mix, more commonly known as the Goa Mix.[9]

By 1990–91 Goa had become a hot destination for partying and was no longer under the radar: the scene grew bigger. Goa-style parties spread like a diaspora all over the world from 1993, and a multitude of labels in various countries (UK, Australia, Japan, Germany) dedicated themselves to promoting psychedelic electronic music that reflected the ethos of Goa parties, Goa music and Goa-specific artists, producers and DJs. Mark Maurice’s ‘Panjaea’s focal point’ parties brought it to London in 1992, and its programming at the London club Megatripolis (from 21 October 1993 onwards) gave a great boost to the small international scene that was then growing. The golden age and first wave of Goa trance is generally agreed to have run from 1994 to 1997.[citation needed]

The original goal of the music was to assist the dancers in experiencing a collective state of bodily transcendence, similar to that of ancient shamanic dancing rituals, through hypnotic, pulsing melodies and rhythms. As such, it has an energetic beat, often in a standard 4/4 dance rhythm. A typical track will generally build up to a much more energetic movement in the second half, then taper off fairly quickly toward the end. The tempo typically lies in the 130–150 BPM range, although some tracks may have a tempo as low as 110 or as high as 160 BPM. Generally 8–12 minutes long, Goa trance tracks tend to focus on steadily building energy throughout, using changes in percussion patterns and more intricate and layered synth parts as the music progresses in order to build a hypnotic and intense feel.

The kick drum is often a low, thick sound with prominent sub-bass frequencies. The music very often incorporates audio effects created through experimentation with synthesisers. A well-known sound that originated with Goa trance, and became much more prevalent through its successor genre, psytrance, is the organic “squelchy” sound (usually a sawtooth wave run through a resonant band-pass or high-pass filter).[citation needed]
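As a toy illustration of that description, the sketch below generates a naive sawtooth wave and runs it through a resonant band-pass biquad (standard audio-EQ “cookbook” coefficients); it is a rough approximation of the squelchy character, not a model of any particular synthesizer.

```python
import numpy as np
from scipy.signal import lfilter

SAMPLE_RATE = 44100

def sawtooth(freq_hz: float, seconds: float) -> np.ndarray:
    """Naive (non-bandlimited) sawtooth oscillator in the range [-1, 1)."""
    t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
    return 2.0 * (t * freq_hz - np.floor(0.5 + t * freq_hz))

def resonant_bandpass(x: np.ndarray, center_hz: float, q: float) -> np.ndarray:
    """Band-pass biquad; a higher Q gives a sharper, more resonant peak."""
    w0 = 2.0 * np.pi * center_hz / SAMPLE_RATE
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([alpha, 0.0, -alpha])                       # feed-forward coefficients
    a = np.array([1.0 + alpha, -2.0 * np.cos(w0), 1.0 - alpha])  # feedback coefficients
    return lfilter(b / a[0], a / a[0], x)

# A fixed filter setting for brevity; sweeping center_hz over time is what
# produces the characteristic moving, acid-style squelch.
tone = resonant_bandpass(sawtooth(110.0, 2.0), center_hz=800.0, q=8.0)
```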

Other music technology used in Goa trance includes popular analogue synthesizers such as the Roland TB-303, Roland Juno-60/106, Novation Bass-Station, Korg MS-10, and notably the Roland SH-101. Hardware samplers manufactured by Akai, Yamaha and Ensoniq were also popular for sample storage and manipulation.[citation needed]

A popular element of Goa trance is the use of samples, often from science fiction movies. Those samples mostly contain references to drugs, parapsychology, extraterrestrial life, existentialism, OBEs, dreams, science, time travel, spirituality and similar mysterious and unconventional topics.[citation needed]


The first parties were held at Bamboo Forest at South Anjuna beach, Disco Valley at Vagator beach and Arambol beach (c. 1991–1993).[10] Attempts were initially made to turn them into commercial events, which met with much resistance, as well as with the need to pay the local Goan police baksheesh; parties were generally staged around a bar, even though this might be only a temporary fixture in the forest or on the beach.[citation needed] The parties taking place around the New Year tend to be the most chaotic, with busloads of people coming in from places such as Mumbai, Delhi, Gujarat, Bangalore, Hyderabad, Chennai and the world over. Travelers and sadhus from all over India pass by to join in.[citation needed]

Megatripolis in London was a great influence in popularising the sound. Running from June 1993, though only really programming the music from October 1993 when it moved to the Heaven nightclub, it made all the national UK press and ran until October 1996.

In 1993 a party organization called Return to the Source also brought the sound to London, UK. Starting life at the Rocket in North London with a few hundred followers, the Source went on to a long residency at Brixton’s 2,000-capacity Fridge and to host several larger 6,000-capacity parties at Brixton Academy, its New Year’s Eve parties gaining a reputation for being very special. The club toured across the UK, Europe and Israel throughout the 1990s and went as far as two memorable parties on the slopes of Mount Fuji in Japan and at New York’s Liberty Science Center. By 2001 the partners Chris Deckker, Mark Allen, Phil Ross and Janice Duncan were worn out and had all but gone their separate ways. The last Return to the Source party was at Brixton Academy in 2002.[citation needed]

Goa parties have a definitive visual aspect – the use of “fluoro” (fluorescent paint) is common on clothing and on decorations such as tapestries. The graphics on these decorations are usually associated with topics such as aliens, Hinduism, other religious (especially eastern) images, mushrooms (and other psychedelic art), shamanism and technology. Shrines in front of the DJ stands featuring religious items are also common decorations.[citation needed]

For a short period in the mid-1990s, Goa trance enjoyed significant commercial success with support from DJs, who later went on to assist in developing a much more mainstream style of trance outside Goa. Only a few artists came close to being Goa trance “stars”, enjoying worldwide fame.[citation needed]

Several artists initially started producing Goa trance music and went on to produce psytrance instead.[citation needed]


Fiscal year – Wikipedia

Posted: November 23, 2016 at 10:04 pm

A fiscal year (or financial year, or sometimes budget year) is a period used for calculating annual (“yearly”) financial statements in businesses and other organizations all over the world. In many jurisdictions, regulatory laws regarding accounting and taxation require such reports once per twelve months, but do not require that the period reported on constitutes a calendar year (that is, 1 January to 31 December). Fiscal years vary between businesses and countries. The “fiscal year” may also refer to the year used for income tax reporting.

The ‘fiscal year end’ (FYE) is the date that marks the end of the fiscal year. Some companies choose to end their fiscal year such that it ends on the same day of the week each year, e.g. the day that is closest to a particular date (for example, the Friday closest to 31 December). Under such a system, some fiscal years will have 52 weeks and others 53 weeks. A major corporation that has adopted this approach is Cisco Systems.[1]
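As a rough sketch of that convention, the snippet below computes a fiscal year end as the weekday closest to a chosen anchor date, using the “Friday closest to 31 December” example from the text; the function name and defaults are illustrative assumptions, not any particular company’s rule.

```python
from datetime import date, timedelta

def fiscal_year_end(year: int, weekday: int = 4, month: int = 12, day: int = 31) -> date:
    """Return the given weekday (Mon=0 ... Fri=4) closest to the anchor date in that year."""
    anchor = date(year, month, day)
    days_forward = (weekday - anchor.weekday()) % 7
    forward = anchor + timedelta(days=days_forward)   # next occurrence of the weekday
    backward = forward - timedelta(days=7)            # previous occurrence of the weekday
    return forward if (forward - anchor) <= (anchor - backward) else backward

# Anchoring the year end to a fixed weekday is why most such fiscal years
# contain 52 weeks, with an occasional 53-week year to catch up:
for y in (2012, 2013, 2014):
    end = fiscal_year_end(y)
    weeks = (end - fiscal_year_end(y - 1)).days // 7
    print(y, end.isoformat(), f"{weeks} weeks")
```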

Nevertheless, the fiscal year is identical to the calendar year for about 65% of publicly traded companies in the United States and for a majority of large corporations in the UK[2] and elsewhere (with notable exceptions Australia, New Zealand and Japan).[3]

Many universities have a fiscal year which ends during the summer, both to align the fiscal year with the academic year (and, in some cases involving public universities, with the state government’s fiscal year), and because the school is normally less busy during the summer months. In the northern hemisphere this is July in one year to June in the next year. In the southern hemisphere this is January to December of a single calendar year.

Some media/communication based organizations use a broadcast calendar as the basis for their fiscal year.

The fiscal year is usually denoted by the year in which it ends, so United States of America federal government spending incurred on 14 November 2016 would belong to fiscal year 2017, operating on a fiscal calendar of October–September.[4]
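Following only what the paragraph above states (an October–September fiscal calendar named for the year in which it ends), a minimal sketch of that mapping might look like this; the function name is illustrative.

```python
from datetime import date

def us_federal_fiscal_year(d: date) -> int:
    """US federal fiscal year: runs 1 October to 30 September and is named for the year it ends in."""
    return d.year + 1 if d.month >= 10 else d.year

print(us_federal_fiscal_year(date(2016, 11, 14)))   # -> 2017, matching the example above
```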

The NFL uses the term “league year”, which in effect forms the league’s fiscal year. By rule, the fiscal year begins at 4 PM EDT on 10 March of each calendar year. All financial reports are based on each fiscal year. However, unlike most designations, the NFL denotes the fiscal year by the year in which it starts, not the year in which it ends.

In some jurisdictions, particularly those that permit tax consolidation, companies that are part of a group of businesses must use nearly the same fiscal year (differences of up to three months are permitted in some jurisdictions, such as the U.S. and Japan), with consolidating entries to adjust for transactions between units with different fiscal years, so the same resources will not be counted more than once or not at all.[citation needed]

In Afghanistan, the fiscal year was recently changed from 1 Hamal – 29 Hoot (21 March – 20 March) to 1 Jadi – 30 Qaus (21 December – 20 December). The fiscal year runs with the Afghan calendar, resulting in a difference in the Gregorian dates once in a four-year span.[citation needed]

In Australia, the fiscal year or, more commonly, “financial year”, starts on 1 July and ends on 30 June. For personal income tax after the financial year ends, individuals have until 31 October to lodge their return (unless they use a tax agent).[5] This fiscal year definition is used both for official purposes and by the overwhelming majority of private enterprises, but this is not legally mandated.[6] A company may, for example, opt for a financial year that always ends at the end of a week (and therefore is not exactly one calendar year in length), or opt for each financial year to end on a different date to match the reporting cycles of its foreign parent.

In Austria, the fiscal year is the calendar year, 1 January to 31 December.

In Bangladesh, the fiscal year starts on 1 July and ends on 30 June.

In Belarus, the fiscal year starts on 1 January and ends on 31 December.

In Brazil, the fiscal year starts on 1 January and ends on 31 December. Citizens pay income tax (when needed) starting in May, but the form filling goes from March to April. All tax declarations must be done on-line using government written free software.[citation needed]

In Bulgaria, the fiscal year matches the calendar year both for personal income tax [7] and for corporate taxes.[8]

In Canada,[9] the government’s financial year runs from 1 April to 31 March (Example 1 April 2016 to 31 March 2017 for the current financial year).

For individuals in Canada, the fiscal year runs from 1 January to 31 December.

The fiscal year for all entities starts on 1 January and ends 31 December, consistent with the calendar year, to match the tax year, statutory year, and planning year.[citation needed]

In Colombia, the fiscal year starts 1 January ending on 31 December. Yearly taxes are due in the middle of March/April for corporations while citizens pay income tax (when needed) starting in August, ending in September, according to the last 2 digits of the national ID.[citation needed]

The fiscal year in Costa Rica spans from 1 October until 30 September. Taxpayers are required to pay the tributes before 15 December of each year.[citation needed]

In the Arab Republic of Egypt, the fiscal year starts on 1 July and concludes on 30 June.[citation needed]

The fiscal year matches the calendar year, and has since at least 1911.[10]

In the Hellenic Republic, the fiscal year starts on 1 January and concludes on 31 December.

In Hong Kong,[11] the government’s financial year runs from 1 April to 31 March (Example 1 April 2016 to 31 March 2017 for the current financial year).

In India, the government’s financial year runs from 1 April to 31 March midnight. Example: 1 April 2016 to 31 March 2017 for the financial year 20162017. It is also abbreviated as FY17.[12][13]

Companies following the Indian Depositary Receipt (IDR) regime are given the freedom to choose their financial year. For example, Standard Chartered’s IDR follows the UK calendar despite being listed in India. Companies following the Indian fiscal year get to know their economic health on 31 March of every Indian financial or fiscal year.

In July 2016 the newly formed NITI Aayog discussed, in a meeting organised by PM Modi, the possibility that the next fiscal year may run from 1 January to 31 December after the end of the current five-year plan.[14]

In Iran, the fiscal year usually starts on 21 March (1st of Farvardin) and concludes on the following year’s 20 March (29th of Esfand), following the Solar Hijri calendar.[15]

Ireland used the year ending 5 April until 2001 when it was changed, at the request of Finance Minister Charlie McCreevy, to match the calendar year (the 2001 tax year was nine months, from April to December)[citation needed]

Since 2002, it is aligned with the calendar year: 1 January to 31 December.[16]

In Israel the fiscal year is from 1 January until 31 December.[17]

In Italy the fiscal year was from 1 July to 30 June until 1965; now it is from 1 January until 31 December.[citation needed]

In Japan,[18] the government’s financial year runs from 1 April to 31 March. The fiscal year is represented by the calendar year in which the period begins, followed by the word nendo (年度); for example, the fiscal year from 1 April 2016 to 31 March 2017 is called the 2016 nendo.

Japan’s income tax year runs from 1 January to 31 December, but corporate tax is charged according to the corporation’s own annual period.[citation needed]

In Macau, the government’s financial year runs from 1 January to 31 December (Example 1 January 2016 to 31 December 2016 for the current financial year).

In Mexico the fiscal year starts on January 1 and ends on December 31.

In Myanmar,[19] the fiscal year goes from 1 April to 31 March.

The fiscal year in Nepal starts on Shrawan 1 (4th month of the Bikram calendar) and ends on Ashad 31 (3rd month of the Bikram calendar). Shrawan 1 roughly falls in mid-July.[20]

The New Zealand Government’s fiscal[21] and financial reporting[22] year begins on 1 July and concludes on 30 June[23] of the following year and applies to the budget. The company and personal financial year[24] begins on 1 April and finishes on 31 March and applies to company and personal income tax.

The Pakistan Government’s fiscal year starts on 1 July of the previous calendar year and concludes on 30 June. Private companies are free to observe their own accounting year, which may not be the same as Government of Pakistan’s fiscal year.[citation needed]

In Portugal the fiscal year starts on January 1 and ends on December 31.

The fiscal year matches the calendar year, and has since at least 1911.[10]

The fiscal year for the calculation of personal income taxes runs from 1 January to 31 December.[citation needed]

The fiscal year for the Government of Singapore and many government-linked corporations runs from 1 April to 31 March.[citation needed]

Corporations and organisations are permitted to select any date to mark the end of each fiscal year, as long as this date remains constant.[citation needed]

In South Africa the fiscal year for the Government of South Africa starts on 1 April and ends 31 March.[citation needed]

The year of assessment for individuals covers twelve months, beginning on 1 March and ending on the final day of February the following year. The Act also provides for certain classes of taxpayers to have a year of assessment ending on a day other than the last day of February. Companies are permitted to have a tax year ending on a date that coincides with their financial year. Many older companies still use a tax year that runs from 1 July to 30 June, inherited from the British system. A common practice for newer companies is to run their tax year from 1 March to the final day of February following, to synchronize with the tax year for individuals.[citation needed]

In South Korea (Republic of Korea) the fiscal year starts on 1 January and ends 31 December.[citation needed]

In Spain the fiscal year starts on 1 January and ends 31 December.[25]

The fiscal year for individuals runs from 1 January to 31 December.[26]

The fiscal year for an organisation is typically one of the following (cf. Swedish Wikipedia):

However, all calendar months are allowed. If an organisation wishes to change into a non-calendar year, permission from the Tax Authority is required.[27][28]

Under the Income Tax Act of Taiwan, the fiscal year commences on 1 January and ends on 31 December of each calendar year. However, an enterprise may elect to adopt a special fiscal year at the time it is established and can request approval from the tax authorities to change its fiscal year.[29]

The Thai government’s fiscal year (FY) begins on 1 October and ends on 30 September of the following year.[30] FY2015 dates from 1 October 2014 to 30 September 2015. The Thai government’s year for individual income tax is the calendar year (1 January to 31 December).

In Ukraine, the fiscal year matches the calendar year, which starts on 1 January and ends 31 December.

In the United Arab Emirates, the fiscal year starts on 1 January and ends 31 December.[citation needed]

In the United Kingdom,[31] the financial year runs from 1 April to 31 March for the purposes of corporation tax[32] and government financial statements.[33] For the self-employed and others who pay personal tax the fiscal year starts on 6 April and ends on 5 April of the next calendar year.[34]

Although United Kingdom corporation tax is charged by reference to the government’s financial year, companies can adopt any year as their accounting year: if there is a change in tax rate, the taxable profit is apportioned to financial years on a time basis.[citation needed]

A number of major corporations that were once government-owned, such as BT Group and the National Grid, continue to use the government’s financial year, which ends on the last day of March, as they have found no reason to change since privatisation.[citation needed]

The 5 April year end for personal tax and benefits reflects the old ecclesiastical calendar, with New Year falling on 25 March (Lady Day), the difference being accounted for by the eleven days “missed out” when Great Britain converted from the Julian Calendar to the Gregorian Calendar in 1752 (the British tax authorities and landlords were unwilling to lose 11 days of tax and rent revenue, so under provision 6 (Times of Payment of Rents, Annuities, &c.) of the Calendar (New Style) Act 1750, the 1752–53 tax year was extended by 11 days). From 1753 until 1799, the tax year in Great Britain began on 5 April, which was the “old style” new year of 25 March. A 12th skipped Gregorian leap day in 1800 changed its start to 6 April. It was not changed when a 13th Julian leap day was skipped in 1900, so the start of the personal tax year in the United Kingdom is still 6 April.[35][36][37]

The United States federal government’s fiscal year is the 12-month period ending on 30 September of that year, having begun on 1 October of the previous calendar year. In particular, the identification of a fiscal year is the calendar year in which it ends; thus, the current fiscal year is 2017, often written as “FY2017” or “FY17”, which began on 1 October 2016 and which will end on 30 September 2017.

Prior to 1976, the fiscal year began on 1 July and ended on 30 June. The Congressional Budget and Impoundment Control Act of 1974 made the change to allow Congress more time to arrive at a budget each year, and provided for what is known as the “transitional quarter” from 1 July 1976 to 30 September 1976. An earlier shift in the federal government’s fiscal year was made in 1843, shifting the fiscal year from a calendar year to one starting on 1 July.[38]

For example, the United States government fiscal year for 2017 (FY2017) is:
1st quarter: 1 October 2016 to 31 December 2016
2nd quarter: 1 January 2017 to 31 March 2017
3rd quarter: 1 April 2017 to 30 June 2017
4th quarter: 1 July 2017 to 30 September 2017

State governments set their own fiscal year. It may or may not align with the federal calendar. For example, in California, the state’s fiscal year runs from July 1 to June 30 each year.[39]

The tax year for a business is governed by the fiscal year it chooses. A business may choose any consistent fiscal year that it wants; however, for seasonal businesses such as farming and retail, a good accounting practice is to end the fiscal year shortly after the highest revenue time of year. Consequently, most large agriculture companies end their fiscal years after the harvest season, and most retailers end their fiscal years shortly after the Christmas shopping season.

The fiscal year for individuals and entities to report and pay income taxes is often known as the taxpayer’s tax year or taxable year. Taxpayers in many jurisdictions may choose their tax year.[40] In federal countries (e.g., United States, Canada, Switzerland), state/provincial/cantonal tax years must be the same as the federal year. Nearly all jurisdictions require that the tax year be 12 months or 52/53 weeks.[41] However, short years are permitted as the first year or when changing tax years.[42]

Most countries require all individuals to pay income tax based on the calendar year. Significant exceptions include the United Kingdom, where, as noted above, the personal tax year runs from 6 April to 5 April.

Many jurisdictions require that the tax year conform to the taxpayer’s fiscal year for financial reporting. The United States is a notable exception: taxpayers may choose any tax year, but must keep books and records for such year.[41]


Nano-Bots, Mind Control & Trans-Humanism – The Future of …

Posted: November 21, 2016 at 11:01 am

Christina Sarich, Staff Writer, Waking Times

A human being is a part of the whole, called by us Universe, a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest, a kind of optical delusion of his consciousness. This delusion is a kind of prison, restricting us to our personal desires and to affection for a few persons nearest to us. Our task must be to free ourselves from this prison by widening our circle of compassion to embrace all living creatures and the whole of nature in its beauty. ~ Albert Einstein

You cannot discuss the nano-technology being used in today's world without understanding something about the transhumanist movement. Within this tight group of technological autocrats, no pun intended, human beings, as created by God, or evolution, take your pick, can be vastly improved upon. We are meant to be immortal. So while I applaud the technology that allows a veteran to replace a lost limb, I certainly don't plan on cutting off my right hand to replace it with a cyber-hand. Even the human brain should be reverse-engineered by 2030, according to some. It may sound fantastical, but this is the world that transhumanists imagine. It is at the root of GMO crops and eugenics, and it eerily mirrors the plot of the famous Matrix movies developed by the Wachowskis.

Never send a human to do a machine's job. ~ Agent Smith, The Matrix

In an article published in Discover magazine and written by Kyle Munkittrick, seven conditions for becoming trans-human are aptly outlined. This is an immense subject that could not possibly be covered in a single article; the technology behind it develops rapidly and is also heavily guarded. Below is my take on the movement, as succinctly put as possible:

2. We will begin to treat aging as a disease instead of a normal function of the cycle of life: i.e., a seed grows into a plant, a plant prospers and grows, a plant dies, it becomes fertilizer for the next generation of plants.

3. Politicians will remove rights from humans increasingly as they become more like machines. Our sentience is being stolen from us already, and once it is suppressed sufficiently (though true awareness is Universal and cannot be destroyed) we will be easier to control, like remote-control robots.

4. Neuro-implants and other prosthetics will replace the current biology as a normal and accepted paradigm. Genetic engineering is already altering the human genome drastically. Currently they are testing out their mad science on animals and plants, but humans are next.

5. Artificial intelligence will replace human cognition, integrated into our nano-bot matrix within the biological system. This will in turn develop into an AR (augmented reality) which can be controlled at the will of persons deciding what is acceptable action and what is not for a trans-human to engage in.

6. Our average age will exceed 120, but we will take with us the same perceptions of the world that have created the current mess we are in. Without an abrupt halt to these maniacal technological plans, the subtlety of human personality will be destroyed. While the ego is inflated to serve an elite class, the lesser-cultivated ideals of love, harmony, balance within nature, etc. will be destroyed. So who cares, really, if we live longer?

7. Reproduction will only take place through assisted reproductive technologies. Natural sex and birthing will become outdated, historical phenomena.

8. Legal structures will be put into place to support this: one's genetic make-up, neurological composition, prosthetic augmentation, and other cybernetic modifications will be limited only by technology and one's own discretion.

9. Our rights as humans will be completely replaced with the rights of personhood, followed by an arbitrary change in the definition of a person, who can then be treated as a cyber-slave.

So is this something you want to participate in, enthusiastically? To become more than human? It doesn't matter if you aren't on board. You are already being transformed into a cyber-human without your agreement. It starts with forced vaccines through Bill Gates and the US military's technology. Nano-patches are already delivering many vaccines.

The European Coalition Against Covert Harassment estimates that more than 80% of the population has already been infected with nano-technology, via chemtrails, vaccines, and dental procedures, to control our minds and behavior. The ECACH has already put forth a document to the EU Parliament requesting the cessation of:

. . . weapons systems operating on new physics principles used to torture or inflict other cruel, inhuman or degrading treatment including electronic weapons, electromagnetic weapons, magnetic weapons, directed energy weapons, geophysical weapons, wave-energy weapons, frequency weapons, genetic weapons, scalar weapons, psychotronic weapons, chemtrail aerosol weapons, implant weapons, nanotechnology weapons, high frequency active aural high altitude ultra low frequency weapons, [and] information technology weapons.

Collectively, these are called new physics torture weapons. So it seems the war for your consciousness really is playing out on the world stage. Apparently, nano-bots in aerosol chemtrails can identify their host via a chemical signature.

Under development since 1995, the military's goal is to install microprocessors incorporating gigaflops computer capability into smart particles the size of a single molecule. One might ask why they are doing this. The answer may be as simple as: because they can. It's all about control. The power to control everything.

Apparently these nano-particles are made of mono-atomic gold, and they are just an augmentation of the military's drone paradigm. This is no joke. As the air is filled with nano-particles of smart fibers, something called BEAGLE (Application Programming Interface and High-Performance Computing Library for Statistical Phylogenetics) can supposedly compute every move you make and every single bodily function, including your heart rate, breath rate, hormonal activity, and so forth. For what purpose? Well, there is a Gamer video clip which hides the truth in plain sight. Reference at about 1 minute and 14 seconds.

You can also see a video that shows nano-bots, though not in their smallest form, here. Our entire DNA has been scheduled for a nano-bot overhaul:

Recently, scientists Anirban Bandyopadhyay and Somobrata Acharya from the National Institute for Materials Science in Tsukuba, Japan, have built the first ultra-tiny, ultra-powerful brains for nanobots. The brains, just two billionths of a meter across, act as tiny computer transistors. But instead of carrying out just one operation at a time, like a normal transistor, the new devices can perform 16 operations at once. In other words, the devices use parallel processing, like the human brain, rather than serial processing, like a normal computer. The researchers call this ability one-to-many communication.

The tiny machines are composed of 17 duroquinone molecules that act as logic gates. The researchers arranged 16 of these molecules in a wheel, and placed the last molecule in the middle, which acts as the control center. The entire wheel was constructed on a gold substrate.
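
As a purely conceptual toy model of the "one-to-many communication" described here, and not of the actual duroquinone chemistry, the sketch below contrasts a broadcast update of 16 elements in a single step with a serial device that updates one element per step. All names and the trivial flip operation are invented for illustration.

```python
RING_SIZE = 16  # 16 logic elements arranged around one control element

def broadcast_update(ring_states, instruction):
    """Parallel ('one-to-many') update: every element changes at once."""
    return [instruction(s) for s in ring_states]

def serial_update(ring_states, instruction):
    """Serial update: yield the state after each single-element step."""
    states = list(ring_states)
    for i in range(len(states)):
        states[i] = instruction(states[i])
        yield list(states)

flip = lambda bit: 1 - bit  # a trivial one-bit "operation"
ring = [0] * RING_SIZE

print(broadcast_update(ring, flip))               # 16 operations in one step
print(sum(1 for _ in serial_update(ring, flip)))  # 16 steps for the same result
```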

Just think Minority Report: utilizing nano-tech BioAPI, where your coherent thoughts are read and sampled by a supercomputer in real time, you could be controlled before you even act on your desire to overthrow a government or host a sit-in outside Monsanto's annual shareholder meeting.

No one has to take this lying down. The soft-kill and technological stealth which would allow the powers that be to take over our very humanity can be eradicated. You are in control of your consciousness, not the government, no matter how hard they try to manipulate you with their advanced technology. There are ways to become super-human without losing our humanity. In recent reports from Science Daily, it was proven that human DNA can be changed with meditation. Gene expression is totally altered with just a few hours' practice. Why is this not a more accepted paradigm in our world? Likely because reports like this are swept aside while you stay locked in Fukushima and false flag fear.

Meditate, and you will see past all illusion, including the mind-prisons they would keep you within. Nano-bots may be the tiny, evil soldiers of a eugenicist class, but our collective will is stronger:

If by this superhuman concentration one succeeded in converting or resolving the two cosmoses with all their complexities into sheer ideas, he would then reach the causal world and stand on the borderline of fusion between mind and matter. There one perceives all created things (solids, liquids, gases, electricity, energy, all beings, gods, men, animals, plants, bacteria) as forms of consciousness, just as a man can close his eyes and realize that he exists, even though his body is invisible to his physical eyes and is present only as an idea. ~ Paramahansa Yogananda, Ch. 43, The Resurrection of Sri Yukteswar

Christina Sarich is a musician, yogi, humanitarian and freelance writer who channels many hours of studying Lao Tzu, Paramahansa Yogananda, Rob Brezsny, Miles Davis, and Tom Robbins into interesting tidbits to help you Wake up Your Sleepy Little Head, and See the Big Picture. Her blog is Yoga for the New World. Her latest book is Pharma Sutra: Healing the Body And Mind Through the Art of Yoga.

This article is offered under a Creative Commons license. It's okay to republish it anywhere as long as the attribution bio is included and all links remain intact.



Unit 731 – Wikipedia

Posted: November 14, 2016 at 11:42 am

Unit 731 (Japanese: 731部隊, Hepburn: Nana-san-ichi Butai) was a covert biological and chemical warfare research and development unit of the Imperial Japanese Army that undertook lethal human experimentation during the Second Sino-Japanese War (1937–1945) of World War II. It was responsible for some of the most notorious war crimes carried out by Japan. Unit 731 was based at the Pingfang district of Harbin, the largest city in the Japanese puppet state of Manchukuo (now Northeast China).

It was officially known as the Epidemic Prevention and Water Purification Department of the Kwantung Army (関東軍防疫給水部本部, Kantōgun Bōeki Kyūsuibu Honbu). Originally set up under the Kempeitai military police of the Empire of Japan, Unit 731 was taken over and commanded until the end of the war by General Shiro Ishii, an officer in the Kwantung Army. The facility itself was built between 1934 and 1939 and officially adopted the name “Unit 731” in 1941.

Some historians estimate that up to 250,000[1] men, women, and children[2][3] (of whom around 600 every year were provided by the Kempeitai[4]) were subjected to experimentation conducted by Unit 731 at the camp based in Pingfang alone, a figure that does not include victims from other medical experimentation sites, such as Unit 100.[5]

Unit 731 veterans of Japan attest that most of the victims they experimented on were Chinese,[6] while a small percentage were Russian, Mongolian, Korean, and Allied POWs.[7] Almost 70% of the victims who died in the Pingfang camp were Chinese, including both civilians and military personnel.[8] Close to 30% of the victims were Russian.[9] Some others were South East Asians and Pacific Islanders, at the time colonies of the Empire of Japan, and a small number of Allied prisoners of war.[10] The unit received generous support from the Japanese government up to the end of the war in 1945.

Instead of being tried for war crimes, the researchers involved in Unit 731 were secretly given immunity by the U.S. in exchange for the data they gathered through human experimentation.[11] Others that Soviet forces managed to arrest first were tried at the Khabarovsk War Crime Trials in 1949. Americans did not try the researchers so that the information and experience gained in bio-weapons could be co-opted into the U.S. biological warfare program, as had happened with Nazi researchers in Operation Paperclip.[12] On 6 May 1947, Douglas MacArthur, as Supreme Commander of the Allied Forces, wrote to Washington that “additional data, possibly some statements from Ishii probably can be obtained by informing Japanese involved that information will be retained in intelligence channels and will not be employed as ‘War Crimes’ evidence.”[11] Victim accounts were then largely ignored or dismissed in the West as communist propaganda.[13]

A special project code-named Maruta used human beings for experiments. Test subjects were gathered from the surrounding population and were sometimes referred to euphemistically as “logs” (丸太, maruta), used in such contexts as “How many logs fell?”. This term originated as a joke on the part of the staff because the official cover story for the facility given to the local authorities was that it was a lumber mill. However, in an account by a man who worked as a junior uniformed civilian employee of the Japanese Army in Unit 731, the project was internally called “Holzklotz”, which is the German word for log.[14]

The test subjects were selected to give a wide cross-section of the population and included common criminals, captured bandits and anti-Japanese partisans, political prisoners, and also people rounded up by the Kempeitai military police for alleged “suspicious activities”. They included infants, the elderly, and pregnant women.

Thousands of men, women and children interned at prisoner of war camps were subjected to vivisection, often without anesthesia and usually ending with the death of the victim.[15] Vivisections were performed on prisoners after infecting them with various diseases. Researchers performed invasive surgery on prisoners, removing organs to study the effects of disease on the human body. These were conducted while the patients were alive because it was feared that the decomposition process would affect the results.[16] The infected and vivisected prisoners included men, women, children, and infants.[17]

Prisoners had limbs amputated in order to study blood loss. Those limbs that were removed were sometimes re-attached to the opposite sides of the body. Some prisoners’ limbs were frozen and amputated, while others had limbs frozen, then thawed to study the effects of the resultant untreated gangrene and rotting.

Some prisoners had their stomachs surgically removed and the esophagus reattached to the intestines. Parts of the brain, lungs, liver, etc., were removed from some prisoners.[15]

Japanese army surgeon Ken Yuasa suggests that the practice of vivisection on human subjects (mostly Chinese communists) was widespread even outside Unit 731,[6] estimating that at least 1,000 Japanese personnel were involved in the practice in mainland China.[18]

Prisoners were injected with inoculations of disease, disguised as vaccinations, to study their effects. To study the effects of untreated venereal diseases, male and female prisoners were deliberately infected with syphilis and gonorrhea, then studied. Prisoners were also repeatedly subjected to rape by guards.[19]

Plague fleas, infected clothing, and infected supplies encased in bombs were dropped on various targets. The resulting cholera, anthrax, and plague were estimated to have killed around, and possibly more than, 400,000 Chinese civilians.[20] Tularemia was tested on Chinese civilians.[21]

Unit 731 and its affiliated units (Unit 1644 and Unit 100 among others) were involved in research, development, and experimental deployment of epidemic-creating biowarfare weapons in assaults against the Chinese populace (both civilian and military) throughout World War II. Plague-infested fleas, bred in the laboratories of Unit 731 and Unit 1644, were spread by low-flying airplanes upon Chinese cities, including coastal Ningbo in 1940 and Changde, Hunan Province, in 1941. This military aerial spraying killed thousands of people with bubonic plague epidemics.[22]

It is possible that Unit 731’s methods and objectives were also followed in Indonesia, in the case of a failed experiment designed to validate a conjured tetanus toxoid vaccine.[23]

Physiologist Yoshimura Hisato conducted experiments by taking captives outside, dipping various appendages into water, and allowing the limb to freeze. Once frozen, which testimony from a Japanese officer said “was determined after the ‘frozen arms, when struck with a short stick, emitted a sound resembling that which a board gives when it is struck'”,[24] ice was chipped away and the area doused in water. The effects of different water temperatures were tested by bludgeoning the victim to determine if any areas were still frozen. Variations of these tests in more gruesome forms were performed.

Doctors orchestrated forced sex acts between infected and non-infected prisoners to transmit the disease, as the testimony of a prison guard on the subject of devising a method for transmission of syphilis between patients shows:

“Infection of venereal disease by injection was abandoned, and the researchers started forcing the prisoners into sexual acts with each other. Four or five unit members, dressed in white laboratory clothing completely covering the body with only eyes and mouth visible, handled the tests. A male and female, one infected with syphilis, would be brought together in a cell and forced into sex with each other. It was made clear that anyone resisting would be shot.”[25]

After victims were infected, they were vivisected at different stages of infection, so that internal and external organs could be observed as the disease progressed. Testimony from multiple guards blames the female victims as being hosts of the diseases, even as they were forcibly infected. Genitals of female prisoners that were infected with syphilis were called “jam filled buns” by guards.[26]

Some children grew up inside the walls of Unit 731, infected with syphilis. A Youth Corps member deployed to train at Unit 731 recalled viewing a batch of subjects that would undergo syphilis testing: “one was a Chinese woman holding an infant, one was a White Russian woman with a daughter of four or five years of age, and the last was a White Russian woman with a boy of about six or seven.”[26] The children of these women were tested in ways similar to their parents, with specific emphasis on determining how longer infection periods affected the effectiveness of treatments.

Female prisoners were forced to become pregnant for use in experiments. The hypothetical possibility of vertical transmission (from mother to fetus or child) of diseases, particularly syphilis, was the stated reason for the torture. Fetal survival and damage to the mother’s reproductive organs were objects of interest. Though “a large number of babies were born in captivity”, there has been no account of any survivors of Unit 731, children included. It is suspected that the children of female prisoners were killed or the pregnancies were terminated.[26]

While male prisoners were often used in single studies, so that the results of the experimentation on them would not be clouded by other variables, women were sometimes used in bacteriological or physiological experiments and sex experiments, and as the victims of sex crimes. The testimony of a unit member who served as a guard graphically demonstrates this reality:

“One of the former researchers I located told me that one day he had a human experiment scheduled, but there was still time to kill. So he and another unit member took the keys to the cells and opened one that housed a Chinese woman. One of the unit members raped her; the other member took the keys and opened another cell. There was a Chinese woman in there who had been used in a frostbite experiment. She had several fingers missing and her bones were black, with gangrene set in. He was about to rape her anyway, then he saw that her sex organ was festering, with pus oozing to the surface. He gave up the idea, left, and locked the door, then later went on to his experimental work.”[26]

Human targets were used to test grenades positioned at various distances and in different positions. Flame throwers were tested on humans. Humans were tied to stakes and used as targets to test germ-releasing bombs, chemical weapons, and explosive bombs.[27][28]

In other tests, subjects were deprived of food and water to determine the length of time until death; placed into high-pressure chambers until death; experimented upon to determine the relationship between temperature, burns, and human survival; placed into centrifuges and spun until death; injected with animal blood; exposed to lethal doses of x-rays; subjected to various chemical weapons inside gas chambers; injected with sea water; and burned or buried alive.[29]

Japanese researchers performed tests on prisoners with bubonic plague, cholera, smallpox, botulism, and other diseases.[30] This research led to the development of the defoliation bacilli bomb and the flea bomb used to spread bubonic plague.[31] Some of these bombs were designed with porcelain shells, an idea proposed by Ishii in 1938.

These bombs enabled Japanese soldiers to launch biological attacks, infecting agriculture, reservoirs, wells, and other areas with anthrax, plague-carrier fleas, typhoid, dysentery, cholera, and other deadly pathogens. During biological bomb experiments, researchers dressed in protective suits would examine the dying victims. Infected food supplies and clothing were dropped by airplane into areas of China not occupied by Japanese forces. In addition, poisoned food and candies were given out to unsuspecting victims, and the results examined.

In 2002, Changde, China, site of the flea spraying attack, held an “International Symposium on the Crimes of Bacteriological Warfare” which estimated that at least 580,000 people died as a result of the attack.[32] The historian Sheldon Harris claims that 200,000 died.[33] In addition to Chinese casualties, 1,700 Japanese in Chekiang were killed by their own biological weapons while attempting to unleash the biological agent, which indicates serious issues with distribution.[2]

During the final months of World War II, Japan planned to use plague as a biological weapon against San Diego, California. The plan was scheduled to launch on September 22, 1945, but Japan surrendered five weeks earlier.[34][35][36][37]

Despite the facility’s location in Northern China, great pains were taken by the organizers of the facility to ensure that its inmates represented a wide array of ethnicities. Most of the prisoners of war were American.[38]

Robert Peaty (19031988), a British Major in the Royal Army Ordnance Corps, was the senior ranking allied officer. During this time, he kept a secret diary. A copy of his entire diary exists in the NARA archives.[39] An extract of the diary is available at the UK National Archives at Kew.[40] He was interviewed by the Imperial War Museum in 1981, and the audio recording tape reels are in the IWM’s archives.[41]

Unit 731 was divided into eight divisions.

The Unit 731 complex covered six square kilometres (2.3 square miles) and consisted of more than 150 buildings. The design of the facilities made them hard to destroy by bombing. The complex contained various factories. It had around 4,500 containers to be used to raise fleas, six cauldrons to produce various chemicals, and around 1,800 containers to produce biological agents. Approximately 30 kilograms (66 pounds) of bubonic plague bacteria could be produced in a few days.

Some of Unit 731’s satellite facilities are in use by various Chinese industrial concerns. A portion has been preserved and is open to visitors as a War Crimes Museum.

A medical school and research facility belonging to Unit 731 operated in the Shinjuku District of Tokyo during World War II. In 2006, Toyo Ishii, a nurse who worked at the school during the war, revealed that she had helped bury bodies and pieces of bodies on the school’s grounds shortly after Japan’s surrender in 1945. In response, in February 2011 the Ministry of Health began to excavate the site.[43]

China requested DNA samples from any human remains discovered at the site. The Japanese government, which has never officially acknowledged the atrocities committed by Unit 731, rejected the request.[44]

The related Unit 8604 was operated by the Japanese Southern China Area Army and stationed at Guangzhou (Canton). This installation conducted human experimentation in food and water deprivation as well as water-borne typhus. According to postwar testimony, this facility served as the main rat breeding farm for the medical units to provide them with bubonic plague vectors for experiments.[45]

Unit 731 was part of the Epidemic Prevention and Water Purification Department which dealt with contagious disease and water supply generally.

Operations and experiments continued until the end of the war. Ishii had wanted to use biological weapons in the Pacific War since May 1944, but his attempts were repeatedly snubbed.

With the coming of the Red Army in August 1945, the unit had to abandon their work in haste. The members and their families fled to Japan.

Ishii ordered every member of the group “to take the secret to the grave”, threatening to find them if they failed, and prohibiting any of them from going into public work back in Japan. Potassium cyanide vials were issued for use in the event that the remaining personnel were captured.

Skeleton crews of Ishii’s Japanese troops blew up the compound in the final days of the war to destroy evidence of their activities, but most of the buildings were so well constructed that they survived somewhat intact.

Among the individuals in Japan after its 1945 surrender was Lieutenant Colonel Murray Sanders, who arrived in Yokohama via the American ship Sturgess in September 1945. Sanders was a highly regarded microbiologist and a member of America’s military center for biological weapons. Sanders’ duty was to investigate Japanese biological warfare activity. At the time of his arrival in Japan he had no knowledge of what Unit 731 was.[26] Until Sanders finally threatened the Japanese with bringing communism into the picture, little information about biological warfare was being shared with the Americans. The Japanese wanted to avoid the Soviet legal system, so the morning after the threat Sanders received a manuscript describing Japan’s involvement in biological warfare.[46] Sanders took this information to General Douglas MacArthur, who was the Supreme Commander of the Allied Powers responsible for rebuilding Japan during the Allied occupation. MacArthur struck a deal with Japanese informants:[47] he secretly granted immunity to the physicians of Unit 731, including their leader, in exchange for providing America, but not the other wartime Allies, with their research on biological warfare and data from human experimentation.[11] American occupation authorities monitored the activities of former unit members, including reading and censoring their mail.[48] The U.S. believed that the research data was valuable and did not want other nations, particularly the Soviet Union, to acquire data on biological weapons.[49]

The Tokyo War Crimes Tribunal heard only one reference to Japanese experiments with “poisonous serums” on Chinese civilians. This took place in August 1946 and was instigated by David Sutton, assistant to the Chinese prosecutor. The Japanese defense counsel argued that the claim was vague and uncorroborated and it was dismissed by the tribunal president, Sir William Webb, for lack of evidence. The subject was not pursued further by Sutton, who was probably unaware of Unit 731’s activities. His reference to it at the trial is believed to have been accidental.

Although publicly silent on the issue at the Tokyo Trials, the Soviet Union pursued the case and prosecuted twelve top military leaders and scientists from Unit 731 and its affiliated biological-warfare prisons, Unit 1644 in Nanjing and Unit 100 in Changchun, in the Khabarovsk War Crime Trials. Included among those prosecuted for war crimes, including germ warfare, was General Otozō Yamada, the commander-in-chief of the million-man Kwantung Army occupying Manchuria.

The trial of those captured Japanese perpetrators was held in Khabarovsk in December 1949. A lengthy partial transcript of the trial proceedings was published in different languages the following year by a Moscow foreign languages press, including an English language edition.[50] The lead prosecuting attorney at the Khabarovsk trial was Lev Smirnov, who had been one of the top Soviet prosecutors at the Nuremberg Trials. The Japanese doctors and army commanders who had perpetrated the Unit 731 experiments received sentences from the Khabarovsk court ranging from two to 25 years in a Siberian labor camp. The U.S. refused to acknowledge the trials, branding them communist propaganda.[51]

After World War II, the Soviet Union built a biological weapons facility in Sverdlovsk using documentation captured from Unit 731 in Manchuria.[52]

As above, under the American occupation the members of Unit 731 and other experimental units were allowed to go free. One graduate of Unit 1644, Masami Kitaoka, continued to do experiments on unwilling Japanese subjects from 1947 to 1956 while working for Japan’s National Institute of Health Sciences. He infected prisoners with rickettsia and mental health patients with typhus.[53]

Japanese discussions of Unit 731’s activity began in the 1950s, after the end of the American occupation of Japan. In 1952, human experiments carried out in Nagoya City Pediatric Hospital, which resulted in one death, were publicly tied to former members of Unit 731.[54] Later in that decade, journalists suspected that the murders attributed by the government to Sadamichi Hirasawa were actually carried out by members of Unit 731. In 1958, Japanese author Shūsaku Endō published the book The Sea and Poison about human experimentation, which is thought to have been based on a real incident.

The author Seiichi Morimura published The Devil’s Gluttony (悪魔の飽食) in 1981, followed by The Devil’s Gluttony: A Sequel in 1983. These books purported to reveal the “true” operations of Unit 731, but actually confused them with those of Unit 100 and used unrelated photos falsely attributed to Unit 731, which raised questions about their accuracy.[55][56]

Also in 1981 appeared the first direct testimony of human vivisection in China, by Ken Yuasa. Since then many more in-depth testimonies have appeared in Japanese. The 2001 documentary Japanese Devils was composed largely of interviews with 14 members of Unit 731 who had been taken as prisoners by China and later released.[57]

Since the end of the Allied occupation, the Japanese government has repeatedly apologized for its pre-war behavior in general, but specific apologies and indemnities are determined on the basis of bilateral determination that crimes occurred, which requires a high standard of evidence. Unit 731 presents a special problem, since unlike Nazi human experimentation which the U.S. publicly condemned, the activities of Unit 731 are known to the general public only from the testimonies of willing former unit members, and testimony cannot be employed to determine indemnity in this way.

Japanese history textbooks usually contain references to Unit 731 but do not go into detail about allegations, in accordance with this principle.[58][59] Saburo Ienaga’s New History of Japan included a detailed description, based on officers’ testimony. The Ministry of Education attempted to remove this passage from his textbook before it was taught in public schools, on the basis that the testimony was insufficient. The Supreme Court of Japan ruled in 1997 that the testimony was indeed sufficient and that requiring it to be removed was an illegal violation of freedom of speech.[60]

In 1997, the international lawyer Kōnen Tsuchiya filed a class action suit against the Japanese government, demanding reparations for the actions of Unit 731 and using evidence filed by Professor Makoto Ueda of Rikkyo University. All Japanese court levels found that the suit was baseless. No findings of fact were made about the existence of human experimentation, but the decision of the court was that reparations are determined by international treaties, not by national court cases.

In October 2003, a member of the House of Representatives of Japan filed an inquiry. Japanese Prime Minister Junichiro Koizumi responded that the Japanese government did not then possess any records related to Unit 731, but the government recognized the gravity of the matter and would publicize any records that were located in the future.[61]

There have been several films about the atrocities of Unit 731.


Genetic Engineering | MSPCA-Angell

Posted: November 10, 2016 at 5:32 pm

The MSPCA believes scientists’ ability to clone animals, to alter the genetic makeup of an animal, and to transfer pieces of genetic material from one species to another raises serious concerns for animals and humans alike.

This page will explore issues related to genetic engineering, transgenic animals, and cloned animals. It will examine the implications of genetic engineering on human and animal welfare and will touch on some related moral and ethical concerns that our society has so far failed to completely address.

Definitions

Problems related to the physical and psychological well-being of cloned and transgenic animals, significant ethical concerns about the direct manipulation of genetic material, and questions about the value of life itself must all be carefully weighed against the potential benefits of genetic engineering for disease research, agricultural purposes, vaccine development, pharmaceutical products, and organ transplants.

Genetic engineering is, as yet, an imperfect science that yields imperfect results.

Changes in animal growth and development brought about by genetic engineering and cloning are less predictable, more rapid, and often more debilitating than changes brought about through the traditional process of selective breeding.

This is especially apparent with cloning. Success rates are incredibly low; on average, less than 5% of cloned embryos are born and survive.

Clones are created at a great cost to animals. The clones that are successful, as well as those that do not survive and the surrogates who carry them, suffer greatly. Many of the cloned animals that do survive are plagued by severe health problems.

Offspring suffer from severe birth defects such as Large Offspring Syndrome (LOS), in which the cloned offspring are significantly larger than normal fetuses; hydrops, a typically fatal condition in which the mother or the fetus swells with fluid; respiratory distress; developmental problems; malformed organs; musculoskeletal deformities; or weakened immune systems, to name only a few.

Additionally, surrogates are subjected to repeated invasive procedures to harvest their eggs, implant embryos, or, due to the offspring’s birth defects, deliver their offspring through surgical intervention. All of these problems occur at much higher rates than for offspring produced via traditional breeding methods.

Cloning increases existing animal welfare and environmental concerns related to animal agriculture.

In 1996, the birth of the ewe, Dolly, marked the first successful cloning of a mammal from adult cells. At the time of her birth, the researchers who created Dolly acknowledged the inefficiency of the new technology: it took 277 attempts to create this one sheep, and of these, only 29 early embryos developed, and an even smaller number of these developed into live fetuses. In the end, Dolly was the sole surviving clone. She was euthanized in 2003 at just 6 years of age, about half as old as sheep are expected to live, and with health problems more common in older sheep.

Since Dolly’s creation, the process of cloning has not demonstrated great improvement in efficiency or rates of success. A 2003 review of cloning in cattle found that less than 5% of cloned embryos transferred into surrogate cows survived; a 2016 study showed no noticeable increase in efficiency, with the success rate being about 1%.

Currently, research is focused on cloning for agricultural purposes. Used alone, or in concert with genetic engineering, the objective is to clone the best stock to reproduce whole herds or flocks with desired uniform characteristics of a specific trait, such as fast growth, leaner meat, or higher milk production. Cloning is often pursued to produce animals that grow faster so they can be slaughtered sooner and to raise more animals in a smaller space.

For example, transgenic fish are engineered to grow larger at a faster rate, and cows are injected with genetically engineered products to increase their productivity. Another example of this is the use of the genetically engineered drug bovine growth hormone (BGH or BST) to increase milk production in dairy cows. This has also been associated with increased cases of udder disease, spontaneous abortion, lameness, and shortened lifespan. The use of BGH is controversial; many countries (such as Canada, Japan, Australia, and countries in the EU) do not allow it, and many consumers try to avoid it. A rise in transgenic animals used for agriculture will only exacerbate current animal welfare and environmental concerns with existing intensive farming operations. (For more information on farming and animal welfare, visit the MSPCA’s Farm Animal Welfare page.)

Much remains unknown about the potential environmental impacts of widespread cloning of animals. The creation of genetically identical animals leads to concerns about limited agricultural animal gene pools. The effects of creating uniform herds of animals, and the resulting loss of biodiversity, have significant implications for the environment and for the ability of cloned herds to withstand diseases. This could affect the entire agriculture industry and the human food chain.

These issues became especially concerning when, in 2008, the Food and Drug Administration not only approved the sale of meat from the offspring of cloned animals, but also did not require that it be labeled as such. There have been few published studies that examine the composition of milk, meat, or eggs from cloned animals or their progeny, including the safety of eating those products. The health problems associated with cloned animals, particularly those that appear healthy but have concealed illnesses or problems that appear unexpectedly later in life, could potentially pose risks to the safety of the food products derived from those animals.

Genetically Engineered Pets

Companion animals have also been cloned. The first cloned cat, CC, was created in 2001. CC’s creation marked the beginning of the pet cloning industry, in which pet owners could pay to bank DNA from their companion dogs and cats to be cloned in the future. In 2005, the first cloned dog was created; later, the first commercially cloned dog followed, at a cost of $50,000. Many consumers assume that cloning will produce a carbon copy of their beloved pet, but this is not the case. Even though the animals are genetically identical, they often do not resemble each other physically or behaviorally.

To date, the pet cloning industry has not been largely successful. However, efforts to make cloning a successful commercial venture are still being put forth. RBio (formerly RNL Bio), a Korean biotechnology company, planned to create a research center that would produce 1,000 cloned dogs annually by 2013. However, RBio, considered a black market cloner, failed to make any significant strides in its cloning endeavors and seems to have been replaced by other companies, such as the South Korean-based Sooam Biotech, now the world’s leader in commercial pet cloning. Since 2006, Sooam has cloned over 800 dogs, in addition to other animals, such as cattle and pigs, for breed preservation and medical research.

While South Korean animal cloning expands, interest in companion animal cloning in the United States remains low. In 2009, the American company BioArts ceased its dog cloning services and ended its partnership with Sooam, stating in a press release that cloning procedures were still underdeveloped and that the cloning market itself was weak and unethical. Companion animal cloning causes concern not only because of the welfare issues inherent in the cloning process, but also because of its potential to contribute to the pet overpopulation problem in the US, as millions of animals in shelters wait for homes.

Cloning and Medical Research

Cloning is also used to produce copies of transgenic animals that have been created to mimic certain human diseases. The transgenic animals are created, then cloned, producing a supply of animals for biomedical testing.

A 1980 U.S. Supreme Court decision to permit the patenting of a microorganism that could digest crude oil had a great impact on animal welfare and genetic engineering. Until that time, the U.S. Patent Office had prohibited the patenting of living organisms. However, following the Supreme Court decision, the Patent Office interpreted this ruling to extend to the patenting of all higher life forms, paving the way for a tremendous explosion of corporate investment in genetic engineering research.

In 1988, the first animal patent was issued to Harvard University for the Oncomouse, a transgenic mouse genetically modified to be more prone to develop cancers mimicking human disease. Since then, millions of transgenic mice have been produced. Transgenic rats, rabbits, monkeys, fish, chickens, pigs, sheep, goats, cows, horses, cats, dogs, and other animals have also been created.

Both expected and unexpected results occur in the process of inserting new genetic material into an egg cell. Defective offspring can suffer from chromosomal abnormalities that can cause cancer, fatal bleeding disorders, inability to reproduce, early uterine death, lack of ability to nurse, and such diseases as arthritis, diabetes, liver disease, and kidney disease.

The production of transgenic animals is of concern because genetic engineering is often used to create animals with diseases that cause intense suffering. Among the diseases that can be produced in genetically engineered research mice are diabetes, cancer, cystic fibrosis, sickle-cell anemia, Huntington’s disease, Alzheimer’s disease, and a rare but severe neurological condition called Lesch-Nyhan syndrome that causes the sufferer to self-mutilate. Animals carrying the genes for these diseases can suffer for long periods of time, both in the laboratory and while they are kept on the shelf by laboratory animal suppliers.

Another reason for the production of transgenic animals is pharming, in which sheep and goats are modified to produce pharmaceuticals in their milk. In 2009, the first drug produced by genetically engineered animals was approved by the FDA. The drug ATryn, used to prevent fatal blood clots in humans, is derived from goats into which a segment of human DNA has been inserted, causing them to produce an anticoagulant protein in their milk. This marks the first time a drug has been manufactured from a herd of animals created specifically to produce a pharmaceutical.

A company has also manufactured a drug produced in the milk of transgenic rabbits to treat a dangerous tissue swelling caused by a human protein deficiency. Yet another pharmaceutical manufacturer, PharmAthene, was funded by the US Department of Defense to develop genetically engineered goats whose milk produces proteins used in a drug to treat nerve gas poisoning. The FDA also approved a drug whose primary proteins are also found in the milk of genetically engineered goats, who are kept at a farm in Framingham, Massachusetts. Additionally, a herd of cattle was recently developed that produces milk containing proteins that help to treat human emphysema. These animals are essentially used as pharmaceutical-production machines to manufacture only those substances they were genetically modified to produce; they are not used as part of the normal food supply chain for items such as meat or milk.

The transfer of animal tissues from one species to another raises potentially serious health issues for animals and humans alike.

Some animals are also genetically modified to produce tissues and organs to be used for human transplant purposes (xenotransplantation). Much effort is being focused in this area, as the demand for human organs for transplantation far exceeds the supply, with pigs the current focus of this research. While efforts to date have been hampered by a pig protein that can cause organ rejection by the recipient’s immune system, efforts are underway to develop genetically modified swine with a human protein that would mitigate the chance of organ rejection.

Little is known about the ways in which diseases can be spread from one species to another, raising concerns for both animals and people, and calling into question the safety of using transgenic pigs to supply organs for human transplant purposes. Scientists have identified various viruses common in the heart, spleen, and kidneys of pigs that could infect human cells. In addition, new research is shedding light on particles called prions that, along with viruses and bacteria, may transmit fatal diseases between animals and from animals to humans.

Acknowledging the potential for transmission of viruses from animals to humans, the National Institutes of Health, a part of the U.S. Department of Health and Human Services, issued a moratorium in 2015 on xenotransplantation until the risks are better understood, ceasing funding until more research has been carried out. With the science of genetic engineering, the possibilities are endless, but so too are the risks and concerns.

Genetic engineering research has broad ethical and moral ramifications with few established societal guidelines.

While biotechnology has been quietly revolutionizing the science for decades, public debate in the United States over the moral, ethical, and physical effects of this research has been insufficient. To quote Colorado State University philosopher Bernard Rollin, “We cannot control technology if we do not understand it, and we cannot understand it without a careful discussion of the moral questions to which it gives rise.”

Research into non-animal methods of achieving some of the same goals looks promising.

Researchers in the U.S. and elsewhere have found ways to genetically engineer cereal grains to produce human proteins. One example of this, developed in the early 2000s, is a strain of rice that can produce a human protein used to treat cystic fibrosis. Wheat, corn, and barley may also be able to be used in similar ways at dramatically lower financial and ethical costs than genetically engineering animals for this purpose.


Moon – Wikipedia

Posted: November 8, 2016 at 3:35 pm

The Moon is Earth’s only permanent natural satellite. It is the fifth-largest natural satellite in the Solar System, and the largest among planetary satellites relative to the size of the planet that it orbits (its primary). It is the second-densest satellite among those whose densities are known (after Jupiter’s satellite Io).

The average distance of the Moon from the Earth is 384,400 km (238,900 mi),[10][11] or 1.28 light-seconds.
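
The two figures are consistent: dividing the quoted average distance by the speed of light (a standard constant, not given in the text) recovers the 1.28-second light-travel time.

```python
# Quick check of the figure above: average Earth-Moon distance divided by
# the speed of light gives the quoted light-travel time of about 1.28 s.
AVERAGE_DISTANCE_KM = 384_400
SPEED_OF_LIGHT_KM_S = 299_792.458

print(AVERAGE_DISTANCE_KM / SPEED_OF_LIGHT_KM_S)   # ~1.28 seconds
```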

The Moon is thought to have formed about 4.5 billion years ago, not long after Earth. There are several hypotheses for its origin; the most widely accepted explanation is that the Moon formed from the debris left over after a giant impact between Earth and a Mars-sized body called Theia.

The Moon is in synchronous rotation with Earth, always showing the same face, with its near side marked by dark volcanic maria that fill the spaces between the bright ancient crustal highlands and the prominent impact craters. It is the second-brightest regularly visible celestial object in Earth’s sky, after the Sun, as measured by illuminance on Earth’s surface. Its surface is actually dark, although compared to the night sky it appears very bright, with a reflectance just slightly higher than that of worn asphalt. Its prominence in the sky and its regular cycle of phases have made the Moon an important cultural influence since ancient times on language, calendars, art, mythology, and apparently, the menstrual cycles of the female of the human species.

The Moon’s gravitational influence produces the ocean tides, body tides, and the slight lengthening of the day. The Moon’s current orbital distance is about thirty times the diameter of Earth, with its apparent size in the sky almost the same as that of the Sun, resulting in the Moon covering the Sun nearly precisely in total solar eclipse. This matching of apparent visual size will not continue in the far future. The Moon’s linear distance from Earth is currently increasing at a rate of 3.82 ± 0.07 centimetres (1.504 ± 0.028 in) per year, but this rate is not constant.
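
A quick back-of-the-envelope check of that near-match in apparent size, using approximate mean diameters and distances that are assumptions here rather than values from the article:

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent (angular) diameter of a sphere seen from a given distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Approximate mean values (assumed for illustration):
moon = angular_diameter_deg(3_474, 384_400)          # ~0.52 degrees
sun = angular_diameter_deg(1_391_400, 149_600_000)   # ~0.53 degrees
print(moon, sun)
```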

The Soviet Union’s Luna programme was the first to reach the Moon with uncrewed spacecraft in 1959; the United States’ NASA Apollo program achieved the only crewed missions to date, beginning with the first crewed lunar orbiting mission by Apollo 8 in 1968, and six crewed lunar landings between 1969 and 1972, with the first being Apollo 11. These missions returned over 380 kg (840 lb) of lunar rocks, which have been used to develop a geological understanding of the Moon’s origin, the formation of its internal structure, and its subsequent history. Since the Apollo 17 mission in 1972, the Moon has been visited only by uncrewed spacecraft.

The usual English proper name for Earth’s natural satellite is “the Moon”.[12][13] The noun moon is derived from moone (around 1380), which developed from mone (1135), which is derived from Old English mōna (dating from before 725), which ultimately stems from Proto-Germanic *mēnōn, like all Germanic language cognates.[14] Occasionally, the name “Luna” is used. In literature, especially science fiction, “Luna” is used to distinguish it from other moons, while in poetry, the name has been used to denote personification of our moon.[15]

The principal modern English adjective pertaining to the Moon is lunar, derived from the Latin Luna. A less common adjective is selenic, derived from the Ancient Greek Selene (Σελήνη), from which is derived the prefix “seleno-” (as in selenography).[16][17] Both the Greek Selene and the Roman goddess Diana were alternatively called Cynthia.[18] The names Luna, Cynthia, and Selene are reflected in terminology for lunar orbits in words such as apolune, pericynthion, and selenocentric. The name Diana is connected to dies, meaning ‘day’.

Several mechanisms have been proposed for the Moon’s formation 4.53 billion years ago,[f] some 30–50 million years after the origin of the Solar System.[19] Recent research presented by Rick Carlson indicates a slightly younger age of between 4.40 and 4.45 billion years.[20][21] These mechanisms included the fission of the Moon from Earth’s crust through centrifugal force[22] (which would require too great an initial spin of Earth),[23] the gravitational capture of a pre-formed Moon[24] (which would require an unfeasibly extended atmosphere of Earth to dissipate the energy of the passing Moon),[23] and the co-formation of Earth and the Moon together in the primordial accretion disk (which does not explain the depletion of metals in the Moon).[23] These hypotheses also cannot account for the high angular momentum of the Earth-Moon system.[25]

The prevailing hypothesis is that the Earth-Moon system formed as a result of the impact of a Mars-sized body (named Theia) with the proto-Earth (giant impact), which blasted material into orbit about the Earth that then accreted to form the present Earth-Moon system.[26][27]

This hypothesis, although not perfect, perhaps best explains the evidence. Eighteen months prior to an October 1984 conference on lunar origins, Bill Hartmann, Roger Phillips, and Jeff Taylor challenged fellow lunar scientists: “You have eighteen months. Go back to your Apollo data, go back to your computer, do whatever you have to, but make up your mind. Don’t come to our conference unless you have something to say about the Moon’s birth.” At the 1984 conference at Kona, Hawaii, the giant impact hypothesis emerged as the most popular.

Before the conference, there were partisans of the three “traditional” theories, plus a few people who were starting to take the giant impact seriously, and there was a huge apathetic middle who didn’t think the debate would ever be resolved. Afterward there were essentially only two groups: the giant impact camp and the agnostics.[28]

Giant impacts are thought to have been common in the early Solar System. Computer simulations of a giant impact have produced results that are consistent with the mass of the lunar core and the present angular momentum of the Earth-Moon system. These simulations also show that most of the Moon derived from the impactor, rather than the proto-Earth.[29] More recent simulations suggest a larger fraction of the Moon derived from the original Earth mass.[30][31][32][33] Studies of meteorites originating from inner Solar System bodies such as Mars and Vesta show that they have very different oxygen and tungsten isotopic compositions as compared to Earth, whereas Earth and the Moon have nearly identical isotopic compositions. The isotopic equalization of the Earth-Moon system might be explained by the post-impact mixing of the vaporized material that formed the two,[34] although this is debated.[35]

The great amount of energy released in the impact event and the subsequent re-accretion of that material into the Earth-Moon system would have melted the outer shell of Earth, forming a magma ocean.[36][37] Similarly, the newly formed Moon would also have been affected and had its own lunar magma ocean; estimates for its depth range from about 500 km (300 miles) to its entire depth of 1,737 km (1,079 miles).[36]

While the giant impact hypothesis might explain many lines of evidence, there are still some unresolved questions, most of which involve the Moon’s composition.[38]

In 2001, a team at the Carnegie Institute of Washington reported the most precise measurement of the isotopic signatures of lunar rocks.[39] To their surprise, the team found that the rocks from the Apollo program carried an isotopic signature that was identical with rocks from Earth, and were different from almost all other bodies in the Solar System. Because most of the material that went into orbit to form the Moon was thought to come from Theia, this observation was unexpected. In 2007, researchers from the California Institute of Technology announced that there was less than a 1% chance that Theia and Earth had identical isotopic signatures.[40] Published in 2012, an analysis of titanium isotopes in Apollo lunar samples showed that the Moon has the same composition as Earth,[41] which conflicts with what is expected if the Moon formed far from Earth’s orbit or from Theia. Variations on the giant impact hypothesis may explain this data.

The Moon is a differentiated body: it has a geochemically distinct crust, mantle, and core. The Moon has a solid iron-rich inner core with a radius of 240 km (150 mi) and a fluid outer core primarily made of liquid iron with a radius of roughly 300 km (190 mi). Around the core is a partially molten boundary layer with a radius of about 500 km (310 mi).[43] This structure is thought to have developed through the fractional crystallization of a global magma ocean shortly after the Moon’s formation 4.5 billion years ago.[44] Crystallization of this magma ocean would have created a mafic mantle from the precipitation and sinking of the minerals olivine, clinopyroxene, and orthopyroxene; after about three-quarters of the magma ocean had crystallised, lower-density plagioclase minerals could form and float into a crust atop.[45] The final liquids to crystallise would have been initially sandwiched between the crust and mantle, with a high abundance of incompatible and heat-producing elements.[1] Consistent with this perspective, geochemical mapping made from orbit suggests a crust of mostly anorthosite.[9] The Moon rock samples of the flood lavas that erupted onto the surface from partial melting in the mantle confirm the mafic mantle composition, which is more iron-rich than that of Earth.[1] The crust is on average about 50 km (31 mi) thick.[1]

The Moon is the second-densest satellite in the Solar System, after Io.[46] However, the inner core of the Moon is small, with a radius of about 350 km (220 mi) or less,[1] around 20% of the radius of the Moon. Its composition is not well defined, but is probably metallic iron alloyed with a small amount of sulfur and nickel; analyses of the Moon’s time-variable rotation suggest that it is at least partly molten.[47]

The topography of the Moon has been measured with laser altimetry and stereo image analysis.[48] Its most visible topographic feature is the giant far-side South Pole–Aitken basin, some 2,240 km (1,390 mi) in diameter, the largest crater on the Moon and the second-largest confirmed impact crater in the Solar System.[49][50] At 13 km (8.1 mi) deep, its floor is the lowest point on the surface of the Moon.[49][51] The highest elevations of the Moon’s surface are located directly to the northeast, and it has been suggested that this area might have been thickened by the oblique formation impact of the South Pole–Aitken basin.[52] Other large impact basins, such as Imbrium, Serenitatis, Crisium, Smythii, and Orientale, also possess regionally low elevations and elevated rims.[49] The far side of the lunar surface is on average about 1.9 km (1.2 mi) higher than that of the near side.[1]

The discovery of fault scarp cliffs by the Lunar Reconnaissance Orbiter suggests that the Moon has shrunk by about 90 metres (300 ft) within the past billion years.[53] Similar shrinkage features exist on Mercury.

The dark and relatively featureless lunar plains, which can clearly be seen with the naked eye, are called maria (Latin for “seas”; singular mare), as they were once believed to be filled with water;[54] they are now known to be vast solidified pools of ancient basaltic lava. Although similar to terrestrial basalts, lunar basalts have more iron and no minerals altered by water.[55][56] The majority of these lavas erupted or flowed into the depressions associated with impact basins. Several geologic provinces containing shield volcanoes and volcanic domes are found within the near side “maria”.[57]

Almost all maria are on the near side of the Moon, and cover 31% of the surface of the near side,[58] compared with 2% of the far side.[59] This is thought to be due to a concentration of heat-producing elements under the crust on the near side, seen on geochemical maps obtained by Lunar Prospector’s gamma-ray spectrometer, which would have caused the underlying mantle to heat up, partially melt, rise to the surface and erupt.[45][60][61] Most of the Moon’s mare basalts erupted during the Imbrian period, 3.0–3.5 billion years ago, although some radiometrically dated samples are as old as 4.2 billion years.[62] Until recently, the youngest eruptions, dated by crater counting, appeared to have been only 1.2 billion years ago.[63] In 2006, a study of Ina, a tiny depression in Lacus Felicitatis, found jagged, relatively dust-free features that, due to the lack of erosion by infalling debris, appeared to be only 2 million years old.[64] Moonquakes and releases of gas also indicate some continued lunar activity.[64] In 2014 NASA announced “widespread evidence of young lunar volcanism” at 70 irregular mare patches identified by the Lunar Reconnaissance Orbiter, some less than 50 million years old. This raises the possibility of a much warmer lunar mantle than previously believed, at least on the near side, where the deep crust is substantially warmer due to the greater concentration of radioactive elements.[65][66][67][68] Just prior to this, evidence had been presented for 2–10 million years younger basaltic volcanism inside Lowell crater,[69][70] in the Orientale basin, located in the transition zone between the near and far sides of the Moon. An initially hotter mantle and/or local enrichment of heat-producing elements in the mantle could be responsible for prolonged activity also on the far side in the Orientale basin.[71][72]

The lighter-coloured regions of the Moon are called terrae, or more commonly highlands, because they are higher than most maria. They have been radiometrically dated to having formed 4.4 billion years ago, and may represent plagioclase cumulates of the lunar magma ocean.[62][63] In contrast to Earth, no major lunar mountains are believed to have formed as a result of tectonic events.[73]

The concentration of maria on the near side likely reflects the substantially thicker crust of the highlands of the far side, which may have formed in a slow-velocity impact of a second moon of Earth a few tens of millions of years after their formation.[74][75]

The other major geologic process that has affected the Moon’s surface is impact cratering,[76] with craters formed when asteroids and comets collide with the lunar surface. There are estimated to be roughly 300,000 craters wider than 1 km (0.6 mi) on the Moon’s near side alone.[77] The lunar geologic timescale is based on the most prominent impact events, including Nectaris, Imbrium, and Orientale, structures characterized by multiple rings of uplifted material, between hundreds and thousands of kilometres in diameter and associated with a broad apron of ejecta deposits that form a regional stratigraphic horizon.[78] The lack of an atmosphere, weather and recent geological processes means that many of these craters are well-preserved. Although only a few multi-ring basins have been definitively dated, they are useful for assigning relative ages. Because impact craters accumulate at a nearly constant rate, counting the number of craters per unit area can be used to estimate the age of the surface.[78] The radiometric ages of impact-melted rocks collected during the Apollo missions cluster between 3.8 and 4.1 billion years old: this has been used to propose a Late Heavy Bombardment of impacts.[79]
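As a rough illustration of the crater-counting idea only, the following Python sketch converts a counted crater density into a model age under an assumed, constant production rate. The rate value and the example counts are placeholders, not calibrated figures; real chronologies use non-linear production functions tied to radiometrically dated Apollo samples.

    # Crude crater-counting age estimate (illustrative only).
    # ASSUMED_RATE is a placeholder production rate of craters wider than 1 km,
    # per km^2 per billion years, taken as constant for simplicity.
    ASSUMED_RATE = 2.5e-4

    def surface_age_gyr(crater_count, area_km2, rate=ASSUMED_RATE):
        """Model age in billions of years from a crater count over a given area."""
        density = crater_count / area_km2   # craters per km^2
        return density / rate               # age under the constant-rate assumption

    # Hypothetical example: 150 craters counted over a 200,000 km^2 mare surface.
    print(round(surface_age_gyr(150, 200_000), 1), "Gyr (model age)")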

Blanketed on top of the Moon’s crust is a highly comminuted (broken into ever smaller particles) and impact-gardened surface layer called regolith, formed by impact processes. The finer regolith, the lunar soil of silicon dioxide glass, has a texture resembling snow and a scent resembling spent gunpowder.[80] The regolith of older surfaces is generally thicker than for younger surfaces: it varies in thickness from roughly 10–20 m (33–66 ft) in the highlands to 3–5 m (9.8–16 ft) in the maria.[81] Beneath the finely comminuted regolith layer is the megaregolith, a layer of highly fractured bedrock many kilometres thick.[82]

Comparison of high-resolution images obtained by the Lunar Reconnaissance Orbiter has shown a contemporary crater-production rate significantly higher than previously estimated. A secondary cratering process caused by distal ejecta is thought to churn the top two centimetres of regolith a hundred times more quickly than previous models suggested, on a timescale of 81,000 years.[83][84]

Lunar swirls are enigmatic features found across the Moon’s surface, which are characterized by a high albedo, appearing optically immature (i.e. the optical characteristics of a relatively young regolith), and often displaying a sinuous shape. Their curvilinear shape is often accentuated by low albedo regions that wind between the bright swirls.

Liquid water cannot persist on the lunar surface. When exposed to solar radiation, water quickly decomposes through a process known as photodissociation and is lost to space. However, since the 1960s, scientists have hypothesized that water ice may be deposited by impacting comets or possibly produced by the reaction of oxygen-rich lunar rocks with hydrogen from the solar wind, leaving traces of water which could possibly survive in cold, permanently shadowed craters at either pole on the Moon.[85][86] Computer simulations suggest that up to 14,000 km² (5,400 sq mi) of the surface may be in permanent shadow.[87] The presence of usable quantities of water on the Moon is an important factor in rendering lunar habitation a cost-effective plan; the alternative of transporting water from Earth would be prohibitively expensive.[88]

In the years since, signatures of water have been found to exist on the lunar surface.[89] In 1994, the bistatic radar experiment located on the Clementine spacecraft indicated the existence of small, frozen pockets of water close to the surface. However, later radar observations by Arecibo suggest these findings may rather be rocks ejected from young impact craters.[90] In 1998, the neutron spectrometer on the Lunar Prospector spacecraft showed that high concentrations of hydrogen are present in the first meter of depth in the regolith near the polar regions.[91] Volcanic lava beads, brought back to Earth aboard Apollo 15, showed small amounts of water in their interior.[92]

The 2008 Chandrayaan-1 spacecraft has since confirmed the existence of surface water ice, using the on-board Moon Mineralogy Mapper. The spectrometer observed absorption lines common to hydroxyl in reflected sunlight, providing evidence of large quantities of water ice on the lunar surface. The spacecraft showed that concentrations may possibly be as high as 1,000 ppm.[93] In 2009, LCROSS sent a 2,300 kg (5,100 lb) impactor into a permanently shadowed polar crater, and detected at least 100 kg (220 lb) of water in a plume of ejected material.[94][95] Another examination of the LCROSS data showed the amount of detected water to be closer to 155 ± 12 kg (342 ± 26 lb).[96]

In May 2011, 615–1410 ppm water in melt inclusions in lunar sample 74220 was reported,[97] the famous high-titanium “orange glass soil” of volcanic origin collected during the Apollo 17 mission in 1972. The inclusions were formed during explosive eruptions on the Moon approximately 3.7 billion years ago. This concentration is comparable with that of magma in Earth’s upper mantle. Although of considerable selenological interest, Hauri’s announcement affords little comfort to would-be lunar colonists: the sample originated many kilometers below the surface, and the inclusions are so difficult to access that it took 39 years to find them with a state-of-the-art ion microprobe instrument.

The gravitational field of the Moon has been measured through tracking the Doppler shift of radio signals emitted by orbiting spacecraft. The main lunar gravity features are mascons, large positive gravitational anomalies associated with some of the giant impact basins, partly caused by the dense mare basaltic lava flows that fill those basins.[98][99] The anomalies greatly influence the orbit of spacecraft about the Moon. There are some puzzles: lava flows by themselves cannot explain all of the gravitational signature, and some mascons exist that are not linked to mare volcanism.[100]

The Moon has an external magnetic field of about 1–100 nanoteslas, less than one-hundredth that of Earth. It does not currently have a global dipolar magnetic field and only has crustal magnetization, probably acquired early in lunar history when a dynamo was still operating.[101][102] Alternatively, some of the remnant magnetization may be from transient magnetic fields generated during large impact events through the expansion of an impact-generated plasma cloud in the presence of an ambient magnetic field. This is supported by the apparent location of the largest crustal magnetizations near the antipodes of the giant impact basins.[103]

The Moon has an atmosphere so tenuous as to be nearly vacuum, with a total mass of less than 10 metric tons (9.8 long tons; 11 short tons).[106] The surface pressure of this small mass is around 3 × 10⁻¹⁵ atm (0.3 nPa); it varies with the lunar day. Its sources include outgassing and sputtering, a product of the bombardment of lunar soil by solar wind ions.[9][107] Elements that have been detected include sodium and potassium, produced by sputtering (also found in the atmospheres of Mercury and Io); helium-4 and neon[108] from the solar wind; and argon-40, radon-222, and polonium-210, outgassed after their creation by radioactive decay within the crust and mantle.[109][110] The absence of such neutral species (atoms or molecules) as oxygen, nitrogen, carbon, hydrogen and magnesium, which are present in the regolith, is not understood.[109] Water vapour has been detected by Chandrayaan-1 and found to vary with latitude, with a maximum at ~60–70 degrees; it is possibly generated from the sublimation of water ice in the regolith.[111] These gases either return into the regolith because of the Moon’s gravity or are lost to space, either through solar radiation pressure or, if they are ionized, by being swept away by the solar wind’s magnetic field.[109]
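As a quick sanity check of the quoted pressure figures, the short Python snippet below converts 3 × 10⁻¹⁵ atm to nanopascals; the only assumed constant is 101,325 Pa per standard atmosphere.

    # Convert the quoted lunar surface pressure from atmospheres to nanopascals.
    PA_PER_ATM = 101_325.0

    pressure_atm = 3e-15                             # figure quoted above
    pressure_npa = pressure_atm * PA_PER_ATM * 1e9   # Pa -> nPa
    print(f"{pressure_npa:.2f} nPa")                 # ~0.30 nPa, matching the text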

A permanent asymmetric dust cloud exists around the Moon, created by small particles from comets. An estimated 5 tons of comet particles strike the Moon’s surface every 24 hours, ejecting dust above the surface. The dust stays above the Moon for approximately 10 minutes, taking 5 minutes to rise and 5 minutes to fall. On average, 120 kilograms of dust are present above the Moon, rising to 100 kilometers above the surface. The dust measurements were made by LADEE’s Lunar Dust EXperiment (LDEX) between 20 and 100 kilometers above the surface, during a six-month period. LDEX detected an average of one 0.3-micrometer dust particle each minute. Dust particle counts peaked during the Geminid, Quadrantid, Northern Taurid, and Omicron Centaurid meteor showers, when the Earth and Moon pass through comet debris. The cloud is asymmetric, being denser near the boundary between the Moon’s dayside and nightside.[112][113]

The Moon’s axial tilt with respect to the ecliptic is only 1.5424°,[114] much less than the 23.44° of Earth. Because of this, the Moon’s solar illumination varies much less with season, and topographical details play a crucial role in seasonal effects.[115] From images taken by Clementine in 1994, it appears that four mountainous regions on the rim of Peary Crater at the Moon’s north pole may remain illuminated for the entire lunar day, creating peaks of eternal light. No such regions exist at the south pole. Similarly, there are places that remain in permanent shadow at the bottoms of many polar craters,[87] and these dark craters are extremely cold: Lunar Reconnaissance Orbiter measured the lowest summer temperatures in craters at the southern pole at 35 K (−238 °C; −397 °F)[116] and just 26 K (−247 °C; −413 °F) close to the winter solstice in the north polar Hermite Crater. This is the coldest temperature in the Solar System ever measured by a spacecraft, colder even than the surface of Pluto.[115] Average temperatures of the Moon’s surface are reported, but temperatures of different areas will vary greatly depending upon whether they are in sunlight or shadow.[117]

The Moon makes a complete orbit around Earth with respect to the fixed stars about once every 27.3 days[g] (its sidereal period). However, because Earth is moving in its orbit around the Sun at the same time, it takes slightly longer for the Moon to show the same phase to Earth: about 29.5 days[h] (its synodic period).[58] Unlike most satellites of other planets, the Moon orbits closer to the ecliptic plane than to the planet’s equatorial plane. The Moon’s orbit is subtly perturbed by the Sun and Earth in many small, complex and interacting ways. For example, the plane of the Moon’s orbital motion gradually rotates, which affects other aspects of lunar motion. These follow-on effects are mathematically described by Cassini’s laws.[118]
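The gap between the two periods follows from Earth's own motion around the Sun: over one lunar orbit the Earth–Sun direction shifts, so the Moon needs extra time to return to the same phase. A minimal Python check, assuming a 27.32-day sidereal month and a 365.25-day year, recovers the ~29.5-day synodic month:

    # Synodic month from the sidereal month and the length of the year.
    sidereal_month_days = 27.32
    year_days = 365.25

    synodic_month_days = 1.0 / (1.0 / sidereal_month_days - 1.0 / year_days)
    print(f"{synodic_month_days:.2f} days")   # ~29.53 days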

The Moon is exceptionally large relative to Earth: a quarter its diameter and 1/81 its mass.[58] It is the largest moon in the Solar System relative to the size of its planet,[i] though Charon is larger relative to the dwarf planet Pluto, at 1/9 Pluto’s mass.[j][119] Earth and the Moon are nevertheless still considered a planet–satellite system, rather than a double planet, because their barycentre, the common centre of mass, is located 1,700 km (1,100 mi) (about a quarter of Earth’s radius) beneath Earth’s surface.[120]
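The barycentre figure can be checked with the two-body relation r = a × m_Moon / (m_Earth + m_Moon). A minimal Python sketch with round-number inputs (mean distance 384,400 km, an Earth/Moon mass ratio of about 81.3, and a 6,371 km Earth radius, all assumed approximations) lands close to the quoted 1,700 km depth:

    # Depth of the Earth-Moon barycentre below Earth's surface (approximate).
    mean_distance_km = 384_400   # mean Earth-Moon separation
    mass_ratio = 81.3            # Earth mass / Moon mass
    earth_radius_km = 6_371      # mean Earth radius

    barycentre_from_centre_km = mean_distance_km / (1 + mass_ratio)   # ~4,670 km
    depth_km = earth_radius_km - barycentre_from_centre_km
    print(f"~{depth_km:.0f} km below the surface")                    # ~1,700 km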

The Moon is in synchronous rotation: it rotates about its axis in about the same time it takes to orbit Earth. This results in it nearly always keeping the same face turned towards Earth. The Moon used to rotate at a faster rate, but early in its history its rotation slowed and became tidally locked in this orientation as a result of frictional effects associated with tidal deformations caused by Earth.[121] With time, the energy of rotation of the Moon on its axis was dissipated as heat, until there was no rotation of the Moon relative to Earth. The side of the Moon that faces Earth is called the near side, and the opposite side the far side. The far side is often inaccurately called the “dark side”, but it is in fact illuminated as often as the near side: once per lunar day, during the new moon phase we observe on Earth when the near side is dark.[122] In 2016, planetary scientists, using data collected on the much earlier NASA Lunar Prospector mission, found two hydrogen-rich areas on opposite sides of the Moon, probably in the form of water ice. It is speculated that these patches were the poles of the Moon billions of years ago, before it was tidally locked to Earth.[123]

The Moon has an exceptionally low albedo, giving it a reflectance that is slightly brighter than that of worn asphalt. Despite this, it is the brightest object in the sky after the Sun.[58][k] This is partly due to the brightness enhancement of the opposition effect; at quarter phase, the Moon is only one-tenth as bright, rather than half as bright, as at full moon.[124]

Additionally, colour constancy in the visual system recalibrates the relations between the colours of an object and its surroundings, and because the surrounding sky is comparatively dark, the sunlit Moon is perceived as a bright object. The edges of the full moon seem as bright as the centre, with no limb darkening, due to the reflective properties of lunar soil, which reflects more light back towards the Sun than in other directions. The Moon does appear larger when close to the horizon, but this is a purely psychological effect, known as the Moon illusion, first described in the 7th century BC.[125] The full moon subtends an arc of about 0.52° (on average) in the sky, roughly the same apparent size as the Sun (see Eclipses).

The highest altitude of the Moon in the sky varies with the lunar phase and the season of the year. The full moon is highest in the sky during winter. The 18.6-year nodal cycle also has an influence: when the ascending node of the lunar orbit is in the vernal equinox, the lunar declination can reach up to 28° each month. This means the Moon can pass overhead at latitudes up to 28° from the equator, instead of only 18°. The orientation of the Moon’s crescent also depends on the latitude of the observation site: close to the equator, an observer can see a smile-shaped crescent moon.[126]

The Moon is visible for two weeks every 27.3 days at the North and South Pole. The Moon’s light is used by zooplankton in the Arctic when the sun is below the horizon for months on end.[127]

The distance between the Moon and Earth varies from around 356,400 km (221,500 mi) to 406,700 km (252,700 mi) at perigee (closest) and apogee (farthest), respectively. On 19 March 2011, it was closer to Earth when at full phase than it had been since 1993, 14% closer than its farthest position at apogee.[128] Reported as a “super moon”, this closest point coincided within an hour of a full moon, and it was 30% more luminous than when at its greatest distance because its angular diameter was 14% greater, since 1.14² ≈ 1.30.[129][130][131] At lower levels, the human perception of reduced brightness as a percentage is provided by the following formula:[132][133]

perceived reduction % = 100 × √(actual reduction % / 100)

When the actual reduction is 1.00 / 1.30, or about 0.770, the perceived reduction is about 0.877, or 1.00 / 1.14. This gives a maximum perceived increase of 14% between apogee and perigee moons of the same phase.[134]
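A minimal Python sketch of the two relations used above (brightness scaling with the square of the angular diameter, and the square-root rule applied to the remaining light level) reproduces the quoted numbers; the only input is the 14% figure from the text:

    import math

    # A perigee full moon appears ~14% wider than an apogee one, so it delivers
    # about 1.14^2 = 1.30 times as much light.
    diameter_ratio = 1.14
    luminosity_ratio = diameter_ratio ** 2
    print(f"luminosity ratio ~ {luminosity_ratio:.2f}")        # ~1.30

    def perceived_level_pct(actual_level_pct):
        """Square-root rule from the formula above, applied to the remaining light level."""
        return 100.0 * math.sqrt(actual_level_pct / 100.0)

    # The apogee moon delivers 1/1.30, about 77%, of the perigee moon's light,
    # which is perceived as roughly 88% (i.e. 1/1.14), a ~14% perceived difference.
    actual = 100.0 / luminosity_ratio
    print(f"perceived level ~ {perceived_level_pct(actual):.1f}%")   # ~87.7%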

There has been historical controversy over whether features on the Moon’s surface change over time. Today, many of these claims are thought to be illusory, resulting from observation under different lighting conditions, poor astronomical seeing, or inadequate drawings. However, outgassing does occasionally occur, and could be responsible for a minor percentage of the reported lunar transient phenomena. Recently, it has been suggested that a roughly 3 km (1.9 mi) diameter region of the lunar surface was modified by a gas release event about a million years ago.[135][136] The Moon’s appearance, like that of the Sun, can be affected by Earth’s atmosphere: common effects are a 22° halo ring formed when the Moon’s light is refracted through the ice crystals of high cirrostratus cloud, and smaller coronal rings when the Moon is seen through thin clouds.[137]

The illuminated area of the visible sphere (degree of illumination) is given by (1 − cos e)/2, where e is the elongation (i.e. the angle between the Moon, the observer on Earth, and the Sun).
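A minimal Python check of this formula at a few elongations (0° corresponds to new moon, 90° to quarter phase, 180° to full moon):

    import math

    def illuminated_fraction(elongation_deg):
        """Fraction of the visible disc that is lit: (1 - cos e) / 2."""
        e = math.radians(elongation_deg)
        return 0.5 * (1.0 - math.cos(e))

    for e in (0, 90, 135, 180):
        print(e, round(illuminated_fraction(e), 3))   # 0.0, 0.5, 0.854, 1.0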

The gravitational attraction that masses have for one another decreases inversely with the square of the distance of those masses from each other. As a result, the slightly greater attraction that the Moon has for the side of Earth closest to the Moon, as compared to the part of the Earth opposite the Moon, results in tidal forces. Tidal forces affect both the Earth’s crust and oceans.
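Because the tidal effect is the difference between the Moon's pull on the near side and on Earth's centre, it falls off roughly as the inverse cube of the distance rather than the inverse square. A minimal Python sketch with approximate values (Moon's mass 7.35 × 10²² kg, mean distance 384,400 km, Earth radius 6,371 km, all assumed) shows the size of that differential and compares it with the leading-order inverse-cube estimate 2GMr/d³:

    # Differential (tidal) acceleration of the Moon on Earth's near side,
    # relative to Earth's centre, using approximate values.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_MOON = 7.35e22     # Moon's mass, kg (approximate)
    D = 3.844e8          # mean Earth-Moon distance, m
    R_EARTH = 6.371e6    # Earth's mean radius, m

    g_near = G * M_MOON / (D - R_EARTH) ** 2      # pull on the near side
    g_centre = G * M_MOON / D ** 2                # pull on Earth's centre
    tidal = g_near - g_centre                     # exact differential
    estimate = 2 * G * M_MOON * R_EARTH / D ** 3  # inverse-cube approximation

    print(f"{tidal:.2e} m/s^2 vs estimate {estimate:.2e} m/s^2")   # both ~1.1e-6 m/s^2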

The most obvious effect of tidal forces is to cause two bulges in the Earth’s oceans, one on the side facing the Moon and the other on the side opposite. This results in elevated sea levels called ocean tides.[138] As the Earth spins on its axis, one of the ocean bulges (high tide) is held in place “under” the Moon, while another such tide is opposite. As a result, there are two high tides and two low tides in about 24 hours.[138] Since the Moon is orbiting the Earth in the same direction as the Earth’s rotation, the high tides occur about every 12 hours and 25 minutes; the extra 25 minutes is due to the Moon’s own motion along its orbit in that time. The Sun has the same tidal effect on the Earth, but its forces of attraction are only 40% that of the Moon’s; the interplay of the Sun and Moon is responsible for spring and neap tides.[138] If the Earth were a water world (one with no continents) it would produce a tide of only one meter, and that tide would be very predictable, but the ocean tides are greatly modified by other effects: the frictional coupling of water to Earth’s rotation through the ocean floors, the inertia of water’s movement, ocean basins that grow shallower near land, and the sloshing of water between different ocean basins.[139] As a result, the timing of the tides at most points on the Earth is a product of observations that are explained, incidentally, by theory.

While gravitation causes acceleration and movement of the Earth’s fluid oceans, gravitational coupling between the Moon and Earth’s solid body is mostly elastic and plastic. The result is a further tidal effect of the Moon on the Earth that causes a bulge of the solid portion of the Earth nearest the Moon that acts as a torque in opposition to the Earth’s rotation. This “drains” angular momentum and rotational kinetic energy from Earth’s spin, slowing the Earth’s rotation.[138][140] That angular momentum, lost from the Earth, is transferred to the Moon in a process (confusingly known as tidal acceleration) which lifts the Moon into a higher orbit and results in its lower orbital speed about the Earth. Thus the distance between Earth and Moon is increasing, and the Earth’s spin is slowing in reaction.[140] Measurements from laser reflectors left during the Apollo missions (lunar ranging experiments) have found that the Moon’s distance increases by 38 mm (1.5 in) per year[141] (roughly the rate at which human fingernails grow).[142] Atomic clocks also show that Earth’s day lengthens by about 15 microseconds every year,[143] slowly increasing the rate at which UTC is adjusted by leap seconds. Left to run its course, this tidal drag would continue until the spin of Earth and the orbital period of the Moon matched, creating mutual tidal locking between the two. As a result, the Moon would be suspended in the sky over one meridian, as is already the case with Pluto and its moon Charon. However, the Sun will become a red giant long before that, engulfing Earth, so these consequences need not concern us.[144][145]

In a like manner, the lunar surface experiences tides of around 10 cm (4 in) amplitude over 27 days, with two components: a fixed one due to Earth, because the two bodies are in synchronous rotation, and a varying component from the Sun.[140] The Earth-induced component arises from libration, a result of the Moon’s orbital eccentricity (if the Moon’s orbit were perfectly circular, there would only be solar tides).[140] Libration also changes the angle from which the Moon is seen, allowing a total of about 59% of its surface to be seen from Earth over time.[58] The cumulative effects of stress built up by these tidal forces produce moonquakes. Moonquakes are much less common and weaker than earthquakes, although they can last for up to an hour, significantly longer than terrestrial quakes, because of the absence of water to damp out the seismic vibrations. The existence of moonquakes was an unexpected discovery from seismometers placed on the Moon by Apollo astronauts from 1969 through 1972.[146]

Eclipses can only occur when the Sun, Earth, and Moon are all in a straight line (termed “syzygy”). Solar eclipses occur at new moon, when the Moon is between the Sun and Earth. In contrast, lunar eclipses occur at full moon, when Earth is between the Sun and Moon. The apparent size of the Moon is roughly the same as that of the Sun, with both being viewed at close to one-half a degree wide. The Sun is much larger than the Moon, but it is its vastly greater distance that gives it the same apparent size as the much closer and much smaller Moon from the perspective of Earth. The variations in apparent size, due to the non-circular orbits, are nearly the same as well, though occurring in different cycles. This makes possible both total (with the Moon appearing larger than the Sun) and annular (with the Moon appearing smaller than the Sun) solar eclipses.[148] In a total eclipse, the Moon completely covers the disc of the Sun and the solar corona becomes visible to the naked eye. Because the distance between the Moon and Earth is very slowly increasing over time,[138] the angular diameter of the Moon is decreasing. Also, as it evolves toward becoming a red giant, the size of the Sun, and its apparent diameter in the sky, are slowly increasing.[l] The combination of these two changes means that hundreds of millions of years ago, the Moon would always completely cover the Sun in solar eclipses, and no annular eclipses were possible. Likewise, hundreds of millions of years in the future, the Moon will no longer cover the Sun completely, and total solar eclipses will not occur.[149]
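A minimal Python check of the near-equality of apparent sizes, using approximate mean diameters and distances (all assumed round numbers):

    import math

    moon_diameter_km, moon_distance_km = 3_474, 384_400
    sun_diameter_km, sun_distance_km = 1_391_000, 149_600_000

    def angular_size_deg(diameter_km, distance_km):
        """Apparent angular diameter, in degrees, of a distant sphere."""
        return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

    print(f"Moon: {angular_size_deg(moon_diameter_km, moon_distance_km):.2f} deg")  # ~0.52
    print(f"Sun:  {angular_size_deg(sun_diameter_km, sun_distance_km):.2f} deg")    # ~0.53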

Because the Moon’s orbit around Earth is inclined by about 5° to the orbit of Earth around the Sun, eclipses do not occur at every full and new moon. For an eclipse to occur, the Moon must be near the intersection of the two orbital planes.[150] The periodicity and recurrence of eclipses of the Sun by the Moon, and of the Moon by Earth, is described by the saros, which has a period of approximately 18 years.[151]

Because the Moon is continuously blocking our view of a half-degree-wide circular area of the sky,[m][152] the related phenomenon of occultation occurs when a bright star or planet passes behind the Moon and is occulted: hidden from view. In this way, a solar eclipse is an occultation of the Sun. Because the Moon is comparatively close to Earth, occultations of individual stars are not visible everywhere on the planet, nor at the same time. Because of the precession of the lunar orbit, each year different stars are occulted.[153]

Understanding of the Moon’s cycles was an early development of astronomy: by the 5th century BC, Babylonian astronomers had recorded the 18-year Saros cycle of lunar eclipses,[154] and Indian astronomers had described the Moon’s monthly elongation.[155] The Chinese astronomer Shi Shen (fl. 4th century BC) gave instructions for predicting solar and lunar eclipses. Later, the physical form of the Moon and the cause of moonlight became understood. The ancient Greek philosopher Anaxagoras (d. 428 BC) reasoned that the Sun and Moon were both giant spherical rocks, and that the latter reflected the light of the former.[157] Although the Chinese of the Han Dynasty believed the Moon to be energy equated to qi, their ‘radiating influence’ theory also recognized that the light of the Moon was merely a reflection of the Sun, and Jing Fang (78–37 BC) noted the sphericity of the Moon. In the 2nd century AD, Lucian wrote a novel in which the heroes travel to the Moon, which is inhabited. In 499 AD, the Indian astronomer Aryabhata mentioned in his Aryabhatiya that reflected sunlight is the cause of the shining of the Moon.[160] The astronomer and physicist Alhazen (965–1039) found that sunlight was not reflected from the Moon like a mirror, but that light was emitted from every part of the Moon’s sunlit surface in all directions.[161] Shen Kuo (1031–1095) of the Song dynasty created an allegory equating the waxing and waning of the Moon to a round ball of reflective silver that, when doused with white powder and viewed from the side, would appear to be a crescent.

In Aristotle’s (384–322 BC) description of the universe, the Moon marked the boundary between the spheres of the mutable elements (earth, water, air and fire) and the imperishable stars of aether, an influential philosophy that would dominate for centuries.[163] However, in the 2nd century BC, Seleucus of Seleucia correctly theorized that tides were due to the attraction of the Moon, and that their height depends on the Moon’s position relative to the Sun.[164] In the same century, Aristarchus computed the size and distance of the Moon from Earth, obtaining a value of about twenty times the radius of Earth for the distance. These figures were greatly improved by Ptolemy (90–168 AD): his values of a mean distance of 59 times Earth’s radius and a diameter of 0.292 Earth diameters were close to the correct values of about 60 and 0.273 respectively.[165] Archimedes (287–212 BC) designed a planetarium that could calculate the motions of the Moon and other objects in the Solar System.[166]

During the Middle Ages, before the invention of the telescope, the Moon was increasingly recognised as a sphere, though many believed that it was “perfectly smooth”.[167]

In 1609, Galileo Galilei drew one of the first telescopic drawings of the Moon in his book Sidereus Nuncius and noted that it was not smooth but had mountains and craters. Telescopic mapping of the Moon followed: later in the 17th century, the efforts of Giovanni Battista Riccioli and Francesco Maria Grimaldi led to the system of naming lunar features in use today. The more exact 1834–36 Mappa Selenographica of Wilhelm Beer and Johann Heinrich Mädler, and their associated 1837 book Der Mond, the first trigonometrically accurate study of lunar features, included the heights of more than a thousand mountains, and introduced the study of the Moon at accuracies possible in earthly geography.[168] Lunar craters, first noted by Galileo, were thought to be volcanic until the 1870s proposal of Richard Proctor that they were formed by collisions.[58] This view gained support in 1892 from the experimentation of geologist Grove Karl Gilbert, and from comparative studies from 1920 to the 1940s,[169] leading to the development of lunar stratigraphy, which by the 1950s was becoming a new and growing branch of astrogeology.[58]

The Cold War-inspired Space Race between the Soviet Union and the U.S. led to an acceleration of interest in exploration of the Moon. Once launchers had the necessary capabilities, these nations sent uncrewed probes on both flyby and impact/lander missions. Spacecraft from the Soviet Union’s Luna program were the first to accomplish a number of goals: following three unnamed, failed missions in 1958,[170] the first human-made object to escape Earth’s gravity and pass near the Moon was Luna 1; the first human-made object to impact the lunar surface was Luna 2, and the first photographs of the normally occluded far side of the Moon were made by Luna 3, all in 1959.

The first spacecraft to perform a successful lunar soft landing was Luna 9, and the first uncrewed vehicle to orbit the Moon was Luna 10, both in 1966.[58] Rock and soil samples were brought back to Earth by three Luna sample return missions (Luna 16 in 1970, Luna 20 in 1972, and Luna 24 in 1976), which returned 0.3 kg in total.[171] Two pioneering robotic rovers landed on the Moon in 1970 and 1973 as part of the Soviet Lunokhod programme.

The United States launched uncrewed probes to develop an understanding of the lunar surface for an eventual crewed landing: the Jet Propulsion Laboratory’s Ranger program produced the first close-up pictures; the Lunar Orbiter program produced maps of the entire Moon; the Surveyor program landed its first spacecraft four months after Luna 9. NASA’s crewed Apollo program was developed in parallel; after a series of uncrewed and crewed tests of the Apollo spacecraft in Earth orbit, and spurred on by a potential Soviet lunar flight, in 1968 Apollo 8 made the first crewed mission to lunar orbit. The subsequent landing of the first humans on the Moon in 1969 is seen by many as the culmination of the Space Race.[172]

Neil Armstrong became the first person to walk on the Moon as the commander of the American mission Apollo 11, first setting foot on the Moon at 02:56 UTC on 21 July 1969.[173] An estimated 500 million people worldwide watched the transmission by the Apollo TV camera, the largest television audience for a live broadcast at that time.[174][175] The Apollo missions 11 to 17 (except Apollo 13, which aborted its planned lunar landing) returned 380.05 kilograms (837.87 lb) of lunar rock and soil in 2,196 separate samples.[176] The American Moon landing and return was enabled by considerable technological advances in the early 1960s, in domains such as ablation chemistry, software engineering and atmospheric re-entry technology, and by highly competent management of the enormous technical undertaking.[177][178]

Scientific instrument packages were installed on the lunar surface during all the Apollo landings. Long-lived instrument stations, including heat flow probes, seismometers, and magnetometers, were installed at the Apollo 12, 14, 15, 16, and 17 landing sites. Direct transmission of data to Earth concluded in late 1977 due to budgetary considerations,[179][180] but as the stations’ lunar laser ranging corner-cube retroreflector arrays are passive instruments, they are still being used. Ranging to the stations is routinely performed from Earth-based stations with an accuracy of a few centimetres, and data from this experiment are being used to place constraints on the size of the lunar core.[181]

After the first Moon race there were years of near quietude, but starting in the 1990s, many more countries have become involved in direct exploration of the Moon. In 1990, Japan became the third country to place a spacecraft into lunar orbit with its Hiten spacecraft. The spacecraft released a smaller probe, Hagoromo, in lunar orbit, but the transmitter failed, preventing further scientific use of the mission.[182] In 1994, the U.S. sent the joint Defense Department/NASA spacecraft Clementine to lunar orbit. This mission obtained the first near-global topographic map of the Moon, and the first global multispectral images of the lunar surface.[183] This was followed in 1998 by the Lunar Prospector mission, whose instruments indicated the presence of excess hydrogen at the lunar poles, which is likely to have been caused by the presence of water ice in the upper few meters of the regolith within permanently shadowed craters.[184]

India, Japan, China, the United States, and the European Space Agency have each sent lunar orbiters; ISRO’s Chandrayaan-1 in particular contributed to confirming the discovery of lunar water ice in permanently shadowed craters at the poles and bound into the lunar regolith. The post-Apollo era has also seen two rover missions: the final Soviet Lunokhod mission in 1973, and China’s ongoing Chang’e 3 mission, which deployed its Yutu rover on 14 December 2013. The Moon remains, under the Outer Space Treaty, free to all nations to explore for peaceful purposes.

The European spacecraft SMART-1, the second ion-propelled spacecraft, was in lunar orbit from 15 November 2004 until its lunar impact on 3 September 2006, and made the first detailed survey of chemical elements on the lunar surface.[185]

China has pursued an ambitious program of lunar exploration, beginning with Chang’e 1, which successfully orbited the Moon from 5 November 2007 until its controlled lunar impact on 1 March 2009.[186] In its sixteen-month mission, it obtained a full image map of the Moon. China followed up this success with Chang’e 2, beginning in October 2010, which reached the Moon over twice as fast as Chang’e 1, mapped the Moon at a higher resolution over an eight-month period, then left lunar orbit in favor of an extended stay at the Earth–Sun L2 Lagrangian point, before finally performing a flyby of asteroid 4179 Toutatis on 13 December 2012, and then heading off into deep space. On 14 December 2013, Chang’e 3 improved upon its orbital mission predecessors by landing a lunar lander on the Moon’s surface, which in turn deployed a lunar rover, named Yutu (Chinese: 玉兔; literally “Jade Rabbit”). In so doing, Chang’e 3 made the first lunar soft landing since Luna 24 in 1976, and the first lunar rover mission since Lunokhod 2 in 1973. China intends to launch another rover mission (Chang’e 4) before 2020, followed by a sample return mission (Chang’e 5) soon after.[187]

Between 4 October 2007 and 10 June 2009, the Japan Aerospace Exploration Agency’s Kaguya (Selene) mission, a lunar orbiter fitted with a high-definition video camera, and two small radio-transmitter satellites, obtained lunar geophysics data and took the first high-definition movies from beyond Earth orbit.[188][189] India’s first lunar mission, Chandrayaan I, orbited from 8 November 2008 until loss of contact on 27 August 2009, creating a high resolution chemical, mineralogical and photo-geological map of the lunar surface, and confirming the presence of water molecules in lunar soil.[190] The Indian Space Research Organisation planned to launch Chandrayaan II in 2013, which would have included a Russian robotic lunar rover.[191][192] However, the failure of Russia’s Fobos-Grunt mission has delayed this project.

The U.S. co-launched the Lunar Reconnaissance Orbiter (LRO) and the LCROSS impactor and follow-up observation orbiter on 18 June 2009; LCROSS completed its mission by making a planned and widely observed impact in the crater Cabeus on 9 October 2009,[193] whereas LRO is currently in operation, obtaining precise lunar altimetry and high-resolution imagery. In November 2011, the LRO passed over the Aristarchus crater, which spans 40 km (25 mi) and sinks more than 3.5 km (2.2 mi) deep. The crater is one of the most visible ones from Earth. “The Aristarchus plateau is one of the most geologically diverse places on the Moon: a mysterious raised flat plateau, a giant rille carved by enormous outpourings of lava, fields of explosive volcanic ash, and all surrounded by massive flood basalts”, said Mark Robinson, principal investigator of the Lunar Reconnaissance Orbiter Camera at Arizona State University. NASA released photos of the crater on 25 December 2011.[194]

Two NASA GRAIL spacecraft began orbiting the Moon around 1 January 2012,[195] on a mission to learn more about the Moon’s internal structure. NASA’s LADEE probe, designed to study the lunar exosphere, achieved orbit on 6 October 2013.

Upcoming lunar missions include Russia’s Luna-Glob: an uncrewed lander with a set of seismometers, and an orbiter based on its failed Martian Fobos-Grunt mission.[196][197] Privately funded lunar exploration has been promoted by the Google Lunar X Prize, announced 13 September 2007, which offers US$20 million to anyone who can land a robotic rover on the Moon and meet other specified criteria.[198] Shackleton Energy Company is building a program to establish operations on the south pole of the Moon to harvest water and supply their Propellant Depots.[199]

NASA began to plan to resume crewed missions following the call by U.S. President George W. Bush on 14 January 2004 for a crewed mission to the Moon by 2019 and the construction of a lunar base by 2024.[200] The Constellation program was funded, and construction and testing began on a crewed spacecraft and launch vehicle,[201] along with design studies for a lunar base.[202] However, that program was cancelled in favor of a crewed asteroid landing by 2025 and a crewed Mars orbit by 2035.[203] India has also expressed its hope to send a crewed mission to the Moon by 2020.[204]

For many years, the Moon has been recognized as an excellent site for telescopes.[205] It is relatively nearby; astronomical seeing is not a concern; certain craters near the poles are permanently dark and cold, and thus especially useful for infrared telescopes; and radio telescopes on the far side would be shielded from the radio chatter of Earth.[206] The lunar soil, although it poses a problem for any moving parts of telescopes, can be mixed with carbon nanotubes and epoxies and employed in the construction of mirrors up to 50 meters in diameter.[207] A lunar zenith telescope can be made cheaply with ionic liquid.[208]

In April 1972, the Apollo 16 mission recorded various astronomical photos and spectra in ultraviolet with the Far Ultraviolet Camera/Spectrograph.[209]

During the Cold War, the United States Army conducted a classified feasibility study in the late 1950s called Project Horizon, to construct a crewed military outpost on the Moon, which would have been home to a bombing system targeted at rivals on Earth. The study included the possibility of conducting a lunar-based nuclear test.[210] The Air Force, which at the time was in competition with the Army for a leading role in the space program, developed its own, similar plan called Lunex.[211][212] However, both these proposals were ultimately passed over as the space program was largely transferred from the military to the civilian agency NASA.[212]

Although Luna landers scattered pennants of the Soviet Union on the Moon, and U.S. flags were symbolically planted at their landing sites by the Apollo astronauts, no nation claims ownership of any part of the Moon’s surface.[213] Russia and the U.S. are party to the 1967 Outer Space Treaty,[214] which defines the Moon and all outer space as the “province of all mankind”.[213] This treaty also restricts the use of the Moon to peaceful purposes, explicitly banning military installations and weapons of mass destruction.[215] The 1979 Moon Agreement was created to restrict the exploitation of the Moon’s resources by any single nation, but as of 2014, it has been signed and ratified by only 16 nations, none of which engages in self-launched human space exploration or has plans to do so.[216] Although several individuals have made claims to the Moon in whole or in part, none of these are considered credible.[217][218][219]

The Moon was often personified as a lunar deity in mythology and religion. A 5,000-year-old rock carving at Knowth, Ireland, may represent the Moon, which would be the earliest depiction discovered.[220] The contrast between the brighter highlands and the darker maria creates the patterns seen by different cultures as the Man in the Moon, the rabbit and the buffalo, among others. In many prehistoric and ancient cultures, the Moon was personified as a deity or other supernatural phenomenon, and astrological views of the Moon continue to be propagated today.

In the Ancient Near East, the moon god (Sin/Nanna) was masculine. In Greco-Roman mythology, the Sun and Moon are represented as male and female, respectively (Helios/Sol and Selene/Luna). The crescent shape was used from an early time as a symbol representing the Moon. The Moon goddess Selene was represented as wearing a crescent on her headgear in an arrangement reminiscent of horns. The star and crescent arrangement also goes back to the Bronze Age, representing either the Sun and Moon, or the Moon and the planet Venus, in combination. It came to represent the goddess Artemis or Hecate, and via the patronage of Hecate came to be used as a symbol of Byzantium.

An iconographic tradition of representing Sun and Moon with faces developed in the late medieval period.

The splitting of the moon (Arabic: انشقاق القمر) is a miracle attributed to Muhammad.[221]

The Moon’s regular phases make it a very convenient timepiece, and the periods of its waxing and waning form the basis of many of the oldest calendars. Tally sticks, notched bones dating as far back as 20–30,000 years ago, are believed by some to mark the phases of the Moon.[222][223][224] The ~30-day month is an approximation of the lunar cycle. The English noun month and its cognates in other Germanic languages stem from Proto-Germanic *mnth-, which is connected to the above-mentioned Proto-Germanic *mnn, indicating the usage of a lunar calendar among the Germanic peoples (Germanic calendar) prior to the adoption of a solar calendar.[225] The PIE root of moon, *mh1nt, derives from the PIE verbal root *meh1-, “to measure”, “indicat[ing] a functional conception of the moon, i.e. marker of the month” (cf. the English words measure and menstrual),[226][227][228] and echoing the Moon’s importance to many ancient cultures in measuring time (see Latin mensis and Ancient Greek μείς (meis) or μήν (mēn), meaning “month”).[229][230][231][232] Most historical calendars are lunisolar. The 7th-century Islamic calendar is an exceptional example of a purely lunar calendar. Months are traditionally determined by the visual sighting of the hilal, or earliest crescent moon, over the horizon.[233]

The Moon has been the subject of many works of art and literature and the inspiration for countless others. It is a motif in the visual arts, the performing arts, poetry, prose and music.

The Moon has long been associated with insanity and irrationality; the words lunacy and lunatic (popular shortening loony) are derived from the Latin name for the Moon, Luna. Philosophers Aristotle and Pliny the Elder argued that the full moon induced insanity in susceptible individuals, believing that the brain, which is mostly water, must be affected by the Moon and its power over the tides, but the Moon’s gravity is too slight to affect any single person.[234] Even today, people who believe in a lunar effect claim that admissions to psychiatric hospitals, traffic accidents, homicides or suicides increase during a full moon, but dozens of studies invalidate these claims.[234][235][236][237][238]


Social Darwinism – Wikipedia

Posted: November 6, 2016 at 7:06 pm

Social Darwinism is a name given to various theories of society that emerged in the second half of the 19th century, which sought to apply the biological concepts of natural selection and survival of the fittest to human society.[1][2] The term itself emerged in the 1880s. The term Social Darwinism gained widespread currency when used after 1944 by opponents of these earlier concepts. The majority of those who have been categorised as social Darwinists did not identify themselves by such a label.[3]

Scholars debate the extent to which the various social Darwinist ideologies reflect Charles Darwin’s own views on human social and economic issues. His writings have passages that can be interpreted as opposing aggressive individualism, while other passages appear to promote it.[4] Some scholars argue that Darwin’s view gradually changed and came to incorporate views from other theorists such as Herbert Spencer.[5] Spencer published[6] his Lamarckian evolutionary ideas about society before Darwin first published his theory in 1859, and both Spencer and Darwin promoted their own conceptions of moral values. Spencer supported laissez-faire capitalism on the basis of his Lamarckian belief that struggle for survival spurred self-improvement which could be inherited.[7] An important proponent in Germany was Ernst Haeckel, who popularized Darwin’s thought (and his personal interpretation of it) and used it to help found a new creed, the Monist movement.

The term Darwinism had been coined by Thomas Henry Huxley in his April 1860 review of “On the Origin of Species”,[8] and by the 1870s it was used to describe a range of concepts of evolutionism or development, without any specific commitment to Charles Darwin’s own theory.[9]

The first use of the phrase “social Darwinism” was in Joseph Fisher’s 1877 article on The History of Landholding in Ireland which was published in the Transactions of the Royal Historical Society.[10] Fisher was commenting on how a system for borrowing livestock which had been called “tenure” had led to the false impression that the early Irish had already evolved or developed land tenure;[11]

These arrangements did not in any way affect that which we understand by the word “tenure”, that is, a man’s farm, but they related solely to cattle, which we consider a chattel. It has appeared necessary to devote some space to this subject, inasmuch as that usually acute writer Sir Henry Maine has accepted the word “tenure” in its modern interpretation, and has built up a theory under which the Irish chief “developed” into a feudal baron. I can find nothing in the Brehon laws to warrant this theory of social Darwinism, and believe further study will show that the Cain Saerrath and the Cain Aigillue relate solely to what we now call chattels, and did not in any way affect what we now call the freehold, the possession of the land.

Despite the fact that social Darwinism bears Charles Darwin’s name, it is also linked today with others, notably Herbert Spencer, Thomas Malthus, and Francis Galton, the founder of eugenics. In fact, Spencer was not described as a social Darwinist until the 1930s, long after his death.[12] The term “social Darwinism” first appeared in Europe in 1880, when the journalist Emilie Gautier coined it with reference to a health conference in Berlin in 1877.[10] Around 1900 it was used by sociologists, some being opposed to the concept.[13] The term was popularized in the United States in 1944 by the American historian Richard Hofstadter, who used it in the ideological war effort against fascism to denote a reactionary creed which promoted competitive strife, racism and chauvinism. Hofstadter later also recognized (what he saw as) the influence of Darwinist and other evolutionary ideas upon those with collectivist views, enough to devise a term for the phenomenon, “Darwinist collectivism”.[14] Before Hofstadter’s work the use of the term “social Darwinism” in English academic journals was quite rare.[15] In fact,

… there is considerable evidence that the entire concept of “social Darwinism” as we know it today was virtually invented by Richard Hofstadter. Eric Foner, in an introduction to a then-new edition of Hofstadter’s book published in the early 1990s, declines to go quite that far. “Hofstadter did not invent the term Social Darwinism”, Foner writes, “which originated in Europe in the 1860s and crossed the Atlantic in the early twentieth century. But before he wrote, it was used only on rare occasions; he made it a standard shorthand for a complex of late-nineteenth-century ideas, a familiar part of the lexicon of social thought.”

Social Darwinism has many definitions, and some of them are incompatible with each other. As such, social Darwinism has been criticized for being an inconsistent philosophy, which does not lead to any clear political conclusions. For example, The Concise Oxford Dictionary of Politics states:

Part of the difficulty in establishing sensible and consistent usage is that commitment to the biology of natural selection and to ‘survival of the fittest’ entailed nothing uniform either for sociological method or for political doctrine. A ‘social Darwinist’ could just as well be a defender of laissez-faire as a defender of state socialism, just as much an imperialist as a domestic eugenist.[16]

The term “social Darwinism” has rarely been used by advocates of the supposed ideologies or ideas; instead it has almost always been used pejoratively by its opponents.[3] The term draws upon the common use of the term Darwinism, which has been used to describe a range of evolutionary views, but in the late 19th century was applied more specifically to natural selection as first advanced by Charles Darwin to explain speciation in populations of organisms. The process includes competition between individuals for limited resources, popularly but inaccurately described by the phrase “survival of the fittest”, a term coined by sociologist Herbert Spencer.

Creationists have often maintained that social Darwinism (leading to policies designed to reward the most competitive) is a logical consequence of “Darwinism” (the theory of natural selection in biology).[17] Biologists and historians have stated that this is a fallacy of appeal to nature: natural selection describes a biological phenomenon and should not be taken to imply that this phenomenon ought to be used as a moral guide in human society.[citation needed] While there are historical links between the popularisation of Darwin’s theory and forms of social Darwinism, social Darwinism is not a necessary consequence of the principles of biological evolution.

While the term has been applied to the claim that Darwin’s theory of evolution by natural selection can be used to understand the social endurance of a nation or country, social Darwinism commonly refers to ideas that predate Darwin’s publication of On the Origin of Species. Others whose ideas are given the label include the 18th century clergyman Thomas Malthus, and Darwin’s cousin Francis Galton who founded eugenics towards the end of the 19th century.

Herbert Spencer’s ideas, like those of evolutionary progressivism, stemmed from his reading of Thomas Malthus, and his later theories were influenced by those of Darwin. However, Spencer’s major work, Progress: Its Law and Cause (1857), was released two years before the publication of Darwin’s On the Origin of Species, and First Principles was printed in 1860.

In The Social Organism (1860), Spencer compares society to a living organism and argues that, just as biological organisms evolve through natural selection, society evolves and increases in complexity through analogous processes.[18]

In many ways, Spencer’s theory of cosmic evolution has much more in common with the works of Lamarck and Auguste Comte’s positivism than with Darwin’s.

Jeff Riggenbach argues that Spencer’s view was that culture and education made a sort of Lamarckism possible[1] and notes that Herbert Spencer was a proponent of private charity.[1]

Spencer’s work also served to renew interest in the work of Malthus. While Malthus’s work does not itself qualify as social Darwinism, his 1798 work An Essay on the Principle of Population was enormously popular and widely read by social Darwinists. In that book, Malthus argued that a growing population would normally outgrow its food supply, resulting in the starvation of the weakest and a Malthusian catastrophe.

According to Michael Ruse, Darwin read Malthus’s famous An Essay on the Principle of Population in 1838, four years after Malthus’s death. Malthus himself anticipated the social Darwinists in suggesting that charity could exacerbate social problems.

Another of these social interpretations of Darwin’s biological views, later known as eugenics, was put forth by Darwin’s cousin, Francis Galton, in 1865 and 1869. Galton argued that just as physical traits were clearly inherited among generations of people, the same could be said for mental qualities (genius and talent). Galton argued that social morals needed to change so that heredity was a conscious decision in order to avoid both the over-breeding by less fit members of society and the under-breeding of the more fit ones.

In Galton’s view, social institutions such as welfare and insane asylums were allowing inferior humans to survive and reproduce at faster rates than the more “superior” humans in respectable society, and if corrective measures were not taken soon, society would be awash with “inferiors”. Darwin read his cousin’s work with interest, and devoted sections of Descent of Man to discussion of Galton’s theories. Neither Galton nor Darwin, though, advocated any eugenic policies restricting reproduction, due to their Whiggish distrust of government.[19]

Friedrich Nietzsche’s philosophy addressed the question of artificial selection, yet Nietzsche’s principles did not concur with Darwinian theories of natural selection. Nietzsche’s point of view on sickness and health, in particular, set him against the concept of biological adaptation as forged by Spencer’s “fitness”. Nietzsche criticized Haeckel, Spencer, and Darwin, sometimes under the same banner, maintaining that in specific cases sickness was necessary and even helpful.[20] Thus, he wrote:

Wherever progress is to ensue, deviating natures are of greatest importance. Every progress of the whole must be preceded by a partial weakening. The strongest natures retain the type, the weaker ones help to advance it. Something similar also happens in the individual. There is rarely a degeneration, a truncation, or even a vice or any physical or moral loss without an advantage somewhere else. In a warlike and restless clan, for example, the sicklier man may have occasion to be alone, and may therefore become quieter and wiser; the one-eyed man will have one eye the stronger; the blind man will see deeper inwardly, and certainly hear better. To this extent, the famous theory of the survival of the fittest does not seem to me to be the only viewpoint from which to explain the progress of strengthening of a man or of a race.[21]

Ernst Haeckel’s recapitulation theory was not Darwinism, but rather attempted to combine the ideas of Goethe, Lamarck and Darwin. It was adopted by the emerging social sciences to support the notion that non-European societies were “primitive”, at an early stage of development towards the European ideal, but it has since been heavily refuted on many fronts.[22] Haeckel’s works led to the formation of the Monist League in 1904, with many prominent citizens among its members, including the Nobel Prize winner Wilhelm Ostwald.

The simpler aspects of social Darwinism followed the earlier Malthusian ideas that humans, especially males, require competition in their lives in order to survive in the future. Further, the poor should have to provide for themselves and not be given any aid. However, amidst this climate, most social Darwinists of the early twentieth century actually supported better working conditions and salaries. Such measures would grant the poor a better chance to provide for themselves yet still distinguish those who are capable of succeeding from those who are poor out of laziness, weakness, or inferiority.

“Social Darwinism” was first described by Oscar Schmidt of the University of Strasbourg, reporting at a scientific and medical conference held in Munich in 1877. He noted how socialists, although opponents of Darwin’s theory, used it to add force to their political arguments. Schmidt’s essay first appeared in English in Popular Science in March 1879.[23] There followed an anarchist tract published in Paris in 1880 entitled “Le darwinisme social” by Émile Gautier. However, the use of the term was very rare, at least in the English-speaking world (Hodgson, 2004),[24] until the American historian Richard Hofstadter published his influential Social Darwinism in American Thought (1944) during World War II.

Hypotheses of social evolution and cultural evolution were common in Europe. The Enlightenment thinkers who preceded Darwin, such as Hegel, often argued that societies progressed through stages of increasing development. Earlier thinkers also emphasized conflict as an inherent feature of social life. Thomas Hobbes’s 17th century portrayal of the state of nature seems analogous to the competition for natural resources described by Darwin. Social Darwinism is distinct from other theories of social change because of the way it draws Darwin’s distinctive ideas from the field of biology into social studies.

Darwin, unlike Hobbes, believed that this struggle for natural resources allowed individuals with certain physical and mental traits to succeed more frequently than others, and that these traits accumulated in the population over time, which under certain conditions could lead to the descendants being so different that they would be defined as a new species.

However, Darwin felt that “social instincts” such as “sympathy” and “moral sentiments” also evolved through natural selection, and that these resulted in the strengthening of societies in which they occurred, so much so that he wrote about it in Descent of Man:

The following proposition seems to me in a high degree probable, namely, that any animal whatever, endowed with well-marked social instincts, the parental and filial affections being here included, would inevitably acquire a moral sense or conscience, as soon as its intellectual powers had become as well, or nearly as well developed, as in man. For, firstly, the social instincts lead an animal to take pleasure in the society of its fellows, to feel a certain amount of sympathy with them, and to perform various services for them.[25]

Spencer proved to be a popular figure in the 1880s primarily because his application of evolution to areas of human endeavor promoted an optimistic view of the future as inevitably becoming better. In the United States, writers and thinkers of the gilded age such as Edward L. Youmans, William Graham Sumner, John Fiske, John W. Burgess, and others developed theories of social evolution as a result of their exposure to the works of Darwin and Spencer.

In 1883, Sumner published a highly influential pamphlet entitled “What Social Classes Owe to Each Other”, in which he insisted that the social classes owe each other nothing, synthesizing Darwin’s findings with free-enterprise capitalism for his justification.[citation needed] According to Sumner, providing assistance to those unequipped or under-equipped to compete for resources would lead to a country in which the weak and inferior are encouraged to breed more like them, eventually dragging the country down. Sumner also believed that the American businessman was the best equipped to win the struggle for existence, and concluded that taxes and regulations serve as dangers to his survival. The pamphlet makes no mention of Darwinism, and refers to Darwin only in a statement on the meaning of liberty: “There never has been any man, from the primitive barbarian up to a Humboldt or a Darwin, who could do as he had a mind to.”[26]

Sumner never fully embraced Darwinian ideas, and some contemporary historians do not believe that Sumner ever actually believed in social Darwinism.[27] The great majority of American businessmen rejected the anti-philanthropic implications of the theory. Instead they gave millions to build schools, colleges, hospitals, art institutes, parks and many other institutions. Andrew Carnegie, who admired Spencer, was the leading philanthropist in the world (1890–1920), and a major leader against imperialism and warfare.[28]

H. G. Wells was heavily influenced by Darwinist thoughts, and novelist Jack London wrote stories of survival that incorporated his views on social Darwinism.[29] Film director Stanley Kubrick has been described as having held social Darwinist opinions.[30]

Social Darwinism has influenced political, public health and social movements in Japan since the late 19th and early 20th century. It was originally brought to Japan through the works of Francis Galton and Ernst Haeckel, as well as American, British and French Lamarckian eugenic studies of the late 19th and early 20th centuries.[31] Eugenics as a science was hotly debated at the beginning of the 20th century in Jinsei-Der Mensch, the first eugenics journal in the empire. As Japan sought to close ranks with the West, this practice was adopted wholesale along with colonialism and its justifications.

Social Darwinism was formally introduced to China through the translation by Yan Fu of Huxley’s Evolution and Ethics, in the course of an extensive series of translations of influential Western thought.[32] Yan’s translation strongly impacted Chinese scholars because he added national elements not found in the original. He understood Spencer’s sociology as “not merely analytical and descriptive, but prescriptive as well”, and saw Spencer building on Darwin, whom Yan summarized thus:

By the 1920s, social Darwinism found expression in the promotion of eugenics by the Chinese sociologist Pan Guangdan. When Chiang Kai-shek started the New Life movement in 1934, he

Social evolution theories gained large popularity in Germany in the 1860s and at first carried a strong anti-establishment connotation. Social Darwinism made it possible to challenge the alliance of Thron und Altar, the intertwined establishment of clergy and nobility, and it also provided the idea of progressive change and evolution of society as a whole. Ernst Haeckel propagated Darwinism both as a part of natural history and as a suitable basis for a modern Weltanschauung, a world view based on scientific reasoning, in his Monistenbund. Friedrich von Hellwald had a strong role in popularizing it in Austria. Darwin’s work served as a catalyst to popularize evolutionary thinking.[35] Darwin himself called Haeckel’s connection between socialism and evolution through natural selection a “foolish idea” prevailing in Germany.

A sort of aristocratic turn, the use of the struggle for life as the basis of social Darwinism sensu stricto, came up after 1900 with Alexander Tille’s 1895 work Entwicklungsethik (Ethics of Evolution), which called for moving from Darwin to Nietzsche. Further interpretations moved towards ideologies propagating a racist and radical “elbow society” and provided ground for the later radical versions of social Darwinism.[35]

Continued here:

Social Darwinism – Wikipedia

Posted in Darwinism | Comments Off on Social Darwinism – Wikipedia

Life expectancy – Wikipedia

Posted: October 27, 2016 at 11:56 am

Life expectancy is a statistical measure of the average time an organism is expected to live, based on its year of birth, its current age and other demographic factors including sex. The most commonly used measure is life expectancy at birth (LEB), which can be defined in two ways. Cohort LEB is the mean length of life of an actual birth cohort (all individuals born in a given year); it can be computed only for cohorts born many decades ago, so that all their members have died. Period LEB is the mean length of life of a hypothetical cohort assumed to be exposed, from birth until death, to the mortality rates observed in a given year.[1]

National LEB figures reported by national statistical agencies and international organizations are in fact estimates of period LEB. In the Bronze Age and the Iron Age, LEB was 26 years; the 2010 world LEB was 67.2 years. In recent years, LEB in Swaziland is about 49, while in Japan it is about 83. The combination of high infant mortality and deaths in young adulthood from accidents, epidemics, plagues, wars, and childbirth, particularly before modern medicine was widely available, significantly lowers LEB. But for those who survived early hazards, a life expectancy of 60 or 70 was not uncommon. For example, a society with a LEB of 40 may have few people dying at precisely 40: most will die before 30 or after 55. In populations with high infant mortality rates, LEB is highly sensitive to the rate of death in the first few years of life. Because of this sensitivity to infant mortality, LEB can be subject to gross misinterpretation, leading one to believe that a population with a low LEB will necessarily have a small proportion of older people.[2] For example, in a hypothetical stationary population in which half the population dies before the age of five but everybody else dies at exactly 70 years old, LEB will be about 36, yet about 25% of the population will be between the ages of 50 and 70. Another measure, such as life expectancy at age 5 (e5), can be used to exclude the effect of infant mortality and provide a simple measure of overall mortality rates other than in early childhood; in the hypothetical population above, life expectancy at 5 would be another 65 years. Aggregate population measures, such as the proportion of the population in various age groups, should be used alongside individual-based measures like formal life expectancy when analyzing population structure and dynamics.
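To make the arithmetic of this hypothetical population concrete, here is a minimal sketch in Python; the assumption that childhood deaths occur at an average age of about 2 years is ours, chosen only so the weighted mean reproduces the LEB of roughly 36 quoted above.

def life_expectancy_at_birth(death_ages, weights):
    """Mean age at death, weighted by the share of the cohort dying at each age."""
    return sum(a * w for a, w in zip(death_ages, weights))

# Half the cohort dies in early childhood (assumed average age 2), half at exactly 70.
leb = life_expectancy_at_birth([2, 70], [0.5, 0.5])
e5 = 70 - 5  # everyone who survives childhood dies at 70, so e5 is another 65 years
print(f"LEB: {leb:.0f} years")            # 36
print(f"Life expectancy at age 5: {e5}")  # 65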

Mathematically, life expectancy is the mean number of years of life remaining at a given age, assuming age-specific mortality rates remain at their most recently measured levels.[3] It is denoted by $e_x$,[a] the mean number of subsequent years of life for someone now aged $x$, according to a particular mortality experience. Longevity, maximum lifespan, and life expectancy are not synonyms. Life expectancy is defined statistically as the mean number of years remaining for an individual or a group of people at a given age. Longevity refers to the characteristics of the relatively long life span of some members of a population. Maximum lifespan is the age at death for the longest-lived individual of a species. Moreover, because life expectancy is an average, a particular person may die many years before or many years after the “expected” survival. The term “maximum life span” has a quite different meaning and is more related to longevity.

Life expectancy is also used in plant and animal ecology[4] and in life tables (also known as actuarial tables). The term life expectancy may also be used in the context of manufactured objects,[5] but the related term shelf life is used for consumer products, and the terms “mean time to breakdown” (MTTB) and “mean time between failures” (MTBF) are used in engineering.

Human beings are expected to live on average 49.42 years in Swaziland[6] and 82.6 years in Japan, but the latter’s recorded life expectancy may have been very slightly increased by counting many infant deaths as stillborn.[7] An analysis published in 2011 in The Lancet attributes Japanese life expectancy to equal opportunities and public health as well as diet.[8][9]

The oldest confirmed recorded age for any human is 122 years (see Jeanne Calment). This is referred to as the “maximum life span”, which is the upper boundary of life, the maximum number of years any human is known to have lived.[10]

The following information is derived from the 1961 Encyclopædia Britannica and other sources, some with questionable accuracy. Unless otherwise stated, it represents estimates of the life expectancies of the world population as a whole. In many instances, life expectancy varied considerably according to class and gender.

Life expectancy at birth takes account of infant mortality but not prenatal mortality.

Life expectancy increases with age as the individual survives the higher mortality rates associated with childhood. For instance, the table above listed the life expectancy at birth among 13th-century English nobles at 30. Having survived until the age of 21, a male member of the English aristocracy in this period could expect to live:[26]

17th-century English life expectancy was only about 35 years, largely because infant and child mortality remained high. Life expectancy was under 25 years in the early Colony of Virginia,[29] and in seventeenth-century New England, about 40 per cent died before reaching adulthood.[30] During the Industrial Revolution, the life expectancy of children increased dramatically.[31] The under-5 mortality rate in London decreased from 745 in 1730–1749 to 318 in 1810–1829.[32][33]

Public health measures are credited with much of the recent increase in life expectancy. During the 20th century, despite a brief drop due to the 1918 flu pandemic,[34] the average lifespan in the United States increased by more than 30 years, of which 25 years can be attributed to advances in public health.[35]

[Legend of the accompanying world map of life expectancy at birth, in years: >80; 77.5–80; 75–77.5; 72.5–75; 70–72.5; 67.5–70; 65–67.5; 60–65; 55–60; 50–55; 45–50; 40–45]

There are great variations in life expectancy between different parts of the world, mostly caused by differences in public health, medical care, and diet. The impact of AIDS on life expectancy is particularly notable in many African countries. According to projections made by the United Nations (UN) in 2002, the life expectancy at birth for 2010–2015 (if HIV/AIDS did not exist) would have been:[37]

The UN’s predictions were too pessimistic. Actual life expectancy in Botswana declined from 65 in 1990 to 49 in 2000 before increasing to 66 in 2011. In South Africa, life expectancy was 63 in 1990, 57 in 2000, and 58 in 2011. And in Zimbabwe, life expectancy was 60 in 1990, 43 in 2000, and 54 in 2011.[38]

During the last 200 years, African countries have generally not had the same improvements in mortality rates that have been enjoyed by countries in Asia, Latin America, and Europe.[39][40]

In the United States, African-American people have shorter life expectancies than their European-American counterparts. For example, white Americans born in 2010 are expected to live until age 78.9, but black Americans only until age 75.1. This 3.8-year gap, however, is the lowest it has been since at least 1975. The greatest difference was 7.1 years in 1993.[41] In contrast, Asian-American women live the longest of all ethnic groups in the United States, with a life expectancy of 85.8 years.[42] The life expectancy of Hispanic Americans is 81.2 years.[41]

Cities also experience a wide range of life expectancies based on neighborhood breakdowns. This is largely due to economic clustering and poverty conditions that tend to be concentrated geographically, as well as multi-generational poverty in struggling neighborhoods. In United States cities such as Cincinnati, the life expectancy gap between low-income and high-income neighborhoods reaches 20 years.[43]

Economic circumstances also affect life expectancy. For example, in the United Kingdom, life expectancy in the wealthiest areas is several years higher than in the poorest areas. This may reflect factors such as diet and lifestyle, as well as access to medical care. It may also reflect a selective effect: people with chronic life-threatening illnesses are less likely to become wealthy or to reside in affluent areas.[44] In Glasgow, the disparity is among the highest in the world: life expectancy for males in the heavily deprived Calton area stands at 54, which is 28 years less than in the affluent area of Lenzie, only 8 km away.[45][46]

A 2013 study found a pronounced relationship between economic inequality and life expectancy.[47] However, a study by José A. Tapia Granados and Ana Diez Roux at the University of Michigan found that life expectancy actually increased during the Great Depression, and during recessions and depressions in general.[48] The authors suggest that when people are working extra hard during good economic times, they undergo more stress, greater exposure to pollution, and a higher likelihood of injury, among other longevity-limiting factors.

Life expectancy is also likely to be affected by exposure to high levels of highway or industrial air pollution. This is one way in which occupation can have a major effect on life expectancy. Coal miners (and, in prior generations, asbestos cutters) often have lower than average life expectancies. Other factors affecting an individual’s life expectancy are genetic disorders, drug use, tobacco smoking, excessive alcohol consumption, obesity, access to health care, diet and exercise.

In the womb, male fetuses have a higher mortality rate (babies are conceived in a ratio estimated to be from 107 to 170 males to 100 females, but the ratio at birth in the United States is only 105 males to 100 females).[51] Among the smallest premature babies (those under 2 pounds or 900 g), females again have a higher survival rate. At the other extreme, about 90% of individuals aged 110 are female. The difference in life expectancy between men and women in the United States dropped from 7.8 years in 1979 to 5.3 years in 2005, with women expected to live to age 80.1 in 2005.[52] Also, data from the UK shows the gap in life expectancy between men and women decreasing in later life. This may be attributable to the effects of infant mortality and young adult death rates.[53]

In the past, mortality rates for females in child-bearing age groups were higher than for males at the same age. This is no longer the case, and female human life expectancy is considerably higher than that of males. The reasons for this are not entirely certain. Traditional arguments tend to favor socio-environmental factors: historically, men have generally consumed more tobacco, alcohol and drugs than women in most societies, and are more likely to die from many associated diseases such as lung cancer, tuberculosis and cirrhosis of the liver.[54] Men are also more likely to die from injuries, whether unintentional (such as occupational, war or car accidents) or intentional (suicide).[54] Men are also more likely to die from most of the leading causes of death (some already stated above) than women. Some of these in the United States include: cancer of the respiratory system, motor vehicle accidents, suicide, cirrhosis of the liver, emphysema, prostate cancer, and coronary heart disease.[10] These far outweigh the female mortality rate from breast cancer and cervical cancer.

Some argue that shorter male life expectancy is merely another manifestation of the general rule, seen in all mammal species, that larger (size) individuals (within a species) tend, on average, to have shorter lives.[55][56] This biological difference occurs because women have more resistance to infections and degenerative diseases.[10]

In her extensive review of the existing literature, Kalben concluded that the fact that women live longer than men was observed at least as far back as 1750 and that, with relatively equal treatment, today males in all parts of the world experience greater mortality than females. Of 72 selected causes of death, only 6 yielded greater female than male age-adjusted death rates in 1998 in the United States. With the exception of birds, for almost all of the animal species studied, males have higher mortality than females. Evidence suggests that the sex mortality differential in people is due to both biological/genetic and environmental/behavioral risk and protective factors.[57]

There is a recent suggestion that mitochondrial mutations which shorten lifespan continue to be expressed in males (but less so in females) because mitochondria are inherited only through the mother. By contrast, natural selection weeds out mitochondria that reduce female survival; therefore such mitochondria are less likely to be passed on to the next generation. This suggests that females would tend to live longer than males. The authors claim this is a partial explanation.[58][59]

In developed countries, starting around 1880, death rates decreased faster among women, leading to differences in mortality rates between males and females. Before 1880 death rates were the same. In people born after 1900, the death rate of 50- to 70-year-old men was double that of women of the same age. Cardiovascular disease was the main cause of the higher death rates among men. Men may be more vulnerable to cardiovascular disease than women, but this susceptibility was evident only after deaths from other causes, such as infections, started to decline.[60]

In developed countries, the number of centenarians is increasing at approximately 5.5% per year, which means doubling the centenarian population every 13 years, pushing it from some 455,000 in 2009 to 4.1 million in 2050.[61] Japan is the country with the highest ratio of centenarians (347 for every 1 million inhabitants in September 2010). Shimane prefecture had an estimated 743 centenarians per million inhabitants.[62]
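As a quick arithmetic check on the growth figure above, the doubling time implied by 5.5% annual growth is log(2)/log(1.055), or roughly 13 years, which matches the statement that the centenarian population doubles every 13 years.

import math

# Doubling time implied by 5.5% annual growth in the number of centenarians.
growth_rate = 0.055
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time: {doubling_time:.1f} years")  # ~12.9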

In the United States, the number of centenarians grew from 32,194 in 1980 to 71,944 in November 2010 (232 centenarians per million inhabitants).[63]

The seriously mentally ill have a 10- to 25-year reduction in life expectancy. Psychiatric medicines can increase the risk of developing diabetes.[64][65][66][67] Psychiatric medicine can also cause agranulocytosis. Psychiatric medicines also affect the stomach: the mentally ill have four times the risk of gastrointestinal disease.[68][69][70][71]

The reduction of lifespan has been studied and documented.[72][73][74][75][76][77]

Various species of plants and animals, including humans, have different lifespans. Evolutionary theory states that organisms that, by virtue of their defenses or lifestyle, live for long periods and avoid accidents, disease, predation, etc. are likely to have genes that code for slow aging, which often translates to good cellular repair. One theory is that if predation or accidental deaths prevent most individuals from living to an old age, there will be less natural selection to increase the intrinsic life span.[78] That finding was supported in a classic study of opossums by Austad;[79] however, the opposite relationship was found in an equally prominent study of guppies by Reznick.[80][81]

One prominent and very popular theory states that lifespan can be lengthened by a tight budget for food energy called caloric restriction.[82] Caloric restriction observed in many animals (most notably mice and rats) shows a near doubling of life span from a very limited caloric intake. Support for the theory has been bolstered by several new studies linking lower basal metabolic rate to increased life expectancy.[83][84][85] This may help explain why animals such as giant tortoises can live so long.[86] Studies of humans with life spans of at least 100 years have shown a link to decreased thyroid activity, resulting in their lowered metabolic rate.

In a broad survey of zoo animals, no relationship was found between the fertility of the animal and its life span.[87]

The starting point for calculating life expectancy is the age-specific death rates of the population members. If a large amount of data is available, the age-specific death rates can simply be taken as the mortality rates actually experienced at each age (the number of deaths divided by the number of years “exposed to risk” in each data cell). However, it is customary to apply smoothing to iron out, as much as possible, the random statistical fluctuations from one year of age to the next. In the past, a very simple model used for this purpose was the Gompertz function, but more sophisticated methods are now used.[88]
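The Gompertz function mentioned above can be fitted very simply, since it makes the logarithm of the death rate linear in age. The sketch below is an assumption-laden toy, not an actuarial-grade graduation: the crude rates are invented for illustration, and the fit is plain least squares on the log scale.

import numpy as np

# Gompertz smoothing of crude age-specific death rates: the model mu(x) = B * c**x
# makes log mu(x) linear in age, so a least-squares line on the log scale suffices.
rng = np.random.default_rng(0)
ages = np.arange(60, 90)
crude_rates = 0.01 * np.exp(0.09 * (ages - 60)) * rng.lognormal(0.0, 0.1, ages.size)  # noisy toy data

slope, intercept = np.polyfit(ages, np.log(crude_rates), 1)
smoothed_rates = np.exp(intercept + slope * ages)   # smoothed rates at each age

B, c = np.exp(intercept), np.exp(slope)
print(f"Fitted Gompertz parameters: B = {B:.3e}, c = {c:.4f}")
print(smoothed_rates[:3])  # first few smoothed rates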

These are the most common methods now used for that purpose:

While the data required are easily identified in the case of humans, the computation of life expectancy for industrial products and wild animals involves more indirect techniques. The life expectancy and demography of wild animals are often estimated by capturing, marking, and recapturing them.[89] The life of a product, more often termed shelf life, is also computed using similar methods. For long-lived components, such as those used in critical applications like aircraft, methods such as accelerated aging are used to model the life expectancy of a component.[5]

The age-specific death rates are calculated separately for separate groups of data that are believed to have different mortality rates (such as males and females, and perhaps smokers and non-smokers if data are available separately for those groups), and are then used to calculate a life table from which one can calculate the probability of surviving to each age. In actuarial notation, the probability of surviving from age $x$ to age $x+n$ is denoted ${}_np_x$, and the probability of dying during age $x$ (between ages $x$ and $x+1$) is denoted $q_x$. For example, if 10% of a group of people alive at their 90th birthday die before their 91st birthday, the age-specific death probability at 90 would be 10%. This is a probability, not a mortality rate.
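As an illustration of this notation, the following sketch builds n-year survival probabilities from age-specific death probabilities; the values are invented, with the probability at 90 set to the 10% used in the example above.

# n-year survival probability from age-specific death probabilities:
#   np_x = (1 - q_x)(1 - q_{x+1}) ... (1 - q_{x+n-1})

def survival_probability(q, x, n):
    """Probability that a life aged x survives n further years, given q[age]."""
    p = 1.0
    for age in range(x, x + n):
        p *= 1.0 - q[age]
    return p

q = {90: 0.10, 91: 0.12, 92: 0.15}     # illustrative values only
print(survival_probability(q, 90, 1))  # 0.90
print(survival_probability(q, 90, 3))  # 0.90 * 0.88 * 0.85 = 0.6732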

The expected future lifetime of a life aged $x$ in whole years (the curtate expected lifetime of $(x)$) is denoted by the symbol $e_x$.[a] It is the conditional expected future lifetime (in whole years), assuming survival to age $x$. If $K(x)$ denotes the curtate future lifetime at $x$, then $e_x = E[K(x)] = \sum_{k=0}^{\infty} k \, \Pr(K(x)=k) = \sum_{k=0}^{\infty} k \, {}_k p_x \, q_{x+k}$. Substituting ${}_k p_x \, q_{x+k} = {}_k p_x - {}_{k+1} p_x$ in the sum and simplifying gives the equivalent formula:[90] $e_x = \sum_{k=1}^{\infty} {}_k p_x$. If the assumption is made that, on average, people live half a year in the year of death, the complete expectation of future lifetime at age $x$ is $e_x + 1/2$.[clarification needed]
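A minimal sketch of the curtate formula $e_x = \sum_{k \ge 1} {}_k p_x$, applied to an assumed toy mortality table that closes at age 95 (where the death probability is set to 1):

def curtate_expectancy(q, x):
    """Curtate life expectancy e_x = sum over k >= 1 of kp_x, given q[age]."""
    e, kpx, age = 0.0, 1.0, x
    while age in q:
        kpx *= 1.0 - q[age]   # kp_x for k = age - x + 1
        e += kpx
        age += 1
    return e

q = {90: 0.10, 91: 0.12, 92: 0.15, 93: 0.20, 94: 0.30, 95: 1.0}  # toy table
e90 = curtate_expectancy(q, 90)
print(f"e_90 = {e90:.2f} years")                        # curtate expectation
print(f"complete expectation ~ {e90 + 0.5:.2f} years")  # half-year adjustment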

Life expectancy is by definition an arithmetic mean. It can also be calculated by integrating the survival curve from 0 to positive infinity (or equivalently to the maximum lifespan, sometimes called ‘omega’). For an extinct or completed cohort (all people born in year 1850, for example), it can of course simply be calculated by averaging the ages at death. For cohorts with some survivors, it is estimated by using mortality experience in recent years. The estimates are called period cohort life expectancies.
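For a completed (“extinct”) cohort as described above, the calculation really is just an average of ages at death; the ages below are invented purely for illustration.

ages_at_death = [0.5, 3, 45, 62, 70, 71, 78, 80, 84, 91]  # hypothetical completed cohort
cohort_leb = sum(ages_at_death) / len(ages_at_death)
print(f"Cohort life expectancy at birth: {cohort_leb:.1f} years")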

It is important to note that the statistic is usually based on past mortality experience and assumes that the same age-specific mortality rates will continue into the future. Thus, such life expectancy figures need to be adjusted for temporal trends before calculating how long a currently living individual of a particular age is expected to live. Period life expectancy remains a commonly used statistic to summarize the current health status of a population.

However, for some purposes, such as pensions calculations, it is usual to adjust the life table used by assuming that age-specific death rates will continue to decrease over the years, as they have usually done in the past. That is often done by simply extrapolating past trends, but some models exist to account for the evolution of mortality, such as the Lee–Carter model.[91]
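As an illustration of this kind of mortality-trend model, here is a minimal Lee–Carter-style sketch fitted by SVD to a synthetic log-mortality matrix. The data, the random-walk-with-drift forecast of the time index, and all parameter values are assumptions made for this example, not a production implementation of the published method.

import numpy as np

# Lee-Carter-style decomposition: log m(x, t) ~ a_x + b_x * k_t, fitted by SVD.
rng = np.random.default_rng(1)
ages = np.arange(0, 100, 5)
years = np.arange(1980, 2020)
log_m = (np.log(0.001) + 0.08 * ages[:, None]             # synthetic age pattern
         - 0.01 * (years - years[0])[None, :]             # synthetic secular improvement
         + rng.normal(0.0, 0.02, (ages.size, years.size)))

a_x = log_m.mean(axis=1)                                  # average age profile
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                             # normalise so b_x sums to 1
k_t = s[0] * Vt[0] * U[:, 0].sum()                        # time index (b_x * k_t unchanged)

drift = (k_t[-1] - k_t[0]) / (len(k_t) - 1)               # random walk with drift
k_future = k_t[-1] + drift * np.arange(1, 21)             # 20-year forecast of k_t
forecast_log_m = a_x[:, None] + b_x[:, None] * k_future[None, :]
print(forecast_log_m.shape)                               # (age groups, future years)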

As discussed above, on an individual basis, a number of factors correlate with a longer life. Factors that are associated with variations in life expectancy include family history, marital status, economic status, physique, exercise, diet, drug use including smoking and alcohol consumption, disposition, education, environment, sleep, climate, and health care.[10]

In order to assess the quality of these additional years of life, ‘healthy life expectancies’ have been calculated for the last 30 years. Since 2001, the World Health Organization has published statistics called Healthy life expectancy (HALE), defined as the average number of years that a person can expect to live in “full health”, excluding the years lived in less than full health due to disease and/or injury. Since 2004, Eurostat has published annual statistics called Healthy Life Years (HLY) based on reported activity limitations. The United States uses similar indicators in the framework of the national health promotion and disease prevention plan “Healthy People 2010”. More and more countries are using health expectancy indicators to monitor the health of their population.

Forecasting life expectancy and mortality forms an important subdivision of demography. Future trends in life expectancy have huge implications for old-age support programs like U.S. Social Security and pension since the cash flow in these systems depends on the number of recipients who are still living (along with the rate of return on the investments or the tax rate in pay-as-you-go systems). With longer life expectancies, the systems see increased cash outflow; if the systems underestimate increases in life-expectancies, they will be unprepared for the large payments that will occur, as humans live longer and longer.

Life expectancy forecasting is usually based on two different approaches:

Life expectancy is one of the factors in measuring the Human Development Index (HDI) of each nation along with adult literacy, education, and standard of living.[93]

Life expectancy is also used in describing the physical quality of life of an area, or for an individual when the value of a life settlement (a life insurance policy sold for a cash asset) is determined.

Disparities in life expectancy are often cited as demonstrating the need for better medical care or increased social support. A strongly associated indirect measure is income inequality. For the top 21 industrialised countries, if each person is counted equally, life expectancy is lower in more unequal countries (r = -0.907).[94] There is a similar relationship among states in the US (r = -0.620).[95]

Life expectancy differs from maximum life span. Life expectancy is an average[96] computed over all people, including those who die shortly after birth, those who die in early adulthood in childbirth or in wars, and those who live unimpeded until old age. Lifespan is an individual-specific concept, and maximum lifespan is an upper bound rather than an average.

However, these two terms are often confused with each other, to the point that when people hear ‘life expectancy was 35 years’ they often interpret this as meaning that people of that time or place had short maximum life spans.[97] One such example can be seen in the In Search of… episode “The Man Who Would Not Die” (about the Count of St. Germain), where it is stated: “Evidence recently discovered in the British Museum indicates that St. Germain may have well been the long lost third son of Rákóczi born in Transylvania in 1694. If he died in Germany in 1784, he lived 90 years. The average life expectancy in the 18th century was 35 years. Fifty was a ripe old age. Ninety… was forever.”

In reality, there are other examples of people living significantly longer than the life expectancy of their time period, such as Socrates, Saint Anthony, Michelangelo, and Ben Franklin.[98]

It can be argued that it is better to compare life expectancies of the period after childhood to get a better handle on life span.[99] Life expectancy can change dramatically after childhood, as is demonstrated by the Roman life expectancy table, in which life expectancy at birth was 21 but at age 5 jumped to 42. Studies such as Plymouth Plantation (“Dead at Forty”) and Life Expectancy by Age, 1850–2004 similarly show a dramatic increase in life expectancy once adulthood was reached.

a. ^ In standard actuarial notation, $e_x$ refers to the expected future lifetime of $(x)$ in whole years, while $e_x$ with a circle above the e denotes the complete expected future lifetime of $(x)$, including the fraction.

Link:
Life expectancy – Wikipedia

Posted in Human Longevity | Comments Off on Life expectancy – Wikipedia

Casino Gambling Web | Best Online Gambling News and Casinos …

Posted: October 13, 2016 at 5:36 am

The Top Online Casino Gambling News Reporting Site Since 2002! Latest News From the Casino Gambling Industry

Cheers and Jeers Abound for New UK Online Gambling Law (May 19, 2014): The new UK betting law is expected to be finalized by July 1st and go into effect by September 1st. However, many are concerned the law could create another wild-west situation in the UK…

Speculation on Casino Gambling Legalization in Japan Continues (May 13, 2014): LVS owner Sheldon Adelson continues to create gambling news across the world, this time in Japan as he salivates at the possibility of legalization before the 2020 Olympics…

LVS Owner Adelson Pulling the Strings of Politicians in the US (May 8, 2014): Las Vegas Sands is playing the political system, and its owner, Sheldon Adelson, is the puppet master behind the curtain pulling the strings, according to new reports…

New Jersey Bets Big on Sports Gambling, Loses – So Far… (May 5, 2014): Governor Chris Christie may need a win in the Supreme Court to justify his defense of his initiative to legalize sports betting in the state…

Tribal And Private Gaming Owners Square Off In Massachusetts (April 28, 2014): Steve Wynn and the Mohegan Sun are squaring off in a battle for a casino license in Massachusetts, and the two have vastly different views of how regulations are being constructed…

Below is a quick guide to the best gambling sites online. One is for USA players, the other is for players in the rest of the world. Good luck!

As laws change in 2012 the internet poker craze is set to boom once again in North America. Bovada, formerly known as Bodog, is one of the only sites that weathered the storm and they are now the best place to play online. More players gamble here than anywhere else.

The goal of Casino Gambling Web is to provide each of our visitors with an insider’s view of every aspect of the gambling world. We have over 30 feeds releasing news to more than 30 specific gaming related categories in order to achieve our important goal of keeping you well updated and informed.

The main sections of our site are broken up into five broad areas of gambling news. The first area we cover is issues concerning brick-and-mortar casinos like those found in Atlantic City, Las Vegas, the Gulf Coast region, and, well, now the rest of the USA. The second area of gambling news concerns the Internet casino community. We also have reporters who cover the international poker community and the world of sports gambling. And finally, we cover news about the law when it affects any part of the gambling community; such legal news could include information on updates to the UIGEA, issues surrounding gambling petitions to repeal that law, or stories related to new poker laws that are constantly being debated in state legislatures.

We go well beyond simply reporting the news. We get involved with the news, and sometimes we even become the news. We pride ourselves on providing follow-up coverage to individual news stories. We had reporters in Washington D.C. on the infamous night when the internet gambling ban was passed by a Congress led by former senator Bill Frist, since proven to be corrupt, and we have staff constantly digging to get important details to American citizens. We had reporters at the World Series of Poker in Las Vegas when Jamie Gold won his ring and changed the online gambling world, and we have representatives playing in the tournament each and every year.

It is our pleasure and proud duty to serve as a reliable source of gambling news and quality online casino reviews for the entire international gaming community. Please take a few moments to look around our site and discover why we, and most other insiders of the industry, have considered CGW the #1 Top Casino Gambling News Reporting Organization since 2002.

The United States changed internet gambling when it passed the Unlawful Internet Gambling Enforcement Act (UIGEA), so when searching for top online casinos you must focus your energies on finding post-UIGEA information as opposed to pre-UIGEA information. Before the law passed you could find reliable info on most gambling portals across the internet. Most of those portals simply advertised casinos and gambling sites that were tested and approved by eCogra, and in general you would be hard pressed to find an online casino that had a bad reputation. However, now that these gambling sites have been forced out of the US, they may have changed how they run their business. That is why it is important to get your information from reliable sources who have been following the industry and keeping up with which companies have remained honorable. So good luck and happy hunting!

The Unlawful Internet Gambling Enforcement Act (UIGEA), in short, states that anything that may be illegal on a state level is now also illegal on a federal level. However, the day after Christmas in 2011, President Barack Obama’s administration delivered what the online gaming industry will forever view as a great big beautifully wrapped present. The government released a statement declaring that the 1961 Federal Wire Act only covers sports betting. What this means for the industry on an international level is still unknown, but what it means in the USA is that states can begin running online poker sites and selling lottery tickets to their citizens within their borders. The EU and WTO will surely have some analysis, and we will keep you updated as this situation unfolds. Be sure to check state laws before you start to gamble online.

The UK was the first high-power territory to legalize and regulate gambling online, with a law passed in 2007. They allow all forms of betting but have strict requirements on advertisers. They first attracted offshore companies to come on land, which gave the gambling companies who complied the appearance of legitimacy. However, high taxes forced many who originally came to land back out to sea, and the battle forever rages on; on the whole, though, the industry regulations have proven greatly successful and have since served as a model for other gaming-enlightened countries around the world.

Since then, many European countries have regulated the industry, breaking up long term monopolies, sometimes even breaking up government backed empires, finally allowing competition – and the industry across the globe (outside of the USA) is thriving with rave reviews, even from those who are most interested in protecting the innocent and vulnerable members of society.

We strive to provide our visitors with the most valuable information about problem gambling and addiction in society. We have an entire section of our site dedicated to news about the subject. When a state or territory implements new technology to safeguard itself from allowing problem gamblers to proliferate, we will report it to you. If there is a new story that reveals some positive or negative information about gambling as it relates to addiction, we will report it to you. And if you think you have a problem with gambling right now, please visit Gamblers Anonymous.

In order to get all the information you need about this industry it is important to visit Wiki’s Online Gambling page. It provides an unbiased view of the current state of the Internet gambling industry. If you are interested in learning about other issues you may also enjoy visiting the National Council on Problem Gambling, a righteous company whose sole purpose is to help protect and support problem gamblers. They have a lot of great resources for anyone interested in learning more.

Read the original post:

Casino Gambling Web | Best Online Gambling News and Casinos …

Posted in Gambling | Comments Off on Casino Gambling Web | Best Online Gambling News and Casinos …