Tag Archives: european

Space Travel and Exploration

Posted: July 25, 2016 at 3:56 pm

- NASA Establishes Institute to Explore New Ways to Protect Astronauts
- 20 New Countries to Invest in Space Programs by 2025
- NASA, USAID Open Environmental Monitoring Hub in West Africa
- Russia, US Discuss Lunar Station for Mars Mission
- Dark Matter Particle Remains Elusive
- NASA Seeks Picometer Accuracy For Webb Telescope
- Return to the underwater Space Station
- A decade of plant biology in space: On this day 10 years ago, Space Shuttle Discovery was launched to the International Space Station carrying ESA’s European Modular Cultivation System, a miniature greenhouse to probe how plants grow …
- Mathematical framework prioritizes key patterns to accelerate scientific discovery: Networks are mathematical representations used to explore and understand diverse, complex systems, everything from military logistics and global finance to air traffic, social media, and the biological pr…
- Exploring inner space for outer space: An international team of six astronauts from China, Japan, the USA, Spain and Russia have descended into the caves of Sardinia, Italy, to explore the depths and train for life in outer space. One of the …
- Quantum technologies to revolutionize the 21st century: Is quantum technology the future of the 21st century? On the occasion of the 66th Lindau Nobel Laureate Meeting, this is the key question to be explored today in a panel discussion with the Nobel La…
- Blue Origin has fourth successful rocket booster landing: US space firm Blue Origin conducted a successful fourth test Sunday of its reusable New Shepard rocket, which dropped back to Earth for a flawless upright landing seen on a live webcast.
- TED Talks aim for wider global reach: TED Talks, known for “ideas worth spreading,” are aiming for a wider global audience with a new mobile application that can be used in two dozen languages.
- Disney brings its brand to Shanghai with new theme park: Entertainment giant Disney brings the ultimate American cultural concept to Communist-ruled China on Thursday, opening a massive theme park in Shanghai catering to a rising middle class.
- Tech, beauty intersect in Silicon Valley: The beauty industry has long relied on creating a sense of mystery, magic even, around its creams, powders and potions. But now it has something else up its sleeve: high technology.


Posted in Space Travel

U.S. Mission to NATO

Posted: July 21, 2016 at 2:09 am

11 July | Key Documents, NATO Summits

Warsaw Declaration on Transatlantic Security Warsaw Summit Communiqué NATO-EU Joint Declaration Commitment to Enhance Resilience Cyber Defense Pledge NATO Policy for the Protection of Civilians

10 July | Fact Sheets, U.S. & NATO

FACT SHEET: U.S. and NATO Efforts in Support of NATO Partners, including Georgia, Ukraine, and Moldova From The White House The United States strongly

10 July | Fact Sheets, U.S. & NATO

FACT SHEET: U.S. Contributions to Enhancing Allied Resilience From The White House At the NATO Warsaw Summit, heads of state and government will commit their

9 July | NATO Summits, President Barack Obama, Speeches, Transcripts

Remarks by President Obama at Press Conference After NATO Summit, July 9, 2016. PRESIDENT OBAMA: Good evening, everybody. Once again, I want to thank the government and

9 July | Key Documents, NATO Summits

Joint statement of the NATO-Ukraine Commission at the Level of Heads of State and Government We, the Heads of State and Government of the

9 July | Key Documents, NATO Summits

The Warsaw Declaration on Transatlantic Security Issued by the Heads of State and Government participating in the meeting of the North Atlantic Council in Warsaw

9 July | Key Documents, NATO Summits

Endorsed by the Heads of State and Government participating in the meeting of the North Atlantic Council in Warsaw 8-9 July 2016 I. INTRODUCTION 1.

9 July | Key Documents, NATO Summits

Issued by the Heads of State and Government participating in the meeting of the North Atlantic Council in Warsaw 8-9 July 2016 1. We, the

9 July | Fact Sheets

FACT SHEET: NATO's Enduring Commitment to Afghanistan From The White House NATO's mission in Afghanistan has been the Alliance's largest and one of its

9 July | NATO Summits, Speeches

NATO Secretary General Jens Stoltenberg Opening Remarks Following the Meeting of the North Atlantic Council at the Level of Heads of State and Government in

9 July | Key Documents, NATO Summits

Issued by the Heads of State and Government of Afghanistan and Allies and their Resolute Support Operational Partners We, the Heads of State and Government of

8 July | Key Documents, NATO Summits

Cyber Defence Pledge 1. In recognition of the new realities of security threats to NATO, we, the Allied Heads of State and Government, pledge to

8 July | Key Documents, NATO Summits

Issued by the Heads of State and Government participating in the meeting of the North Atlantic Council in Warsaw, 8-9 July 2016 We, the Heads

8 July | Key Documents, NATO Summits

Joint statement of the NATO-Georgia Commission at the level of Foreign Ministers We, Allied Foreign Ministers and the Foreign Minister of Georgia, met today in

8 July | NATO Summits, Speeches, Transcripts

Press Statement by NATO Secretary General Jens Stoltenberg at the Signing Ceremony of the EU-NATO Joint Declaration Followed by Statements by President Tusk and President Juncker July

8 July | NATO Summits, President Barack Obama, Speeches

Remarks by President Obama, President Tusk of the European Council, and President Juncker of the European Commission After U.S.-EU Meeting July 8, 2016 PRESIDENT OBAMA:

8 July | Cooperative Security, Fact Sheets, U.S. & NATO

FACT SHEET: U.S. Assurance and Deterrence Efforts in Support of NATO Allies From The White House In the last 18 months, the United States

8 July | Key Documents, NATO Summits

Joint Declaration by the President of the European Council, the President of the European Commission, and the Secretary General of the North Atlantic Treaty Organization


Posted in NATO

Liberal Democrat Voice

Posted: July 14, 2016 at 4:35 pm

There is a smell of defeatism in the air, a widespread view that the people have spoken and that we must respect them and accept their verdict. What nonsense! There is nothing sacred about a referendum vote, any more than the result of a General Election. We Lib Dems cannot accept Brexit because it would be a calamity that would undo everything we have always fought for. Furthermore reversing Brexit is not a hopeless cause.

When the time is right, there is every justification for a new referendum. A referendum must offer a clear choice, which the last one did not. When Theresa May says “Brexit means Brexit”, what does Brexit mean? Some Leavers want no more free movement of labour, which means no access to the single market. Others want access, which means the free movement of labour must stay. Indeed, with only a very tiny margin in favour of Leave, far more votes were cast for Remain than for either of these two incompatible objectives of the Leave camp.

A re-run is especially justified if there is a dramatic change in circumstances, such as a massive shift in public opinion. This is very likely. Most economists and every independent expert organization (the IMF, the IFS and the Bank of England) predict a serious recession. Leavers promised a future in the sunny uplands, and lots of new money for the NHS, not more austerity and severe cuts in spending. Now they may be ringing their bells, but soon they will be wringing their hands.

Finally, the report from the Committee on Climate Change on fracking has been released and has produced some interesting results, raising concerns about the effect of fracking on the UK's climate change targets.

UK shale gas production is not going to be the answer to our energy needs when it comes to meeting our climate change targets. It is now obvious the UK has missed the boat, unless development is done on a huge scale, industrializing vast areas of rural England. The regulations recommended in the report to facilitate the size of expansion needed will never be in place.

The regulations needed to mitigate fugitive emissions are also not financially viable, making fracking even more expensive. There will always be methane leaks; the industry cannot stop them. The industry's own figures of 2% to 5% expected methane leakage from exploration, production and the supporting infrastructure needed will put the UK's climate change targets in jeopardy.

The report states that UK shale gas production must displace imported gas rather than increase domestic consumption. Allowing unabated consumption above these levels would not be consistent with the decarbonisation required under the Climate Change Act. Each alternative has an almost identical climate change footprint, and the imports are likely to be cheaper. If the government commits to using domestic fracked gas, this will drive up energy prices and eventually hit the poorest families in the pocket!

The report does not consider ongoing technical issues such as waste disposal, water pollution, setback distances, community disruption, seismic concerns, industrialisation, and so on. It is time for the government to stop bending over for the gas and oil lobbyists and realise they are backing the wrong horse.

A familiar face heads back to Lib Dem HQ. Phil Reilly, the man who wrote Nick Clegg's brilliant resignation speech which inspired 20,000 people to join the party, has been appointed interim Head of Communications following the departure of James Holt to pastures new. Phil has been working for Nick since then, including helping Nick with his new book, which is coming out in September.

Since the election, he's shared some funny stories on his blog, Blimey O'Reilly.

The most recent involves his old colleague Mr Holt, who had a bit of a brainwave at the Eastleigh by-election to get Nick Clegg out of the campaign HQ without being harassed by a throng of journalists. I wonder if Boris might consider using the same technique when he leaves home every day, although I doubt the same personnel would be as willing to help him.

The entrance to the building was an enormous roll-up, corrugated metal affair, like a huge garage door or the sort of thing you would use to protect a massive off-licence after hours. The press pack were all expecting the DPM to come out through the smaller front door, built into the roll-up wall, into an open car park, where they could pounce on him like jaguars on a gazelle. So, Holty arranged dozens of activists, some gripping placards and bright orange diamonds, inside the building facing the entrance, like infantry preparing to march into battle.

Behind the advanced guard was Nick Clegg flanked by dozens more activists and, rather conspicuously, a couple of the Metropolitan Polices finest close protection officers.

Mark Easton presented some interesting Brexit expectations polling by ComRes for the BBC last night on the Ten O'Clock News. Here are a couple of highlights:

Most Britons think that maintaining access to the single market should be the priority for the Government when negotiating the UK's withdrawal from the EU (66%), while just a third say this of restricting freedom of movement (31%).

The new Secretary of State for Exiting the European Union, David Davis, has already helpfully set out his Brexit negotiating positions in a speech to the Institute of Chartered Engineers in March (carried in full on his website). He has also more recently written a detailed article on the subject on Conservative Home.

The Federal Policy Committee is traditionally very busy in the immediate run-up to the summer holiday. That is because of conference deadlines and the need to get everything concluded before August when a lot of people are away.

The most recent meeting of the committee, which came hot on the heels of the last one, was on 13th July 2016. It also happened to be the day that Labour plunged further into disarray following the decision that Jeremy Corbyn would appear on the ballot paper in their leadership election and, of course, the day the country had a new Prime Minister foisted upon it.

As we were going through the meeting, government announcements were being made about new Cabinet members. We paused several times for a collective intake of breath.

There was a lot to discuss. We did not finish until some time after 9pm.

Gareth Epps has resigned from the committee because he has taken a job that is politically restricted. Gareth has been a very active member of FPC for a long time and he will certainly be missed from the committee. We were, however, delighted to welcome Antony Hook as his replacement.

The committee agreed the chairs, membership, and remits of three new working groups. Each of those groups was recommended by the Agenda 2020 exercise.

The first of these was education. The remit requires the group to identify proposals for new policy in Education in England. The group is particularly to be directed to identify policies which could be strong campaigning issues within education, reinforcing our overall liberal vision of creating opportunity for everyone regardless of background. The group is also expected to consider and address Liberal Democrat principles on diversity and equalities in developing their proposals. It will deal with the overall principles of education, Early Years, funding, structures, academies, governors, standards and inspections, quality, teacher recruitment, closing the attainment gap between disadvantaged and non-disadvantaged students, school and the world of work, Further Education and adult education. It will not deal with Higher Education.

The chair is to be Lucy Nethsingha. The membership of the group was appointed. It is fair to say that there was very strong competition for places. In fact, we had over 830 applications for the working groups.

It does seem that the news over the past fortnight or so has been dominated by people saying goodbye to spend more time with their families, or whatever. In some cases they will be missed more than in others. On this occasion, it is time to mark the retirement from the House of Lords of our longtime spokesperson on Universities, Baroness (Margaret) Sharp of Guildford, who has decided to take up the option to retire at the still relatively sprightly age of 77.

Margaret is another of those whose work over many years led to a triumph celebrated by others, in that it was her success in reducing the Conservative majority in Guildford from over 20,000 to a rather more slender 4,500 that helped Sue Doughty to her famous success in 2001.

An economist of some regard, Margaret taught at the London School of Economics, as well as working in the National Economic Development Office in the 1970s, before becoming politically active with the onset of the Social Democrats.


Posted in Liberal

North Atlantic Treaty Organization (NATO) | Britannica.com

Posted: July 12, 2016 at 6:20 am

Alternative title: NATO

North Atlantic Treaty Organization (NATO), military alliance established by the North Atlantic Treaty (also called the Washington Treaty) of April 4, 1949, which sought to create a counterweight to Soviet armies stationed in central and eastern Europe after World War II. Its original members were Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom, and the United States. Joining the original signatories were Greece and Turkey (1952); West Germany (1955; from 1990 as Germany); Spain (1982); the Czech Republic, Hungary, and Poland (1999); Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, and Slovenia (2004); and Albania and Croatia (2009). France withdrew from the integrated military command of NATO in 1966 but remained a member of the organization; it resumed its position in NATO's military command in 2009.

The heart of NATO is expressed in Article 5 of the North Atlantic Treaty, in which the signatory members agree that

an armed attack against one or more of them in Europe or North America shall be considered an attack against them all; and consequently they agree that, if such an armed attack occurs, each of them, in exercise of the right of individual or collective self-defense recognized by Article 51 of the Charter of the United Nations, will assist the Party or Parties so attacked by taking forthwith, individually and in concert with the other Parties, such action as it deems necessary, including the use of armed force, to restore and maintain the security of the North Atlantic area.

NATO invoked Article 5 for the first time in 2001, after terrorist attacks organized by exiled Saudi Arabian millionaire Osama bin Laden destroyed the World Trade Center in New York City and part of the Pentagon outside Washington, D.C., killing some 3,000 people.

Article 6 defines the geographic scope of the treaty as covering an armed attack on the territory of any of the Parties in Europe or North America. Other articles commit the allies to strengthening their democratic institutions, to building their collective military capability, to consulting each other, and to remaining open to inviting other European states to join.

(Photo: Alben W. Barkley at the North Atlantic Treaty signing; Encyclopædia Britannica, Inc.)

After World War II in 1945, western Europe was economically exhausted and militarily weak (the western Allies had rapidly and drastically reduced their armies at the end of the war), and newly powerful communist parties had arisen in France and Italy. By contrast, the Soviet Union had emerged from the war with its armies dominating all the states of central and eastern Europe, and by 1948 communists under Moscow's sponsorship had consolidated their control of the governments of those countries and suppressed all noncommunist political activity. What became known as the Iron Curtain, a term popularized by Winston Churchill, had descended over central and eastern Europe. Further, wartime cooperation between the western Allies and the Soviets had completely broken down. Each side was organizing its own sector of occupied Germany, so that two German states would emerge, a democratic one in the west and a communist one in the east.

In 1948 the United States launched the Marshall Plan, which infused massive amounts of economic aid to the countries of western and southern Europe on the condition that they cooperate with each other and engage in joint planning to hasten their mutual recovery. As for military recovery, under the Brussels Treaty of 1948, the United Kingdom, France, and the Low Countries (Belgium, the Netherlands, and Luxembourg) concluded a collective-defense agreement called the Western European Union. It was soon recognized, however, that a more formidable alliance would be required to provide an adequate military counterweight to the Soviets.

By this time Britain, Canada, and the United States had already engaged in secret exploratory talks on security arrangements that would serve as an alternative to the United Nations (UN), which was becoming paralyzed by the rapidly emerging Cold War. In March 1948, following a virtual communist coup d'état in Czechoslovakia in February, the three governments began discussions on a multilateral collective-defense scheme that would enhance Western security and promote democratic values. These discussions were eventually joined by France, the Low Countries, and Norway and in April 1949 resulted in the North Atlantic Treaty.

Spurred by the North Korean invasion of South Korea in June 1950, the United States took steps to demonstrate that it would resist any Soviet military expansion or pressures in Europe. General Dwight D. Eisenhower, the leader of the Allied forces in western Europe in World War II, was named Supreme Allied Commander Europe (SACEUR) by the North Atlantic Council (NATOs governing body) in December 1950. He was followed as SACEUR by a succession of American generals.

The North Atlantic Council, which was established soon after the treaty came into effect, is composed of ministerial representatives of the member states, who meet at least twice a year. At other times the council, chaired by the NATO secretary-general, remains in permanent session at the ambassadorial level. Just as the position of SACEUR has always been held by an American, the secretary-generalship has always been held by a European.

NATO's military organization encompasses a complete system of commands for possible wartime use. The Military Committee, consisting of representatives of the military chiefs of staff of the member states, subsumes two strategic commands: Allied Command Operations (ACO) and Allied Command Transformation (ACT). ACO is headed by the SACEUR and located at Supreme Headquarters Allied Powers Europe (SHAPE) in Casteau, Belgium. ACT is headquartered in Norfolk, Virginia, U.S. During the alliance's first 20 years, more than $3 billion worth of infrastructure for NATO forces (bases, airfields, pipelines, communications networks, depots) was jointly planned, financed, and built, with about one-third of the funding from the United States. NATO funding generally is not used for the procurement of military equipment, which is provided by the member states, though the NATO Airborne Early Warning Force, a fleet of radar-bearing aircraft designed to protect against a surprise low-flying attack, was funded jointly.

A serious issue confronting NATO in the early and mid-1950s was the negotiation of West Germany's participation in the alliance. The prospect of a rearmed Germany was understandably greeted with widespread unease and hesitancy in western Europe, but the country's strength had long been recognized as necessary to protect western Europe from a possible Soviet invasion. Accordingly, arrangements for West Germany's safe participation in the alliance were worked out as part of the Paris Agreements of October 1954, which ended the occupation of West German territory by the western Allies and provided for both the limitation of West German armaments and the country's accession to the Brussels Treaty. In May 1955 West Germany joined NATO, which prompted the Soviet Union to form the Warsaw Pact alliance in central and eastern Europe the same year. The West Germans subsequently contributed many divisions and substantial air forces to the NATO alliance. By the time the Cold War ended, some 900,000 troops, nearly half of them from six countries (the United States, United Kingdom, France, Belgium, Canada, and the Netherlands), were stationed in West Germany.

France's relationship with NATO became strained after 1958, as President Charles de Gaulle increasingly criticized the organization's domination by the United States and the intrusion upon French sovereignty by NATO's many international staffs and activities. He argued that such integration subjected France to automatic war at the decision of foreigners. In July 1966 France formally withdrew from the military command structure of NATO and required NATO forces and headquarters to leave French soil; nevertheless, de Gaulle proclaimed continued French adherence to the North Atlantic Treaty in case of unprovoked aggression. After NATO moved its headquarters from Paris to Brussels, France maintained a liaison relationship with NATO's integrated military staffs, continued to sit in the council, and continued to maintain and deploy ground forces in West Germany, though it did so under new bilateral agreements with the West Germans rather than under NATO jurisdiction. In 2009 France rejoined the military command structure of NATO.

From its founding, NATO's primary purpose was to unify and strengthen the Western Allies' military response to a possible invasion of western Europe by the Soviet Union and its Warsaw Pact allies. In the early 1950s NATO relied partly on the threat of massive nuclear retaliation from the United States to counter the Warsaw Pact's much larger ground forces. Beginning in 1957, this policy was supplemented by the deployment of American nuclear weapons in western European bases. NATO later adopted a flexible response strategy, which the United States interpreted to mean that a war in Europe did not have to escalate to an all-out nuclear exchange. Under this strategy, many Allied forces were equipped with American battlefield and theatre nuclear weapons under a dual-control (or dual-key) system, which allowed both the country hosting the weapons and the United States to veto their use. Britain retained control of its strategic nuclear arsenal but brought it within NATO's planning structures; France's nuclear forces remained completely autonomous.

A conventional and nuclear stalemate between the two sides continued through the construction of the Berlin Wall in the early 1960s, détente in the 1970s, and the resurgence of Cold War tensions in the 1980s after the Soviet Union's invasion of Afghanistan in 1979 and the election of U.S. President Ronald Reagan in 1980. After 1985, however, far-reaching economic and political reforms introduced by Soviet leader Mikhail Gorbachev fundamentally altered the status quo. In July 1989 Gorbachev announced that Moscow would no longer prop up communist governments in central and eastern Europe and thereby signaled his tacit acceptance of their replacement by freely elected (and noncommunist) administrations. Moscow's abandonment of control over central and eastern Europe meant the dissipation of much of the military threat that the Warsaw Pact had formerly posed to western Europe, a fact that led some to question the need to retain NATO as a military organization, especially after the Warsaw Pact's dissolution in 1991. The reunification of Germany in October 1990 and its retention of NATO membership created both a need and an opportunity for NATO to be transformed into a more political alliance devoted to maintaining international stability in Europe.

After the Cold War, NATO was reconceived as a cooperative-security organization whose mandate was to include two main objectives: to foster dialogue and cooperation with former adversaries in the Warsaw Pact and to manage conflicts in areas on the European periphery, such as the Balkans. In keeping with the first objective, NATO established the North Atlantic Cooperation Council (1991; later replaced by the Euro-Atlantic Partnership Council) to provide a forum for the exchange of views on political and security issues, as well as the Partnership for Peace (PfP) program (1994) to enhance European security and stability through joint military training exercises with NATO and non-NATO states, including the former Soviet republics and allies. Special cooperative links were also set up with two PfP countries: Russia and Ukraine.

The second objective entailed NATO's first use of military force, when it entered the war in Bosnia and Herzegovina in 1995 by staging air strikes against Bosnian Serb positions around the capital city of Sarajevo. The subsequent Dayton Accords, which were initialed by representatives of Bosnia and Herzegovina, the Republic of Croatia, and the Federal Republic of Yugoslavia, committed each state to respecting the others' sovereignty and to settling disputes peacefully; it also laid the groundwork for stationing NATO peacekeeping troops in the region. A 60,000-strong Implementation Force (IFOR) was initially deployed, though a smaller contingent remained in Bosnia under a different name, the Stabilization Force (SFOR). In March 1999 NATO launched massive air strikes against Serbia in an attempt to force the Yugoslav government of Slobodan Milošević to accede to diplomatic provisions designed to protect the predominantly Muslim Albanian population in the province of Kosovo. Under the terms of a negotiated settlement to the fighting, NATO deployed a peacekeeping force called the Kosovo Force (KFOR).

The crisis over Kosovo and the ensuing war gave renewed impetus to efforts by the European Union (EU) to construct a new crisis-intervention force, which would make the EU less dependent on NATO and U.S. military resources for conflict management. These efforts prompted significant debates about whether enhancing the EU's defensive capabilities would strengthen or weaken NATO. Simultaneously there was much discussion of the future of NATO in the post-Cold War era. Some observers argued that the alliance should be dissolved, noting that it was created to confront an enemy that no longer existed; others called for a broad expansion of NATO membership to include Russia. Most suggested alternative roles, including peacekeeping. By the start of the second decade of the 21st century, it appeared likely that the EU would not develop capabilities competitive with those of NATO or even seek to do so; as a result, earlier worries associated with the spectre of rivalry between the two Brussels-based organizations dissipated.

(Photo: North Atlantic Treaty Organization flag-raising ceremony, 1999; NATO photos)

During the presidency of Bill Clinton (1993–2001), the United States led an initiative to enlarge NATO membership gradually to include some of the former Soviet allies. In the concurrent debate over enlargement, supporters of the initiative argued that NATO membership was the best way to begin the long process of integrating these states into regional political and economic institutions such as the EU. Some also feared future Russian aggression and suggested that NATO membership would guarantee freedom and security for the newly democratic regimes. Opponents pointed to the enormous cost of modernizing the military forces of new members; they also argued that enlargement, which Russia would regard as a provocation, would hinder democracy in that country and enhance the influence of hard-liners. Despite these disagreements, the Czech Republic, Hungary, and Poland joined NATO in 1999; Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, and Slovenia were admitted in 2004; and Albania and Croatia acceded to the alliance in 2009.

Meanwhile, by the beginning of the 21st century, Russia and NATO had formed a strategic relationship. No longer considered NATO's chief enemy, Russia cemented a new cooperative bond with NATO in 2001 to address such common concerns as international terrorism, nuclear nonproliferation, and arms control. This bond was subsequently subject to fraying, however, in large part because of reasons associated with Russian domestic politics.

Events following the September 11 terrorist attacks in 2001 led to the forging of a new dynamic within the alliance, one that increasingly favoured the military engagement of members outside Europe, initially with a mission against Taliban forces in Afghanistan beginning in the summer of 2003 and subsequently with air operations against the regime of Muammar al-Qaddafi in Libya in early 2011. As a result of the increased tempo of military operations undertaken by the alliance, the long-standing issue of burden sharing was revived, with some officials warning that failure to share the costs of NATO operations more equitably would lead to unraveling of the alliance. Most observers regarded that scenario as unlikely, however.



Posted in NATO

Minn. police shooting reignites debate over Second Amendment …

Posted: at 6:19 am

President Obama responded to the recent police shootings in Louisiana and Minnesota by recognizing the need to root out bias in law enforcement and encouraging communities to trust their local police department.

A memorial left for Philando Castile following the police shooting death of the black man on July 7, 2016, in St. Paul, Minn. (Photo: Stephen Maturen, Getty Images)

A black Minnesota man fatally shot by police Wednesday during a stop for a broken tail light was a licensed gun owner, prompting some observers to suggest that the debate over gun control and the Second Amendment has racial undertones.

When police in Falcon Heights, Minn., stopped the car in which Philando Castile, 37, was riding on Wednesday night, Castile attempted to give them his license and registration, as requested. He also told them he was a licensed weapon owner, according to the Facebook Live video posted by Diamond “Lavish” Reynolds, who identified herself as Castile’s fiancée.

As Castile put his hands up, police fired into his arm four times, according to the video. He was pronounced dead later at a hospital.

“I’m waiting to hear the human outcry from Second Amendment defenders over (this incident),” NAACP president and CEO Cornell William Brooks told USA TODAY Thursday.

Brooks was preparing to travel to Minnesota to get up to speed on the Castile case after a trip to Baton Rouge, La., to get details on the police-involved shooting of another black man earlier this week.

“When it comes to an African American with a license to carry a firearm, it appears that his pigmentation, his degree of pigmentation, is more important than the permit or license to carry a firearm,” Brooks said. “One would hope and pray that’s not true.”

Tweeted author and TV commentator Keith Boykin: “Does the Second Amendment only apply to White People?”

Amanda Zantal-Wiener tweeted about the National Rifle Association, perhaps the most powerful of the national organizations supporting the Second Amendment, saying: “Hey, NRA, I’m sure you’re just moments away from defending Philando Castile’s second amendment rights. Right? Any minute now, right?”

The NRA did not immediately respond to a request for an interview. The organization has been publicly silent regarding the Minnesota shooting.

But at least two organizations, the Second Amendment Foundation and the Citizens Committee for the Right to Keep and Bear Arms, both based in Bellevue, Wash., expressed concern over the case and called for an investigation by state-level entities, perhaps even from a state outside of Minnesota.

“Wednesday night’s shooting of Philando Castile is very troubling, especially to the firearms community, because he was a legally armed private citizen who may have done nothing more than reach for his identification and carry permit,” Alan Gottlieb, founder and executive vice president of the foundation, and chair of the Citizens Committee, said in a statement Thursday.

“We are cognizant of the racial overtones arising from Mr. Castile’s death,” Gottlieb said. “The concerns of our members, and honest gun owners everywhere, go even deeper. Exercising our right to bear arms should not translate to a death sentence over something so trivial as a traffic stop for a broken tail light, and we are going to watch this case with a magnifying glass.”

Survey data show that white Americans and black Americans appear to have two different and distinct relationships with firearms.

Data released in 2014 by the Pew Research Center showed that blacks are less likely than whites to have a firearm at home. According to the study, 41% of whites said they had a gun at home compared to 19% of blacks.

But there has been much research to show that black Americans are more likely than white Americans to be gun homicide victims.

In 2010, blacks were 55% of shooting homicide victims but 13% of the U.S. population, according to a Pew review of data from the Centers for Disease Control and Prevention. By contrast, in the same year, whites were 25% of gun homicide victims but 65% of the population, according to the same data.

In the early days of the Second Amendment, blacks were prohibited from possessing firearms, according to the National Constitution Center, a nonprofit organization in Philadelphia. The measure was intended to protect Americans' right to bear arms, and it designated the states as the entities that would manage this.

Gerald Horne, an historian at the University of Houston, said during a recent interview with the Real News Network that there was a race and class bias inherent in the amendment’s provisions.

“The Second Amendment certainly did not apply to enslaved Africans,” Horne said. “All measures were taken to keep arms out of their hands. The Second Amendment did not apply to indigenous people because the European settlers were at war with the indigenous people to take their land. And providing arms to them was considered somewhat akin to a capital offense. So the Second Amendment was mostly applicable to the settler class.”

Horne says that many of the battles during Reconstruction were about keeping arms out of the hands of black Americans; he says one of the key reasons the Ku Klux Klan was formed in the post-Civil War era was to keep arms out of the hands of blacks.

Said Brooks, “I would just simply note that in a state like Texas, where we have thousands upon thousands of people with concealed weapons permits, a permit is sufficient proof to vote while a college ID is not. Think about that.”

Follow Melanie Eversley on Twitter: @MelanieEversley


Read or Share this story: http://usat.ly/29maUsC

More here:
Minn. police shooting reignites debate over Second Amendment …

Posted in Second Amendment | Comments Off on Minn. police shooting reignites debate over Second Amendment …

North Atlantic Treaty Organization (NATO) | Britannica.com

Posted: at 5:27 am

Alternative title: NATO

North Atlantic Treaty Organization (NATO), military alliance established by the North Atlantic Treaty (also called the Washington Treaty) of April 4, 1949, which sought to create a counterweight to Soviet armies stationed in central and eastern Europe after World War II. Its original members were Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom, and the United States. Joining the original signatories were Greece and Turkey (1952); West Germany (1955; from 1990 as Germany); Spain (1982); the Czech Republic, Hungary, and Poland (1999); Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, and Slovenia (2004); and Albania and Croatia (2009). France withdrew from the integrated military command of NATO in 1966 but remained a member of the organization; it resumed its position in NATO's military command in 2009.

The heart of NATO is expressed in Article 5 of the North Atlantic Treaty, in which the signatory members agree that

an armed attack against one or more of them in Europe or North America shall be considered an attack against them all; and consequently they agree that, if such an armed attack occurs, each of them, in exercise of the right of individual or collective self-defense recognized by Article 51 of the Charter of the United Nations, will assist the Party or Parties so attacked by taking forthwith, individually and in concert with the other Parties, such action as it deems necessary, including the use of armed force, to restore and maintain the security of the North Atlantic area.

NATO invoked Article 5 for the first time in 2001, after terrorist attacks organized by exiled Saudi Arabian millionaire Osama bin Laden destroyed the World Trade Center in New York City and part of the Pentagon outside Washington, D.C., killing some 3,000 people.

Article 6 defines the geographic scope of the treaty as covering "an armed attack on the territory of any of the Parties in Europe or North America." Other articles commit the allies to strengthening their democratic institutions, to building their collective military capability, to consulting each other, and to remaining open to inviting other European states to join.

After World War II in 1945, western Europe was economically exhausted and militarily weak (the western Allies had rapidly and drastically reduced their armies at the end of the war), and newly powerful communist parties had arisen in France and Italy. By contrast, the Soviet Union had emerged from the war with its armies dominating all the states of central and eastern Europe, and by 1948 communists under Moscow's sponsorship had consolidated their control of the governments of those countries and suppressed all noncommunist political activity. What became known as the Iron Curtain, a term popularized by Winston Churchill, had descended over central and eastern Europe. Further, wartime cooperation between the western Allies and the Soviets had completely broken down. Each side was organizing its own sector of occupied Germany, so that two German states would emerge, a democratic one in the west and a communist one in the east.

In 1948 the United States launched the Marshall Plan, which infused massive amounts of economic aid to the countries of western and southern Europe on the condition that they cooperate with each other and engage in joint planning to hasten their mutual recovery. As for military recovery, under the Brussels Treaty of 1948, the United Kingdom, France, and the Low Countries (Belgium, the Netherlands, and Luxembourg) concluded a collective-defense agreement called the Western European Union. It was soon recognized, however, that a more formidable alliance would be required to provide an adequate military counterweight to the Soviets.

By this time Britain, Canada, and the United States had already engaged in secret exploratory talks on security arrangements that would serve as an alternative to the United Nations (UN), which was becoming paralyzed by the rapidly emerging Cold War. In March 1948, following a virtual communist coup d'état in Czechoslovakia in February, the three governments began discussions on a multilateral collective-defense scheme that would enhance Western security and promote democratic values. These discussions were eventually joined by France, the Low Countries, and Norway and in April 1949 resulted in the North Atlantic Treaty.

Spurred by the North Korean invasion of South Korea in June 1950, the United States took steps to demonstrate that it would resist any Soviet military expansion or pressures in Europe. General Dwight D. Eisenhower, the leader of the Allied forces in western Europe in World War II, was named Supreme Allied Commander Europe (SACEUR) by the North Atlantic Council (NATO's governing body) in December 1950. He was followed as SACEUR by a succession of American generals.

The North Atlantic Council, which was established soon after the treaty came into effect, is composed of ministerial representatives of the member states, who meet at least twice a year. At other times the council, chaired by the NATO secretary-general, remains in permanent session at the ambassadorial level. Just as the position of SACEUR has always been held by an American, the secretary-generalship has always been held by a European.

NATO's military organization encompasses a complete system of commands for possible wartime use. The Military Committee, consisting of representatives of the military chiefs of staff of the member states, subsumes two strategic commands: Allied Command Operations (ACO) and Allied Command Transformation (ACT). ACO is headed by the SACEUR and located at Supreme Headquarters Allied Powers Europe (SHAPE) in Casteau, Belgium. ACT is headquartered in Norfolk, Virginia, U.S. During the alliance's first 20 years, more than $3 billion worth of infrastructure for NATO forces (bases, airfields, pipelines, communications networks, depots) was jointly planned, financed, and built, with about one-third of the funding from the United States. NATO funding generally is not used for the procurement of military equipment, which is provided by the member states, though the NATO Airborne Early Warning Force, a fleet of radar-bearing aircraft designed to protect against a surprise low-flying attack, was funded jointly.

A serious issue confronting NATO in the early and mid-1950s was the negotiation of West Germany's participation in the alliance. The prospect of a rearmed Germany was understandably greeted with widespread unease and hesitancy in western Europe, but the country's strength had long been recognized as necessary to protect western Europe from a possible Soviet invasion. Accordingly, arrangements for West Germany's safe participation in the alliance were worked out as part of the Paris Agreements of October 1954, which ended the occupation of West German territory by the western Allies and provided for both the limitation of West German armaments and the country's accession to the Brussels Treaty. In May 1955 West Germany joined NATO, which prompted the Soviet Union to form the Warsaw Pact alliance in central and eastern Europe the same year. The West Germans subsequently contributed many divisions and substantial air forces to the NATO alliance. By the time the Cold War ended, some 900,000 troops, nearly half of them from six countries (the United States, the United Kingdom, France, Belgium, Canada, and the Netherlands), were stationed in West Germany.

France's relationship with NATO became strained after 1958, as President Charles de Gaulle increasingly criticized the organization's domination by the United States and the intrusion upon French sovereignty by NATO's many international staffs and activities. He argued that such integration subjected France to automatic war at the decision of foreigners. In July 1966 France formally withdrew from the military command structure of NATO and required NATO forces and headquarters to leave French soil; nevertheless, de Gaulle proclaimed continued French adherence to the North Atlantic Treaty in case of unprovoked aggression. After NATO moved its headquarters from Paris to Brussels, France maintained a liaison relationship with NATO's integrated military staffs, continued to sit in the council, and continued to maintain and deploy ground forces in West Germany, though it did so under new bilateral agreements with the West Germans rather than under NATO jurisdiction. In 2009 France rejoined the military command structure of NATO.

From its founding, NATO's primary purpose was to unify and strengthen the Western Allies' military response to a possible invasion of western Europe by the Soviet Union and its Warsaw Pact allies. In the early 1950s NATO relied partly on the threat of massive nuclear retaliation from the United States to counter the Warsaw Pact's much larger ground forces. Beginning in 1957, this policy was supplemented by the deployment of American nuclear weapons in western European bases. NATO later adopted a "flexible response" strategy, which the United States interpreted to mean that a war in Europe did not have to escalate to an all-out nuclear exchange. Under this strategy, many Allied forces were equipped with American battlefield and theatre nuclear weapons under a dual-control (or dual-key) system, which allowed both the country hosting the weapons and the United States to veto their use. Britain retained control of its strategic nuclear arsenal but brought it within NATO's planning structures; France's nuclear forces remained completely autonomous.

A conventional and nuclear stalemate between the two sides continued through the construction of the Berlin Wall in the early 1960s, détente in the 1970s, and the resurgence of Cold War tensions in the 1980s after the Soviet Union's invasion of Afghanistan in 1979 and the election of U.S. President Ronald Reagan in 1980. After 1985, however, far-reaching economic and political reforms introduced by Soviet leader Mikhail Gorbachev fundamentally altered the status quo. In July 1989 Gorbachev announced that Moscow would no longer prop up communist governments in central and eastern Europe and thereby signaled his tacit acceptance of their replacement by freely elected (and noncommunist) administrations. Moscow's abandonment of control over central and eastern Europe meant the dissipation of much of the military threat that the Warsaw Pact had formerly posed to western Europe, a fact that led some to question the need to retain NATO as a military organization, especially after the Warsaw Pact's dissolution in 1991. The reunification of Germany in October 1990 and its retention of NATO membership created both a need and an opportunity for NATO to be transformed into a more political alliance devoted to maintaining international stability in Europe.

After the Cold War, NATO was reconceived as a cooperative-security organization whose mandate was to include two main objectives: to foster dialogue and cooperation with former adversaries in the Warsaw Pact and to manage conflicts in areas on the European periphery, such as the Balkans. In keeping with the first objective, NATO established the North Atlantic Cooperation Council (1991; later replaced by the Euro-Atlantic Partnership Council) to provide a forum for the exchange of views on political and security issues, as well as the Partnership for Peace (PfP) program (1994) to enhance European security and stability through joint military training exercises with NATO and non-NATO states, including the former Soviet republics and allies. Special cooperative links were also set up with two PfP countries: Russia and Ukraine.

The second objective entailed NATO's first use of military force, when it entered the war in Bosnia and Herzegovina in 1995 by staging air strikes against Bosnian Serb positions around the capital city of Sarajevo. The subsequent Dayton Accords, which were initialed by representatives of Bosnia and Herzegovina, the Republic of Croatia, and the Federal Republic of Yugoslavia, committed each state to respecting the others' sovereignty and to settling disputes peacefully; it also laid the groundwork for stationing NATO peacekeeping troops in the region. A 60,000-strong Implementation Force (IFOR) was initially deployed, though a smaller contingent remained in Bosnia under a different name, the Stabilization Force (SFOR). In March 1999 NATO launched massive air strikes against Serbia in an attempt to force the Yugoslav government of Slobodan Milošević to accede to diplomatic provisions designed to protect the predominantly Muslim Albanian population in the province of Kosovo. Under the terms of a negotiated settlement to the fighting, NATO deployed a peacekeeping force called the Kosovo Force (KFOR).

The crisis over Kosovo and the ensuing war gave renewed impetus to efforts by the European Union (EU) to construct a new crisis-intervention force, which would make the EU less dependent on NATO and U.S. military resources for conflict management. These efforts prompted significant debates about whether enhancing the EU's defensive capabilities would strengthen or weaken NATO. Simultaneously there was much discussion of the future of NATO in the post-Cold War era. Some observers argued that the alliance should be dissolved, noting that it was created to confront an enemy that no longer existed; others called for a broad expansion of NATO membership to include Russia. Most suggested alternative roles, including peacekeeping. By the start of the second decade of the 21st century, it appeared likely that the EU would not develop capabilities competitive with those of NATO or even seek to do so; as a result, earlier worries associated with the spectre of rivalry between the two Brussels-based organizations dissipated.

During the presidency of Bill Clinton (1993-2001), the United States led an initiative to enlarge NATO membership gradually to include some of the former Soviet allies. In the concurrent debate over enlargement, supporters of the initiative argued that NATO membership was the best way to begin the long process of integrating these states into regional political and economic institutions such as the EU. Some also feared future Russian aggression and suggested that NATO membership would guarantee freedom and security for the newly democratic regimes. Opponents pointed to the enormous cost of modernizing the military forces of new members; they also argued that enlargement, which Russia would regard as a provocation, would hinder democracy in that country and enhance the influence of hard-liners. Despite these disagreements, the Czech Republic, Hungary, and Poland joined NATO in 1999; Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, and Slovenia were admitted in 2004; and Albania and Croatia acceded to the alliance in 2009.

Meanwhile, by the beginning of the 21st century, Russia and NATO had formed a strategic relationship. No longer considered NATO's chief enemy, Russia cemented a new cooperative bond with NATO in 2001 to address such common concerns as international terrorism, nuclear nonproliferation, and arms control. This bond subsequently frayed, however, in large part for reasons associated with Russian domestic politics.

Events following the September 11 terrorist attacks in 2001 led to the forging of a new dynamic within the alliance, one that increasingly favoured the military engagement of members outside Europe, initially with a mission against Taliban forces in Afghanistan beginning in the summer of 2003 and subsequently with air operations against the regime of Muammar al-Qaddafi in Libya in early 2011. As a result of the increased tempo of military operations undertaken by the alliance, the long-standing issue of burden sharing was revived, with some officials warning that failure to share the costs of NATO operations more equitably would lead to the unraveling of the alliance. Most observers regarded that scenario as unlikely, however.


Originally posted here:
North Atlantic Treaty Organization (NATO) | Britannica.com

Posted in NATO | Comments Off on North Atlantic Treaty Organization (NATO) | Britannica.com

Alternative medicine – Wikipedia, the free encyclopedia

Posted: July 9, 2016 at 8:10 pm

Alternative medicine is any practice that is put forward as having the healing effects of medicine, but does not originate from evidence gathered using the scientific method,[n 1][n 2][n 3] is not part of biomedicine,[n 1][n 4][n 5][n 6] or is contradicted by scientific evidence or established science.[1][2][3] It consists of a wide variety of health care practices, products and therapies, ranging from being biologically plausible but not well tested, to being directly contradicted by evidence and science, or even harmful or toxic.[n 4][1][3][4][5][6] Examples include new and traditional medicine practices such as homeopathy, naturopathy, chiropractic, energy medicine, various forms of acupuncture, traditional Chinese medicine, Ayurvedic medicine, Sekkotsu, and Christian faith healing. The treatments are those that are not part of the science-based healthcare system, and are not clearly backed by scientific evidence.[7][8][10] Despite significant expenditures on testing alternative medicine, including $2.5 billion spent by the United States government, almost none have shown any effectiveness greater than that of false treatments (placebo), and alternative medicine has been criticized by prominent figures in science and medicine as being quackery, nonsense, fraudulent, or unethical.[11][12]

Complementary medicine is alternative medicine used together with conventional medical treatment, in a belief not confirmed using the scientific method that it "complements" (improves the efficacy of) the treatment.[n 7][14][15][16] CAM is the abbreviation for complementary and alternative medicine.[17][18] Integrative medicine (or integrative health) is the combination of the practices and methods of alternative medicine with conventional medicine.[19]

Alternative medical diagnoses and treatments are not included as science-based treatments that are taught in medical schools, and are not used in medical practice where treatments are based on what is established using the scientific method. Alternative therapies lack such scientific validation, and their effectiveness is either unproved or disproved.[n 8][1][14][21][22] Alternative medicine is usually based on religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, or fraud.[1][2][3][14] Regulation and licensing of alternative medicine and health care providers varies from country to country, and state to state.

The scientific community has criticized alternative medicine as being based on misleading statements, quackery, pseudoscience, antiscience, fraud, or poor scientific methodology. Promoting alternative medicine has been called dangerous and unethical.[n 9][24] Testing alternative medicine has been called a waste of scarce medical research resources.[25][26] Critics have said “there is really no such thing as alternative medicine, just medicine that works and medicine that doesn’t”,[27] and “Can there be any reasonable ‘alternative’ [to medicine based on evidence]?”[28]

Alternative medicine consists of a wide range of health care practices, products, and therapies. The shared feature is a claim to heal that is not based on the scientific method. Alternative medicine practices are diverse in their foundations and methodologies.[7] Alternative medicine practices may be classified by their cultural origins or by the types of beliefs upon which they are based.[1][2][7][14] Methods may incorporate or base themselves on traditional medicinal practices of a particular culture, folk knowledge, superstition, spiritual beliefs, belief in supernatural energies (antiscience), pseudoscience, errors in reasoning, propaganda, fraud, new or different concepts of health and disease, and any bases other than being proven by scientific methods.[1][2][3][14] Different cultures may have their own unique traditional or belief based practices developed recently or over thousands of years, and specific practices or entire systems of practices.

Alternative medical systems can be based on common belief systems that are not consistent with facts of science, such as in naturopathy or homeopathy.[7]

Homeopathy is a system developed in a belief that a substance that causes the symptoms of a disease in healthy people will cure similar symptoms in sick people.[n 10] It was developed before knowledge of atoms and molecules, and of basic chemistry, which shows that repeated dilution as practiced in homeopathy produces only water and that homeopathy is scientifically implausible.[31][32][33][34] Homeopathy is considered quackery in the medical community.[35]

Naturopathic medicine is based on a belief that the body heals itself using a supernatural vital energy that guides bodily processes,[36] a view in conflict with the paradigm of evidence-based medicine.[37] Many naturopaths have opposed vaccination,[38] and “scientific evidence does not support claims that naturopathic medicine can cure cancer or any other disease”.[39]

Alternative medical systems may be based on traditional medicine practices, such as traditional Chinese medicine, Ayurveda in India, or practices of other cultures around the world.[7]

Traditional Chinese medicine is a combination of traditional practices and beliefs developed over thousands of years in China, together with modifications made by the Communist party. Common practices include herbal medicine, acupuncture (insertion of needles in the body at specified points), massage (Tui na), exercise (qigong), and dietary therapy. The practices are based on belief in a supernatural energy called qi, considerations of Chinese Astrology and Chinese numerology, traditional use of herbs and other substances found in China, a belief that a map of the body is contained on the tongue which reflects changes in the body, and an incorrect model of the anatomy and physiology of internal organs.[1][40][41][42][43][44]

The Chinese Communist Party Chairman Mao Zedong, in response to the lack of modern medical practitioners, revived acupuncture, and its theory was rewritten to adhere to the political, economic and logistic necessities of providing for the medical needs of China's population.[45][page needed] In the 1950s the "history" and theory of traditional Chinese medicine was rewritten as communist propaganda, at Mao's insistence, to correct the supposed "bourgeois thought of Western doctors of medicine". Acupuncture gained attention in the United States when President Richard Nixon visited China in 1972, and the delegation was shown a patient undergoing major surgery while fully awake, ostensibly receiving acupuncture rather than anesthesia. Later it was found that the patients selected for the surgery had both a high pain tolerance and received heavy indoctrination before the operation; these demonstration cases were also frequently receiving morphine surreptitiously through an intravenous drip that observers were told contained only fluids and nutrients.[40] Cochrane reviews found acupuncture is not effective for a wide range of conditions.[47] A systematic review of systematic reviews found that for reducing pain, real acupuncture was no better than sham acupuncture.[48] Other reviews, however, have found that acupuncture is successful at reducing chronic pain, whereas sham acupuncture was not found to be better than a placebo or than no-acupuncture groups.[49]

Ayurvedic medicine is a traditional medicine of India. Ayurveda believes in the existence of three elemental substances, the doshas (called Vata, Pitta and Kapha), and states that a balance of the doshas results in health, while imbalance results in disease. Such disease-inducing imbalances can be adjusted and balanced using traditional herbs, minerals and heavy metals. Ayurveda stresses the use of plant-based medicines and treatments, with some animal products, and added minerals, including sulfur, arsenic, lead, and copper sulfate.[citation needed]

Safety concerns have been raised about Ayurveda, with two U.S. studies finding about 20 percent of Ayurvedic Indian-manufactured patent medicines contained toxic levels of heavy metals such as lead, mercury and arsenic. Other concerns include the use of herbs containing toxic compounds and the lack of quality control in Ayurvedic facilities. Incidents of heavy metal poisoning have been attributed to the use of these compounds in the United States.[5][52][53][54]

Bases of belief may include belief in the existence of supernatural energies undetected by the science of physics, as in biofields, or belief in properties of the energies of physics that are inconsistent with the laws of physics, as in energy medicine.[7]

Biofield therapies are intended to influence energy fields that, it is purported, surround and penetrate the body.[7] Writers such as the astrophysicist and advocate of scientific skepticism Carl Sagan (1934-1996) have described the lack of empirical evidence to support the existence of the putative energy fields on which these therapies are predicated.

Acupuncture is a component of traditional Chinese medicine. In acupuncture, it is believed that a supernatural energy called qi flows through the universe and through the body, and helps propel the blood, blockage of which leads to disease.[41] It is believed that insertion of needles at various parts of the body determined by astrological calculations can restore balance to the blocked flows, and thereby cure disease.[41]

Chiropractic was developed in the belief that manipulating the spine affects the flow of a supernatural vital energy and thereby affects health and disease.

In the western version of Japanese Reiki, the palms are placed on the patient near chakras, believed to be centers of supernatural energies, in the belief that these supernatural energies can be transferred from the palms of the practitioner to heal the patient.

Bioelectromagnetic-based therapies use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields, in an unconventional manner.[7] Magnetic healing does not claim the existence of supernatural energies, but asserts that magnets can be used to defy the laws of physics to influence health and disease.

Mind-body medicine takes a holistic approach to health that explores the interconnection between the mind, body, and spirit. It works under the premise that the mind can affect “bodily functions and symptoms”.[7] Mind-body medicine includes healing claims made in yoga, meditation, deep-breathing exercises, guided imagery, hypnotherapy, progressive relaxation, qi gong, and tai chi.[7]

Yoga, a method of traditional stretches, exercises, and meditations in Hinduism, may also be classified as an energy medicine insofar as its healing effects are believed to be due to a healing “life energy” that is absorbed into the body through the breath, and is thereby believed to treat a wide variety of illnesses and complaints.[56]

Since the 1990s, tai chi (t’ai chi ch’uan) classes that purely emphasise health have become popular in hospitals, clinics, and community and senior centers. This has occurred as the baby boomer generation has aged and the art’s reputation as a low-stress training method for seniors has become better known.[57][58] There has been some divergence between those who say they practice t’ai chi ch’uan primarily for self-defence, those who practice it for its aesthetic appeal (see wushu), and those who are more interested in its benefits to physical and mental health.

Qigong, chi kung, or chi gung, is a practice of aligning body, breath, and mind for health, meditation, and martial arts training. With roots in traditional Chinese medicine, philosophy, and martial arts, qigong is traditionally viewed as a practice to cultivate and balance qi (chi) or what has been translated as “life energy”.[59]

Substance-based practices use substances found in nature, such as herbs, foods, non-vitamin supplements and megavitamins, animal and fungal products, and minerals, including the use of these products in traditional medical practices that may also incorporate other methods.[7][12][60] Examples include healing claims for non-vitamin supplements, fish oil, omega-3 fatty acids, glucosamine, echinacea, flaxseed oil, and ginseng.[61] Herbal medicine, or phytotherapy, includes not just the use of plant products, but may also include the use of animal and mineral products.[12] It is among the most commercially successful branches of alternative medicine, and includes the tablets, powders and elixirs that are sold as “nutritional supplements”.[12] Only a very small percentage of these have been shown to have any efficacy, and there is little regulation as to standards and safety of their contents.[12] This may include the use of known toxic substances, such as the poison lead in traditional Chinese medicine.[61]

Manipulative and body-based practices feature the manipulation or movement of body parts, such as is done in bodywork and chiropractic manipulation.

Osteopathic manipulative medicine, also known as osteopathic manipulative treatment, is a core set of techniques of osteopathy and osteopathic medicine distinguishing these fields from mainstream medicine.[62]

Religion-based healing practices, such as the use of prayer and the laying on of hands in Christian faith healing, and shamanism, rely on belief in divine or spiritual intervention for healing.

Shamanism is a practice of many cultures around the world, in which a practitioner reaches an altered state of consciousness in order to encounter and interact with the spirit world or channel supernatural energies, in the belief that they can heal.[63]

Some alternative medicine practices may be based on pseudoscience, ignorance, or flawed reasoning.[64] This can lead to fraud.[1]

Practitioners of electricity and magnetism based healing methods may deliberately exploit a patient’s ignorance of physics in order to defraud them.[14]

“Alternative medicine” is a loosely defined set of products, practices, and theories that are believed or perceived by their users to have the healing effects of medicine,[n 2][n 4] but whose effectiveness has not been clearly established using scientific methods,[n 2][n 3][1][3][20][22] whose theory and practice is not part of biomedicine,[n 4][n 1][n 5][n 6] or whose theories or practices are directly contradicted by scientific evidence or scientific principles used in biomedicine.[1][2][3] “Biomedicine” is that part of medical science that applies principles of biology, physiology, molecular biology, biophysics, and other natural sciences to clinical practice, using scientific methods to establish the effectiveness of that practice. Alternative medicine is a diverse group of medical and health care systems, practices, and products that originate outside of biomedicine,[n 1] are not considered part of biomedicine,[7] are not widely used by the biomedical healthcare professions,[69] and are not taught as skills practiced in biomedicine.[69] Unlike biomedicine,[n 1] an alternative medicine product or practice does not originate from the sciences or from using scientific methodology, but may instead
be based on testimonials, religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, fraud, or other unscientific sources.[n 3][1][3][14] The expression “alternative medicine” refers to a diverse range of related and unrelated products, practices, and theories, originating from widely varying sources, cultures, theories, and belief systems, and ranging from biologically plausible practices and products and practices with some evidence, to practices and theories that are directly contradicted by basic science or clear evidence, and products that have proven to be ineffective or even toxic and harmful.[n 4][4][5]

“Alternative medicine”, “complementary medicine”, “holistic medicine”, “natural medicine”, “unorthodox medicine”, “fringe medicine”, “unconventional medicine”, and “new age medicine” may be used interchangeably as having the same meaning (synonyms) in some contexts,[70][71][72] but may have different meanings in other contexts, for example, unorthodox medicine may refer to biomedicine that is different from what is commonly practiced, and fringe medicine may refer to biomedicine that is based on fringe science, which may be scientifically valid but is not mainstream.

The meaning of the term “alternative” in the expression “alternative medicine” is not that it is an actual effective alternative to medical science, although some alternative medicine promoters may use the loose terminology to give the appearance of effectiveness.[1] Marcia Angell stated that “alternative medicine” is “a new name for snake oil. There’s medicine that works and medicine that doesn’t work.”[73] Loose terminology may also be used to suggest that a dichotomy exists when it does not, e.g., the use of the expressions “western medicine” and “eastern medicine” to suggest that the difference is a cultural difference between the Asiatic east and the European west, rather than a difference between evidence-based medicine and treatments that don’t work.[1]

“Complementary medicine” refers to use of alternative medical treatments alongside conventional medicine, in the belief that it increases the effectiveness of the science-based medicine.[74][75][76] An example of “complementary medicine” is use of acupuncture (sticking needles in the body to influence the flow of a supernatural energy), along with using science-based medicine, in the belief that the acupuncture increases the effectiveness or “complements” the science-based medicine.[76] “CAM” is an abbreviation for “complementary and alternative medicine”.

The expression “integrative medicine” (or “integrated medicine”) is used in two different ways. One use refers to a belief that medicine based on science can be “integrated” with practices that are not. Another use refers only to a combination of alternative medical treatments with conventional treatments that have some scientific proof of efficacy, in which case it is identical with CAM.[19] “Holistic medicine” (or holistic health) is an alternative medicine practice that claims to treat the “whole person” and not just the illness itself.

“Traditional medicine” and “folk medicine” refer to prescientific practices of a culture, not to what is traditionally practiced in cultures where medical science dominates. “Eastern medicine” typically refers to prescientific traditional medicines of Asia. “Western medicine”, when referring to modern practice, typically refers to medical science, and not to alternative medicines practiced in the west (Europe and the Americas). “Western medicine”, “biomedicine”, “mainstream medicine”, “medical science”, “science-based medicine”, “evidence-based medicine”, “conventional medicine”, “standard medicine”, “orthodox medicine”, “allopathic medicine”, “dominant health system”, and “medicine”, are sometimes used interchangeably as having the same meaning, when contrasted with alternative medicine, but these terms may have different meanings in some contexts, e.g., some practices in medical science are not supported by rigorous scientific testing so “medical science” is not strictly identical with “science-based medicine”, and “standard medical care” may refer to “best practice” when contrasted with other biomedicine that is less used or less recommended.[n 11][79]

Prominent members of the science[27][80] and biomedical science community[21] assert that it is not meaningful to define an alternative medicine that is separate from a conventional medicine, and that the expressions “conventional medicine”, “alternative medicine”, “complementary medicine”, “integrative medicine”, and “holistic medicine” do not refer to anything at all.[21][27][80][81] Their criticisms of trying to make such artificial definitions include: “There’s no such thing as conventional or alternative or complementary or integrative or holistic medicine. There’s only medicine that works and medicine that doesn’t;”[21][27][80] “By definition, alternative medicine has either not been proved to work, or been proved not to work. You know what they call alternative medicine that’s been proved to work? Medicine;”[82] “There cannot be two kinds of medicine, conventional and alternative. There is only medicine that has been adequately tested and medicine that has not, medicine that works and medicine that may or may not work. Once a treatment has been tested rigorously, it no longer matters whether it was considered alternative at the outset. If it is found to be reasonably safe and effective, it will be accepted;”[21] and “There is no alternative medicine. There is only scientifically proven, evidence-based medicine supported by solid data or unproven medicine, for which scientific evidence is lacking.”[81]

Others in both the biomedical and CAM communities point out that CAM cannot be precisely defined because of the diversity of theories and practices it includes, and because the boundaries between CAM and biomedicine overlap, are porous, and change. The expression “complementary and alternative medicine” (CAM) resists easy definition because the health systems and practices to which it refers are diffuse and its boundaries are poorly defined.[4][n 12] Healthcare practices categorized as alternative may differ in their historical origin, theoretical basis, diagnostic technique, therapeutic practice, and in their relationship to the medical mainstream. Some alternative therapies, including traditional Chinese medicine (TCM) and Ayurveda, have ancient origins in East or South Asia and are entirely alternative medical systems;[87] others, such as homeopathy and chiropractic, have origins in Europe or the United States and emerged in the eighteenth and nineteenth centuries. Some, such as osteopathy and chiropractic, employ manipulative physical methods of treatment; others, such as meditation and prayer, are based on mind-body interventions. Treatments considered alternative in one location may be considered conventional in another.[90] Thus, chiropractic is not considered alternative in Denmark, and likewise osteopathic medicine is no longer thought of as an alternative therapy in the United States.[90]

One common feature of all definitions of alternative medicine is its designation as “other than” conventional medicine. For example, the widely referenced descriptive definition of complementary and alternative medicine devised by the US National Center for Complementary and Integrative Health (NCCIH) of the National Institutes of Health (NIH) states that it is “a group of diverse medical and health care systems, practices, and products that are not generally considered part of conventional medicine.”[7] For conventional medical practitioners, it does not necessarily follow that either it or its practitioners would no longer be considered alternative.[n 13]

Some definitions seek to specify alternative medicine in terms of its social and political marginality to mainstream healthcare.[95] This can refer to the lack of support that alternative therapies receive from the medical establishment and related bodies regarding access to research funding, sympathetic coverage in the medical press, or inclusion in the standard medical curriculum.[95] In 1993, the British Medical Association (BMA), one among many professional organizations that have attempted to define alternative medicine, stated that it[n 14] referred to “those forms of treatment which are not widely used by the conventional healthcare professions, and the skills of which are not taught as part of the undergraduate curriculum of conventional medical and paramedical healthcare courses”.[69] In a US context, an influential definition coined in 1993 by the Harvard-based physician David M. Eisenberg[96][97] characterized alternative medicine “as interventions neither taught widely in medical schools nor generally available in US hospitals”.[98] These descriptive definitions are inadequate in the present day, when some conventional doctors offer alternative medical treatments and introductory CAM courses or modules can be offered as part of standard undergraduate medical training;[99] alternative medicine is taught in more than half of US medical schools, and US health insurers are increasingly willing to provide reimbursement for CAM therapies. In 1999, 7.7% of US hospitals reported using some form of CAM therapy; this proportion had risen to 37.7% by 2008.[101]

An expert panel at a conference hosted in 1995 by the US Office for Alternative Medicine (OAM)[102][n 15] devised a theoretical definition[102] of alternative medicine as “a broad domain of healing resources… other than those intrinsic to the politically dominant health system of a particular society or culture in a given historical period.”[103] This definition has been widely adopted by CAM researchers,[102] cited by official government bodies such as the UK Department of Health,[104] attributed as the definition used by the Cochrane Collaboration,[105] and, with some modification, was preferred in the 2005 consensus report of the US Institute of Medicine, Complementary and Alternative Medicine in the United States.[n 4]

The 1995 OAM conference definition, an expansion of Eisenberg’s 1993 formulation, is silent regarding questions of the medical effectiveness of alternative therapies.[106] Its proponents hold that it thus avoids relativism about differing forms of medical knowledge and, while it is an essentially political definition, this should not imply that the dominance of mainstream biomedicine is solely due to political forces.[106] According to this definition, alternative and mainstream medicine can only be differentiated with reference to what is “intrinsic to the politically dominant health system of a particular society or culture”.[107] However, there is neither a reliable method to distinguish between cultures and subcultures, nor to attribute them as dominant or subordinate, nor any accepted criteria to determine the dominance of a cultural entity.[107] If the culture of a politically dominant healthcare system is held to be equivalent to the perspectives of those charged with the medical management of leading healthcare institutions and programs, the definition fails to recognize the potential for division either within such an elite or between a healthcare elite and the wider population.[107]

Normative definitions distinguish alternative medicine from the biomedical mainstream in its provision of therapies that are unproven, unvalidated or ineffective and support of theories which have no recognized scientific basis. These definitions characterize practices as constituting alternative medicine when, used independently or in place of evidence-based medicine, they are put forward as having the healing effects of medicine, but which are not based on evidence gathered with the scientific method.[7][14][21][74][75][109] Exemplifying this perspective, a 1998 editorial co-authored by Marcia Angell, a former editor of the New England Journal of Medicine, argued that:
“There cannot be two kinds of medicine, conventional and alternative. There is only medicine that has been adequately tested and medicine that has not, medicine that works and medicine that may or may not work. Once a treatment has been tested rigorously, it no longer matters whether it was considered alternative at the outset. If it is found to be reasonably safe and effective, it will be accepted.”[21]

This line of division has been subject to criticism, however, as not all forms of standard medical practice have adequately demonstrated evidence of benefit,[n 1][79] and it is also unlikely in most instances that conventional therapies, if proven to be ineffective, would ever be classified as CAM.[102]

Public information websites maintained by the governments of the US and of the UK make a distinction between “alternative medicine” and “complementary medicine”, but mention that these two overlap. The National Center for Complementary and Integrative Health (NCCIH) of the National Institutes of Health (NIH) (a part of the US Department of Health and Human Services) states that “alternative medicine” refers to using a non-mainstream approach in place of conventional medicine and that “complementary medicine” generally refers to using a non-mainstream approach together with conventional medicine, and comments that the boundaries between complementary and conventional medicine overlap and change with time.[7]

The National Health Service (NHS) website NHS Choices (owned by the UK Department of Health), adopting the terminology of NCCIH, states that when a treatment is used alongside conventional treatments, to help a patient cope with a health condition, and not as an alternative to conventional treatment, this use of treatments can be called “complementary medicine”; but when a treatment is used instead of conventional medicine, with the intention of treating or curing a health condition, the use can be called “alternative medicine”.[111]

Similarly, the public information website maintained by the National Health and Medical Research Council (NHMRC) of the Commonwealth of Australia uses the acronym “CAM” for a wide range of health care practices, therapies, procedures and devices not within the domain of conventional medicine. In the Australian context this is stated to include acupuncture; aromatherapy; chiropractic; homeopathy; massage; meditation and relaxation therapies; naturopathy; osteopathy; reflexology, traditional Chinese medicine; and the use of vitamin supplements.[112]

The Danish National Board of Health’s “Council for Alternative Medicine” (Sundhedsstyrelsens Råd for Alternativ Behandling (SRAB)), an independent institution under the National Board of Health (Danish: Sundhedsstyrelsen), uses the term “alternative medicine” for:

In General Guidelines for Methodologies on Research and Evaluation of Traditional Medicine, published in 2000 by the World Health Organization (WHO), complementary and alternative medicine were defined as a broad set of health care practices that are not part of that country’s own tradition and are not integrated into the dominant health care system.[114] Some herbal therapies are mainstream in Europe but are alternative in the US.[116]

The history of alternative medicine may refer to the history of a group of diverse medical practices that were collectively promoted as “alternative medicine” beginning in the 1970s, to the collection of individual histories of members of that group, or to the history of western medical practices that were labeled “irregular practices” by the western medical establishment.[1][117][118][119][120] It includes the histories of complementary medicine and of integrative medicine. Before the 1970s, western practitioners who were not part of the increasingly science-based medical establishment were referred to as “irregular practitioners”, and were dismissed by the medical establishment as unscientific and as practicing quackery.[117][118] Until the 1970s, irregular practice became increasingly marginalized as quackery and fraud, as western medicine increasingly incorporated scientific methods and discoveries and saw a corresponding increase in the success of its treatments.[119] In the 1970s, irregular practices were grouped with traditional practices of nonwestern cultures and with other unproven or disproven practices that were not part of biomedicine, with the entire group collectively marketed and promoted under the single expression “alternative medicine”.[1][117][118][119][121]

Use of alternative medicine in the west began to rise following the counterculture movement of the 1960s, as part of the rising new age movement of the 1970s.[1][122][123] This was due to misleading mass marketing of “alternative medicine” as an effective “alternative” to biomedicine, changing social attitudes about not using chemicals and challenging the establishment and authority of any kind, sensitivity to giving equal measure to the beliefs and practices of other cultures (cultural relativism), and growing frustration and desperation among patients about the limitations and side effects of science-based medicine.[1][118][119][120][121][123][124] At the same time, in 1975, the American Medical Association, which played the central role in fighting quackery in the United States, abolished its quackery committee and closed down its Department of Investigation.[117]:xxi[124] By the early to mid 1970s the expression “alternative medicine” came into widespread use, and the expression became mass marketed as a collection of “natural” and effective treatment “alternatives” to science-based biomedicine.[1][124][125][126] By 1983, mass marketing of “alternative medicine” was so pervasive that the British Medical Journal (BMJ) observed that “an apparently endless stream of books, articles, and radio and television programmes urge on the public the virtues of (alternative medicine) treatments ranging from meditation to drilling a hole in the skull to let in more oxygen”.[124] In this 1983 article, the BMJ wrote, “one of the few growth industries in contemporary Britain is alternative medicine”, noting that by 1983, “33% of patients with rheumatoid arthritis and 39% of those with backache admitted to having consulted an alternative practitioner”.[124]

By about 1990, the American alternative medicine industry had grown to $27 billion per year, with polls showing 30% of Americans using it.[123][127] Moreover, polls showed that Americans made more visits for alternative therapies than the total number of visits to primary care doctors, and American out-of-pocket spending (non-insurance spending) on alternative medicine was about equal to spending on biomedical doctors.[117]:172 In 1991, Time magazine ran a cover story, “The New Age of Alternative Medicine: Why New Age Medicine Is Catching On”.[123][127] In 1993, the New England Journal of Medicine reported one in three Americans as using alternative medicine.[123] In 1993, the Public Broadcasting System ran a Bill Moyers special, Healing and the Mind, with Moyers commenting that “…people by the tens of millions are using alternative medicine. If established medicine does not understand that, they are going to lose their clients.”[123]

Another period of explosive growth began in the 1990s, when senior-level political figures began promoting alternative medicine, investing large sums of government medical research funds into testing alternative medicine, including testing of scientifically implausible treatments, and relaxing government regulation of alternative medicine products as compared to biomedical products.[1][117]:xxi[118][119][120][121][128][129] Beginning with a 1991 appropriation of $2 million for alternative medicine research, federal spending grew to a cumulative total of about $2.5 billion by 2009, with 50% of Americans using alternative medicine by 2013.[11][130]

In 1991, pointing to a need for testing because of the widespread use of alternative medicine without authoritative information on its efficacy, United States Senator Tom Harkin used $2 million of his discretionary funds to create the Office for the Study of Unconventional Medical Practices (OSUMP), later renamed the Office of Alternative Medicine (OAM).[117]:170[131][132] The OAM was created within the National Institutes of Health (NIH), the scientifically prestigious primary agency of the United States government responsible for biomedical and health-related research.[117]:170[131][132] Its mandate was to investigate, evaluate, and validate effective alternative medicine treatments, and to alert the public to the results of its efficacy testing.[127][131][132][133]

Sen. Harkin had become convinced his allergies were cured by taking bee pollen pills, and was urged to make the spending by two of his influential constituents, Bedell and Wiewel.[127][131][132] Bedell, a longtime friend of Sen. Harkin, was a former member of the United States House of Representatives who believed that alternative medicine had twice cured him of diseases after mainstream medicine had failed, claiming that cow’s milk colostrum cured his Lyme disease and that an herbal derivative of camphor had prevented post-surgical recurrence of his prostate cancer.[117][127] Wiewel was a promoter of unproven cancer treatments involving a mixture of blood sera that the Food and Drug Administration had banned from being imported.[127] Both Bedell and Wiewel became members of the advisory panel for the OAM. The company that sold the bee pollen was later fined by the Federal Trade Commission for making false health claims that its bee-pollen products reversed the aging process, cured allergies, and helped with weight loss.[134]

In 1993, Britain’s Prince Charles, who claimed that homeopathy and other alternative medicine was an effective alternative to biomedicine, established the Foundation for Integrated Health (FIH), as a charity to explore “how safe, proven complementary therapies can work in conjunction with mainstream medicine”.[135] The FIH received government funding through grants from Britain’s Department of Health.[135]

In 1994, Sen. Harkin (D) and Senator Orrin Hatch (R) introduced the Dietary Supplement Health and Education Act (DSHEA).[136][137] The act reduced the authority of the FDA to monitor products sold as “natural” treatments.[136] Labeling standards were reduced to allow health claims for supplements based only on unconfirmed preliminary studies that had not been subjected to scientific peer review, and the act made it more difficult for the FDA to promptly seize products or demand proof of safety where there was evidence of a product being dangerous.[137] The act became known as “The 1993 Snake Oil Protection Act” following a New York Times editorial under that name.[136]

Senator Harkin complained about the “unbendable rules of randomized clinical trials”, citing his use of bee pollen to treat his allergies, which he claimed was effective even though it was biologically implausible and its efficacy was never established using scientific methods.[131][138] Sen. Harkin asserted that claims of alternative medicine efficacy should be allowed without conventional scientific testing, even when the treatments are biologically implausible: “It is not necessary for the scientific community to understand the process before the American public can benefit from these therapies.”[136] Following passage of the act, sales rose from about $4 billion in 1994 to $20 billion by the end of 2000, at the same time as evidence of their lack of efficacy or harmful effects grew.[136] Senator Harkin came into open public conflict with the first OAM Director, Joseph M. Jacobs, and OAM board members from the scientific and biomedical community.[132] Jacobs’ insistence on rigorous scientific methodology caused friction with Senator Harkin.[131][138][139] Dr. Jacobs publicly criticized the increasing political resistance to the use of scientific methodology, and another OAM board member complained that “nonsense has trickled down to every aspect of this office”.[131][138] In 1994, Senator Harkin appeared on television with cancer patients who blamed Dr. Jacobs for blocking their access to untested cancer treatment, leading Jacobs to resign in frustration.[131][138]

In 1995, Wayne Jonas, a promoter of homeopathy and political ally of Senator Harkin, became the director of the OAM, and continued in that role until 1999.[140] In 1997, the NCCAM budget was increased from $12 million to $20 million annually.[141] From 1990 to 1997, use of alternative medicine in the US increased by 25%, with a corresponding 50% increase in expenditures.[142] The OAM drew increasing criticism from eminent members of the scientific community, with letters to the Senate Appropriations Committee when discussion of renewal of OAM funding came up.[117]:175 Nobel laureate Paul Berg wrote that the prestigious NIH should not be degraded to act as a cover for quackery, calling the OAM “an embarrassment to serious scientists”.[117]:175[141] The president of the American Physical Society wrote complaining that the government was spending money on testing products and practices that “violate basic laws of physics and more clearly resemble witchcraft”.[117]:175[141] In 1998, the President of the North Carolina Medical Association publicly called for shutting down the OAM.[143]

In 1998, NIH director and Nobel laureate Harold Varmus came into conflict with Senator Harkin by pushing for more NIH control of alternative medicine research.[144] The NIH Director placed the OAM under stricter scientific NIH control.[141][144] Senator Harkin responded by elevating the OAM into an independent NIH “center”, just short of being its own “institute”, renamed the National Center for Complementary and Alternative Medicine (NCCAM). NCCAM had a mandate to promote a more rigorous and scientific approach to the study of alternative medicine, research training and career development, outreach, and “integration”. In 1999, the NCCAM budget was increased from $20 million to $50 million.[143][144] The United States Congress approved the appropriations without dissent. In 2000, the budget was increased to about $68 million, in 2001 to $90 million, in 2002 to $104 million, and in 2003 to $113 million.[143]

In 2004, modifications of the European Parliament’s 2001 Directive 2001/83/EC, regulating all medicine products, were made with the expectation of influencing development of the European market for alternative medicine products.[145] Regulation of alternative medicine in Europe was loosened with “a simplified registration procedure” for traditional herbal medicinal products.[145][146] Plausible “efficacy” for traditional medicine was redefined to be based on long-term popularity and testimonials (“the pharmacological effects or efficacy of the medicinal product are plausible on the basis of long-standing use and experience”), without scientific testing.[145][146] The Committee on Herbal Medicinal Products (HMPC) was created within the European Medicines Agency (EMEA) in London. A special working group was established for homeopathic remedies under the Heads of Medicines Agencies.[145]

Through 2004, alternative medicine that was traditional to Germany continued to be a regular part of the health care system, including homeopathy and anthroposophic medicine.[145] The German Medicines Act mandated that science-based medical authorities consider the “particular characteristics” of complementary and alternative medicines.[145] By 2004, homeopathy had grown to be the most used alternative therapy in France, growing from 16% of the population using homeopathic medicine in 1982, to 29% by 1987 and 36% by 1992, with 62% of French mothers using homeopathic medicines by 2004 and 94.5% of French pharmacists advising pregnant women to use homeopathic remedies.[147] As of 2004[update], 100 million people in India depended solely on traditional German homeopathic remedies for their medical care.[148] As of 2010[update], homeopathic remedies continued to be the leading alternative treatment used by European physicians.[147] By 2005, sales of homeopathic remedies and anthroposophical medicine had grown to 930 million euros, a 60% increase from 1995.[147][149]

In 2008, London’s The Times published a letter from Edzard Ernst asking the FIH to recall two guides promoting alternative medicine, saying: “the majority of alternative therapies appear to be clinically ineffective, and many are downright dangerous.” In 2010, Britain’s FIH closed after allegations of fraud and money laundering led to arrests of its officials.[135]

In 2009, after 17 years of government testing and nearly $2.5 billion in research spending had produced almost no clearly proven efficacy of alternative therapies, Senator Harkin complained, “One of the purposes of this center was to investigate and validate alternative approaches. Quite frankly, I must say publicly that it has fallen short. I think quite frankly that in this center and in the office previously before it, most of its focus has been on disproving things rather than seeking out and approving.”[144][150][151] Members of the scientific community criticized this comment as showing Senator Harkin did not understand the basics of scientific inquiry, which tests hypotheses but never intentionally attempts to “validate approaches”.[144] Members of the scientific and biomedical communities also complained that after 17 years of testing scientifically and biologically implausible practices, at a cost of over $2.5 billion, almost no alternative therapy had shown clear efficacy.[11] In 2009, the NCCAM budget was increased to about $122 million.[144] Overall NIH funding for CAM research increased to $300 million by 2009.[144] By 2009, Americans were spending $34 billion annually on CAM.[152]

Since 2009, Article 118a of the Swiss Federal Constitution has required that the Swiss Confederation and the Cantons of Switzerland, within the scope of their powers, ensure that consideration is given to complementary medicine.[153]

In 2012, the Journal of the American Medical Association (JAMA) published a criticism that study after study had been funded by NCCAM but “failed to prove that complementary or alternative therapies are anything more than placebos”.[154] The criticism pointed to large amounts of research money wasted on testing scientifically implausible treatments, citing “NCCAM officials spending $374,000 to find that inhaling lemon and lavender scents does not promote wound healing; $750,000 to find that prayer does not cure AIDS or hasten recovery from breast-reconstruction surgery; $390,000 to find that ancient Indian remedies do not control type 2 diabetes; $700,000 to find that magnets do not treat arthritis, carpal tunnel syndrome, or migraine headaches; and $406,000 to find that coffee enemas do not cure pancreatic cancer.”[154] It was pointed out that negative results from testing were generally ignored by the public, and that people continue to “believe what they want to believe, arguing that it does not matter what the data show: They know what works for them”.[154] Continued increasing use of CAM products was also blamed on the FDA’s limited ability to regulate alternative products: negative studies do not result in FDA warnings or FDA-mandated label changes, so few consumers are aware that the claims of many supplements have been found not to be supported.[154]

By 2013, 50% of Americans were using CAM.[130] As of 2013[update], CAM medicinal products in Europe continued to be exempted from documented efficacy standards required of other medicinal products.[155]

In 2014 the NCCAM was renamed the National Center for Complementary and Integrative Health (NCCIH), with a new charter requiring that 12 of the 18 council members be selected with a preference for leading representatives of complementary and alternative medicine, that 9 members be licensed practitioners of alternative medicine, that 6 be general public leaders in the fields of public policy, law, health policy, economics, and management, and that 3 represent the interests of individual consumers of complementary and alternative medicine.[156]

Much of what is now categorized as alternative medicine was developed as independent, complete medical systems. These were developed long before biomedicine and use of scientific methods. Each system was developed in relatively isolated regions of the world where there was little or no medical contact with pre-scientific western medicine, or with each other’s systems. Examples are traditional Chinese medicine and the Ayurvedic medicine of India.

Other alternative medicine practices, such as homeopathy, were developed in western Europe and in opposition to western medicine, at a time when western medicine was based on unscientific theories that were dogmatically imposed by western religious authorities. Homeopathy was developed prior to discovery of the basic principles of chemistry, which proved homeopathic remedies contained nothing but water. But homeopathy, with its remedies made of water, was harmless compared to the unscientific and dangerous orthodox western medicine practiced at that time, which included use of toxins and draining of blood, often resulting in permanent disfigurement or death.[118]
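The chemical argument against homeopathy can be made concrete with a short calculation. The "30C" potency used below is a common homeopathic dilution chosen for illustration; the specific potency is not given in the text.

```latex
% A 30C remedy is diluted 1:100 thirty times, leaving a fraction
\left(\tfrac{1}{100}\right)^{30} = 10^{-60}
% of the original substance. Even a full mole of active ingredient
% contains only Avogadro's number of molecules,
N_A \approx 6.022 \times 10^{23},
% so the expected number of molecules remaining in the remedy is
10^{-60} \times 6.022 \times 10^{23} \approx 6 \times 10^{-37} \ll 1.
```

Once Avogadro's number was established, it followed that a typical high-potency remedy is overwhelmingly unlikely to contain even a single molecule of the original substance, hence "nothing but water".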

Other alternative practices, such as chiropractic and osteopathic manipulative medicine, were developed in the United States at a time when western medicine was beginning to incorporate scientific methods and theories, but the biomedical model was not yet totally dominant. Practices such as chiropractic and osteopathy, each considered irregular by the western medical establishment, also opposed each other, both rhetorically and politically, with licensing legislation. Osteopathic practitioners added the courses and training of biomedicine to their licensing, and licensed Doctors of Osteopathic Medicine gradually abandoned the field’s unscientific origins. Stripped of its original nonscientific practices and theories, osteopathic medicine is now considered the same as biomedicine.

Further information: Rise of modern medicine

Until the 1970s, western practitioners who were not part of the medical establishment were referred to as “irregular practitioners” and were dismissed by the medical establishment as unscientific and as practicing quackery.[118] Irregular practice became increasingly marginalized as quackery and fraud as western medicine incorporated more scientific methods and discoveries, with a corresponding increase in the success of its treatments.

Dating from the 1970s, medical professionals, sociologists, anthropologists and other commentators noted the increasing visibility of a wide variety of health practices that had neither derived directly from nor been verified by biomedical science.[157] Since that time, those who have analyzed this trend have deliberated over the most apt language with which to describe this emergent health field.[157] A variety of terms have been used, including heterodox, irregular, fringe and alternative medicine while others, particularly medical commentators, have been satisfied to label them as instances of quackery.[157] The most persistent term has been alternative medicine but its use is problematic as it assumes a value-laden dichotomy between a medical fringe, implicitly of borderline acceptability at best, and a privileged medical orthodoxy, associated with validated medico-scientific norms.[158] The use of the category of alternative medicine has also been criticized as it cannot be studied as an independent entity but must be understood in terms of a regionally and temporally specific medical orthodoxy.[159] Its use can also be misleading as it may erroneously imply that a real medical alternative exists.[160] As with near-synonymous expressions, such as unorthodox, complementary, marginal, or quackery, these linguistic devices have served, in the context of processes of professionalisation and market competition, to establish the authority of official medicine and police the boundary between it and its unconventional rivals.[158]

An early instance of the influence of this modern, or western, scientific medicine outside Europe and North America is Peking Union Medical College.[161][n 16][n 17]

From a historical perspective, the emergence of alternative medicine, if not the term itself, is typically dated to the 19th century.[162] This is despite the fact that there are variants of Western non-conventional medicine that arose in the late-eighteenth century or earlier and some non-Western medical traditions, currently considered alternative in the West and elsewhere, which boast extended historical pedigrees.[158] Alternative medical systems, however, can only be said to exist when there is an identifiable, regularized and authoritative standard medical practice, such as arose in the West during the nineteenth century, to which they can function as an alternative.

During the late eighteenth and nineteenth centuries regular and irregular medical practitioners became more clearly differentiated throughout much of Europe and,[164] as the nineteenth century progressed, most Western states converged in the creation of legally delimited and semi-protected medical markets.[165] It is at this point that an “official” medicine, created in cooperation with the state and employing a scientific rhetoric of legitimacy, emerges as a recognizable entity and that the concept of alternative medicine as a historical category becomes tenable.[166]

As part of this process, professional adherents of mainstream medicine in countries such as Germany, France, and Britain increasingly invoked the scientific basis of their discipline as a means of engendering internal professional unity and of external differentiation in the face of sustained market competition from homeopaths, naturopaths, mesmerists and other nonconventional medical practitioners, finally achieving a degree of imperfect dominance through alliance with the state and the passage of regulatory legislation.[158][160] In the US the Johns Hopkins University School of Medicine, based in Baltimore, Maryland, opened in 1893, with William H. Welch and William Osler among the founding physicians, and was the first medical school devoted to teaching “German scientific medicine”.[167]

Buttressed by the increased authority arising from significant advances in the medical sciences of the late 19th century onwards, including the development and application of the germ theory of disease by the chemist Louis Pasteur and the surgeon Joseph Lister, of microbiology co-founded by Robert Koch (in 1885 appointed professor of hygiene at the University of Berlin), and of the use of X-rays (Röntgen rays), the 1910 Flexner Report called upon American medical schools to follow the model set by the Johns Hopkins School of Medicine and adhere to mainstream science in their teaching and research. This was in a belief, mentioned in the Report’s introduction, that the preliminary and professional training then prevailing in medical schools should be reformed in view of the new means for diagnosing and combating disease being made available to physicians and surgeons by the sciences on which medicine depended.[n 18][169]

Among putative medical practices available at the time which later became known as “alternative medicine” were homeopathy (founded in Germany in the early 19th century) and chiropractic (founded in North America in the late 19th century). These conflicted in principle with the developments in medical science upon which the Flexner reforms were based, and they have not become compatible with further advances of medical science such as those listed in the Timeline of medicine and medical technology, 1900–1999 and 2000–present; nor have Ayurveda, acupuncture or other kinds of alternative medicine.[citation needed]

At the same time “Tropical medicine” was being developed as a specialist branch of western medicine in research establishments such as Liverpool School of Tropical Medicine founded in 1898 by Alfred Lewis Jones, London School of Hygiene & Tropical Medicine, founded in 1899 by Patrick Manson and Tulane University School of Public Health and Tropical Medicine, instituted in 1912. A distinction was being made between western scientific medicine and indigenous systems. An example is given by an official report about indigenous systems of medicine in India, including Ayurveda, submitted by Mohammad Usman of Madras and others in 1923. This stated that the first question the Committee considered was “to decide whether the indigenous systems of medicine were scientific or not”.[170][171]

By the later twentieth century the term ‘alternative medicine’ entered public discourse,[n 19][174] but it was not always being used with the same meaning by all parties. Arnold S. Relman remarked in 1998 that in the best kind of medical practice, all proposed treatments must be tested objectively, and that in the end there will only be treatments that pass and those that do not, those that are proven worthwhile and those that are not. He asked ‘Can there be any reasonable “alternative”?'[28] But also in 1998 the then Surgeon General of the United States, David Satcher,[175] issued public information about eight common alternative treatments (including acupuncture, holistic and massage), together with information about common diseases and conditions, on nutrition, diet, and lifestyle changes, and about helping consumers to decipher fraud and quackery, and to find healthcare centers and doctors who practiced alternative medicine.[176]

By 1990, approximately 60 million Americans had used one or more complementary or alternative therapies to address health issues, according to a nationwide survey in the US published in 1993 by David Eisenberg.[177] A study published in the November 11, 1998 issue of the Journal of the American Medical Association reported that 42% of Americans had used complementary and alternative therapies, up from 34% in 1990.[142] However, despite the growth in patient demand for complementary medicine, most of the early alternative/complementary medical centers failed.[178]

Mainly as a result of reforms following the Flexner Report of 1910,[179] medical education in established medical schools in the US has generally not included alternative medicine as a teaching topic.[n 20] Typically, their teaching is based on current practice and scientific knowledge about: anatomy, physiology, histology, embryology, neuroanatomy, pathology, pharmacology, microbiology and immunology.[181] Medical schools’ teaching includes such topics as doctor-patient communication, ethics, the art of medicine,[182] and engaging in complex clinical reasoning (medical decision-making).[183] Writing in 2002, Snyderman and Weil remarked that by the early twentieth century the Flexner model had helped to create the 20th-century academic health center in which education, research and practice were inseparable. While this had much improved medical practice by defining with increasing certainty the pathophysiological basis of disease, a single-minded focus on the pathophysiological had diverted much of mainstream American medicine from clinical conditions which were not well understood in mechanistic terms and were not effectively treated by conventional therapies.[184]

By 2001 some form of CAM training was being offered by at least 75 out of 125 medical schools in the US.[185] Exceptionally, the School of Medicine of the University of Maryland, Baltimore includes a research institute for integrative medicine (a member entity of the Cochrane Collaboration).[186][187] Medical schools are responsible for conferring medical degrees, but a physician typically may not legally practice medicine until licensed by the local government authority. Licensed physicians in the US who have attended one of the established medical schools there have usually graduated Doctor of Medicine (MD).[188] All states require that applicants for MD licensure be graduates of an approved medical school and complete the United States Medical Licensing Exam (USMLE).[188]

The British Medical Association, in its publication Complementary Medicine, New Approach to Good Practice (1993), gave as a working definition of non-conventional therapies (including acupuncture, chiropractic and homeopathy): “those forms of treatment which are not widely used by the orthodox health-care professions, and the skills of which are not part of the undergraduate curriculum of orthodox medical and paramedical health-care courses”. By 2000 some medical schools in the UK were offering CAM familiarisation courses to undergraduate medical students while some were also offering modules specifically on CAM.[190]

The Cochrane Collaboration Complementary Medicine Field explains its “Scope and Topics” by giving a broad and general definition for complementary medicine as including practices and ideas which are outside the domain of conventional medicine in several countries and defined by its users as preventing or treating illness, or promoting health and well being, and which complement mainstream medicine in three ways: by contributing to a common whole, by satisfying a demand not met by conventional practices, and by diversifying the conceptual framework of medicine.[191]

Proponents of an evidence-base for medicine[n 21][193][194][195][196] such as the Cochrane Collaboration (founded in 1993 and from 2011 providing input for WHO resolutions) take a position that all systematic reviews of treatments, whether “mainstream” or “alternative”, ought to be held to the current standards of scientific method.[187] In a study titled Development and classification of an operational definition of complementary and alternative medicine for the Cochrane Collaboration (2011) it was proposed that indicators that a therapy is accepted include government licensing of practitioners, coverage by health insurance, statements of approval by government agencies, and recommendation as part of a practice guideline; and that if something is currently a standard, accepted therapy, then it is not likely to be widely considered as CAM.[102]

That alternative medicine has been on the rise “in countries where Western science and scientific method generally are accepted as the major foundations for healthcare, and ‘evidence-based’ practice is the dominant paradigm” was described as an “enigma” in the Medical Journal of Australia.[197]

Critics in the US say the expression is deceptive because it implies there is an effective alternative to science-based medicine, and that complementary is deceptive because the word implies that the treatment increases the effectiveness of (complements) science-based medicine, while alternative medicines which have been tested nearly always have no measurable positive effect compared to a placebo.[1][198][199][200]

Some opponents, focused upon health fraud, misinformation, and quackery as public health problems in the US, are highly critical of alternative medicine, notably Wallace Sampson and Paul Kurtz founders of Scientific Review of Alternative Medicine and Stephen Barrett, co-founder of The National Council Against Health Fraud and webmaster of Quackwatch.[201] Grounds for opposing alternative medicine which have been stated in the US and elsewhere are:

Paul Offit has proposed four ways that “alternative medicine becomes quackery”:[80]

A United States government agency, the National Center for Complementary and Integrative Health (NCCIH), has created its own classification system for branches of complementary and alternative medicine. It classifies complementary and alternative therapies into five major groups, which have some overlap, and distinguishes two types of energy medicine: “veritable” energy medicine, involving scientifically observable energy, including magnet therapy, colorpuncture and light therapy; and “putative” energy medicine, which invokes physically undetectable or unverifiable energy.[210]

Alternative medicine practices and beliefs are diverse in their foundations and methodologies. The wide range of treatments and practices referred to as alternative medicine includes some stemming from nineteenth century North America, such as chiropractic and naturopathy, others, mentioned by Jütte, that originated in eighteenth- and nineteenth-century Germany, such as homeopathy and hydropathy,[160] and some that have originated in China or India, while African, Caribbean, Pacific Island, Native American, and other regional cultures have traditional medical systems as diverse as their diversity of cultures.[7]

Examples of CAM as a broader term for unorthodox treatment and diagnosis of illnesses, diseases, infections, etc.,[211] include yoga, acupuncture, aromatherapy, chiropractic, herbalism, homeopathy, hypnotherapy, massage, osteopathy, reflexology, relaxation therapies, spiritual healing and tai chi.[211] CAM differs from conventional medicine: it is normally private, not covered by health insurance, and paid for out of pocket by the patient, making it an expensive form of treatment.[211] CAM tends to be used by upper-class or more educated people.[142]

The NCCIH classification system is as follows:

Alternative therapies based on electricity or magnetism use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields in an unconventional manner rather than claiming the existence of imponderable or supernatural energies.[7]

Substance-based practices use substances found in nature, such as herbs, foods, non-vitamin supplements and megavitamins, and minerals, and include traditional herbal remedies using herbs specific to the regions in which the cultural practices arose.[7] Non-vitamin supplements include fish oil, omega-3 fatty acids, glucosamine, echinacea, flaxseed oil or pills, and ginseng, when used under a claim of healing effects.[61]

Mind-body interventions, working under the premise that the mind can affect “bodily functions and symptoms”,[7] include healing claims made in hypnotherapy,[212] and in guided imagery, meditation, progressive relaxation, qi gong, tai chi and yoga.[7] Meditation practices including mantra meditation, mindfulness meditation, yoga, tai chi, and qi gong have many uncertainties. According to an AHRQ review, the available evidence on meditation practices through September 2005 is of poor methodological quality and definite conclusions on the effects of meditation in healthcare cannot be made using existing research.[213][214]

Naturopathy is based on a belief in vitalism, which posits that a special energy called vital energy or vital force guides bodily processes such as metabolism, reproduction, growth, and adaptation.[36] The term was coined in 1895[215] by John Scheel and popularized by Benedict Lust, the “father of U.S. naturopathy”.[216] Today, naturopathy is primarily practiced in the United States and Canada.[217] Naturopaths in unregulated jurisdictions may use the Naturopathic Doctor designation or other titles regardless of level of education.[218]

Here is the original post:

Alternative medicine – Wikipedia, the free encyclopedia


Space tourism – Wikipedia, the free encyclopedia

Posted: July 8, 2016 at 7:53 am

This article is about paying space travellers. For other commercial spacefarers, see Commercial astronaut.

Space tourism is space travel for recreational, leisure or business purposes. A number of startup companies have sprung up in recent years, such as Virgin Galactic and XCOR Aerospace, hoping to create a sub-orbital space tourism industry. Orbital space tourism opportunities have been limited and expensive, with only the Russian Space Agency providing transport to date.

The publicized price for flights brokered by Space Adventures to the International Space Station aboard a Russian Soyuz spacecraft has been US$20–40 million, during the period 2001–2009 when 7 space tourists made 8 space flights. Some space tourists have signed contracts with third parties to conduct certain research activities while in orbit.

Russia halted orbital space tourism in 2010 due to the increase in the International Space Station crew size, using the seats for expedition crews that would have been sold to paying spaceflight participants.[1][2] Orbital tourist flights are planned to resume in 2015.[3]

As an alternative term to “tourism”, some organizations such as the Commercial Spaceflight Federation use the term “personal spaceflight”. The Citizens in Space project uses the term “citizen space exploration”.[4]

As of September 2012[update], multiple companies are offering sales of orbital and suborbital flights, with varying durations and creature comforts.[5]

The Soviet space program was aggressive in broadening the pool of cosmonauts. The Soviet Intercosmos program included cosmonauts selected from Warsaw Pact members (from Czechoslovakia, Poland, East Germany, Bulgaria, Hungary, Romania) and later from allies of the USSR (Cuba, Mongolia, Vietnam) and non-aligned countries (India, Syria, Afghanistan). Most of these cosmonauts received full training for their missions and were treated as equals, but especially after the Mir program began, were generally given shorter flights than Soviet cosmonauts. The European Space Agency (ESA) took advantage of the program as well.

The U.S. space shuttle program included payload specialist positions which were usually filled by representatives of companies or institutions managing a specific payload on that mission. These payload specialists did not receive the same training as professional NASA astronauts and were not employed by NASA. In 1983, Ulf Merbold from ESA and Byron Lichtenberg from MIT (engineer and Air Force fighter pilot) were the first payload specialists to fly on the Space Shuttle, on mission STS-9.[6][7]

In 1984, Charles D. Walker became the first non-government astronaut to fly, with his employer McDonnell Douglas paying $40,000 for his flight.[8]:74–75 NASA was also eager to prove its capability to Congressional sponsors. Senator Jake Garn was flown on the Shuttle in 1985,[9] followed by Representative Bill Nelson in 1986.[10]

During the 1970s, Shuttle prime contractor Rockwell International studied a $200–300 million removable cabin that could fit into the Shuttle’s cargo bay. The cabin could carry up to 74 passengers into orbit for up to three days. Space Habitation Design Associates proposed, in 1983, a cabin for 72 passengers in the bay. Passengers were located in six sections, each with windows and its own loading ramp, and with seats in different configurations for launch and landing. Another proposal was based on the Spacelab habitation modules, which provided 32 seats in the payload bay in addition to those in the cockpit area. A 1985 presentation to the National Space Society stated that although flying tourists in the cabin would cost $1 to 1.5 million per passenger without government subsidy, within 15 years 30,000 people a year would pay $25,000 each to fly in space on new spacecraft. The presentation also forecast flights to lunar orbit within 30 years and visits to the lunar surface within 50 years.[11]

As the shuttle program expanded in the early 1980s, NASA began a Space Flight Participant program to allow citizens without scientific or governmental roles to fly. Christa McAuliffe was chosen as the first Teacher in Space in July 1985 from 11,400 applicants. 1,700 applied for the Journalist in Space program, including Walter Cronkite, Tom Brokaw, Tom Wolfe, and Sam Donaldson. An Artist in Space program was considered, and NASA expected that after McAuliffe’s flight two to three civilians a year would fly on the shuttle.[8] After McAuliffe was killed in the Challenger disaster in January 1986 the programs were canceled. McAuliffe’s backup, Barbara Morgan, eventually got hired in 1998 as a professional astronaut and flew on STS-118 as a mission specialist.[8]:84–85 A second journalist-in-space program, in which NASA green-lighted Miles O’Brien to fly on the space shuttle, was scheduled to be announced in 2003. That program was canceled in the wake of the Columbia disaster on STS-107 and subsequent emphasis on finishing the International Space Station before retiring the space shuttle.

With the realities of the post-Perestroika economy in Russia, its space industry was especially starved for cash. The Tokyo Broadcasting System (TBS) offered to pay for one of its reporters to fly on a mission. For $28 million, Toyohiro Akiyama was flown in 1990 to Mir with the eighth crew and returned a week later with the seventh crew. Akiyama gave a daily TV broadcast from orbit and also performed scientific experiments for Russian and Japanese companies. However, since the cost of the flight was paid by his employer, Akiyama could be considered a business traveler rather than a tourist.

In 1991, British chemist Helen Sharman was selected from a pool of 13,000 applicants to be the first Briton in space.[12] The program was known as Project Juno and was a cooperative arrangement between the Soviet Union and a group of British companies. The Project Juno consortium failed to raise the funds required, and the program was almost cancelled. Reportedly Mikhail Gorbachev ordered it to proceed under Soviet expense in the interests of international relations, but in the absence of Western underwriting, less expensive experiments were substituted for those in the original plans. Sharman flew aboard Soyuz TM-12 to Mir and returned aboard Soyuz TM-11.

At the end of the 1990s, MirCorp, a private venture that was by then in charge of the space station, began seeking potential space tourists to visit Mir in order to offset some of its maintenance costs. Dennis Tito, an American businessman and former JPL scientist, became their first candidate. When the decision to de-orbit Mir was made, Tito managed to switch his trip to the International Space Station (ISS) through a deal between MirCorp and U.S.-based Space Adventures, Ltd., despite strong opposition from senior figures at NASA; from the beginning of the ISS expeditions, NASA had stated it was not interested in space guests.[13] Nonetheless, Dennis Tito visited the ISS on April 28, 2001, and stayed for seven days, becoming the first “fee-paying” space tourist. He was followed in 2002 by South African computer millionaire Mark Shuttleworth. The third was Gregory Olsen in 2005, who was trained as a scientist and whose company produced specialist high-sensitivity cameras. Olsen planned to use his time on the ISS to conduct a number of experiments, in part to test his company’s products. Olsen had planned an earlier flight, but had to cancel for health reasons. A hearing of the Subcommittee on Space and Aeronautics of the House Committee on Science, held on June 26, 2001, reveals the shifting attitude of NASA towards paying space tourists wanting to travel to the ISS. The hearing’s purpose was to “review the issues and opportunities for flying nonprofessional astronauts in space, the appropriate government role for supporting the nascent space tourism industry, use of the Shuttle and Space Station for tourism, safety and training criteria for space tourists, and the potential commercial market for space tourism”.[14] The subcommittee report was interested in evaluating Dennis Tito’s extensive training and his experience in space as a nonprofessional astronaut.

By 2007, space tourism was thought to be one of the earliest markets that would emerge for commercial spaceflight.[15]:11 However, as of 2014 this market has not emerged to any significant extent.

Space Adventures remains the only company to have sent paying passengers to space.[16][17] In conjunction with the Federal Space Agency of the Russian Federation and Rocket and Space Corporation Energia, Space Adventures facilitated the flights for all of the world’s first private space explorers. The first three participants paid in excess of $20 million (USD) each for their 10-day visit to the ISS.

After the Columbia disaster, space tourism on the Russian Soyuz program was temporarily put on hold, because Soyuz vehicles became the only available transport to the ISS. On July 26, 2005, Space Shuttle Discovery (mission STS-114) marked the shuttle’s return to space, and in 2006 space tourism resumed. On September 18, 2006, an Iranian American named Anousheh Ansari became the fourth space tourist (Soyuz TMA-9).[18] On April 7, 2007, Charles Simonyi, an American businessman of Hungarian descent, joined their ranks (Soyuz TMA-10). Simonyi became the first repeat space tourist, paying again to fly on Soyuz TMA-14 in March-April 2009. Canadian Guy Laliberté became the next space tourist in September 2009 aboard Soyuz TMA-16.

As reported by Reuters on March 3, 2010, Russia announced that the country would double the number of launches of three-man Soyuz ships to four that year, because “permanent crews of professional astronauts aboard the expanded [ISS] station are set to rise to six”; regarding space tourism, the head of the Russian Cosmonauts’ Training Center said “for some time there will be a break in these journeys”.[1]

On January 12, 2011, Space Adventures and the Russian Federal Space Agency announced that orbital space tourism would resume in 2013 with the increase of manned Soyuz launches to the ISS from four to five per year.[19] However, this has not materialized, and the current preferred option, instead of producing an additional Soyuz, would be to extend the duration of an ISS Expedition to one year, paving the way for the flight of new spaceflight participants. The British singer Sarah Brightman initiated plans (costing a reported $52 million) and participated in preliminary training in early 2015, expecting to then fly (and to perform while in orbit) in September 2015, but in May 2015 she postponed the plans indefinitely.[3][20][21]

Several plans have been proposed for using a space station as a hotel.

No suborbital space tourism has occurred yet, but since it is projected to be more affordable, many companies view it as a money-making proposition. Most are proposing vehicles that make suborbital flights peaking at an altitude of 100-160 km (62-99 mi).[38] Passengers would experience three to six minutes of weightlessness, a view of a twinkle-free starfield, and a vista of the curved Earth below. Costs are projected to be about $200,000 per passenger.[39]

Under the Outer Space Treaty signed in 1967, the launch operator’s nationality and the launch site’s location determine which country is responsible for any damage arising from a launch.[53]

After valuable resources were detected on the Moon, private companies began to formulate methods to extract them. Article II of the Outer Space Treaty dictates that “outer space, including the Moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means”.[54] However, countries have the right to freely explore the Moon, and any resources a country collects remain its property upon return.

In December 2005, the U.S. Government released a set of proposed rules for space tourism.[55] These included screening procedures and training for emergency situations, but not health requirements.

Under current US law, any company proposing to launch paying passengers from American soil on a suborbital rocket must receive a license from the Federal Aviation Administration’s Office of Commercial Space Transportation (FAA/AST). The licensing process focuses on public safety and safety of property, and the details can be found in the Code of Federal Regulations, Title 14, Chapter III.[56] This is in accordance with the Commercial Space Launch Amendments Act passed by Congress in 2004.[57]

In March 2010, the New Mexico legislature passed the Spaceflight Informed Consent Act. The SICA gives legal protection to companies that provide private space flights in the case of accidental harm or death to individuals. Participants sign an informed consent waiver, dictating that spaceflight operators cannot be held liable for the “death of a participant resulting from the inherent risks of space flight activities”. Operators are, however, not covered in the case of gross negligence or willful misconduct.[58]

A 2010 study published in Geophysical Research Letters raised concerns that the growing commercial spaceflight industry could accelerate global warming. The study, funded by NASA and The Aerospace Corporation, simulated the impact of 1,000 suborbital launches of hybrid rockets from a single location, calculating that this would release a total of 600 tonnes of black carbon into the stratosphere. They found that the resultant layer of soot particles remained relatively localised, with only 20% of the carbon straying into the southern hemisphere, thus creating a strong hemispherical asymmetry.[59] This imbalance would cause the temperature to decrease by about 0.4 °C (0.72 °F) in the tropics and subtropics, whereas the temperature at the poles would increase by between 0.2 and 1 °C (0.36 and 1.8 °F). The ozone layer would also be affected, with the tropics losing up to 1.7% of ozone cover, and the polar regions gaining 5-6%.[60] The researchers stressed that these results should not be taken as “a precise forecast of the climate response to a specific launch rate of a specific rocket type”, but as a demonstration of the sensitivity of the atmosphere to the large-scale disruption that commercial space tourism could bring.[59]

Several organizations have been formed to promote the space tourism industry, including the Space Tourism Society, Space Future, and HobbySpace. UniGalactic Space Travel Magazine is a bi-monthly educational publication covering space tourism and space exploration developments in companies like SpaceX, Orbital Sciences, Virgin Galactic and organizations like NASA.

Classes in space tourism are currently taught at the Rochester Institute of Technology in New York,[61] and Keio University in Japan.[62]

A web-based survey suggested that over 70% of those surveyed wanted two weeks or less in space; in addition, 88% wanted to spacewalk (only 14% of these would do it for a 50% premium), and 21% wanted a hotel or space station.[63]

The concept has met with criticism from some, including politicians, notably Günter Verheugen, vice-president of the European Commission, who said of the EADS Astrium Space Tourism Project: “It’s only for the super rich, which is against my social convictions”.[64]

In October 2013, NBC News and Virgin Galactic came together to create a new reality television show titled Space Race. The show “will follow contestants as they compete to win a flight into space aboard Virgin Galactic’s SpaceShipTwo rocket plane”. It is not to be confused with the children’s space TV show “Space Racers”.[65]

Many private space travelers have objected to the term “space tourist”, often pointing out that their role went beyond that of an observer, since they also carried out scientific experiments in the course of their journey. Richard Garriott additionally emphasized that his training was identical to the requirements of non-Russian Soyuz crew members, and that teachers and other non-professional astronauts chosen to fly with NASA are called astronauts. He has said that if the distinction has to be made, he would rather be called “private astronaut” than “tourist”.[66] Dennis Tito has asked to be known as an “independent researcher”,[citation needed] and Mark Shuttleworth described himself as a “pioneer of commercial space travel”.[67] Gregory Olsen prefers “private researcher”,[68] and Anousheh Ansari prefers the term “private space explorer”.[18] Other space enthusiasts object to the term on similar grounds. Rick Tumlinson of the Space Frontier Foundation, for example, has said: “I hate the word tourist, and I always will … ‘Tourist’ is somebody in a flowered shirt with three cameras around his neck.”[69] Russian cosmonaut Maksim Surayev told the press in 2009 not to describe Guy Laliberté as a tourist: “It’s become fashionable to speak of space tourists. He is not a tourist but a participant in the mission.”[70]

“Spaceflight participant” is the official term used by NASA and the Russian Federal Space Agency to distinguish between private space travelers and career astronauts. Tito, Shuttleworth, Olsen, Ansari, and Simonyi were designated as such during their respective space flights. NASA also lists Christa McAuliffe as a spaceflight participant (although she did not pay a fee), apparently due to her non-technical duties aboard the STS-51-L flight.

The U.S. Federal Aviation Administration awards the title of “Commercial Astronaut” to trained crew members of privately funded spacecraft. The only people currently holding this title are Mike Melvill and Brian Binnie, the pilots of SpaceShipOne.

A 2010 report from the Federal Aviation Administration, titled “The Economic Impact of Commercial Space Transportation on the U.S. Economy in 2009”, cites studies done by Futron, an aerospace and technology-consulting firm, which predict that space tourism could become a billion-dollar market within 20 years.[71] In addition, in the decade since Dennis Tito journeyed to the International Space Station, eight private citizens have paid the $20 million fee to travel to space. Space Adventures suggests that this number could increase fifteen-fold by 2020.[72] These figures do not include other private space agencies such as Virgin Galactic, which as of 2014 has sold approximately 700 tickets priced at $200,000 or $250,000 each and has accepted more than $80 million in deposits.[73]

Read more:

Space tourism – Wikipedia, the free encyclopedia

The future of neo-eugenics. Now that many people approve …

Posted: July 1, 2016 at 9:49 pm

Every year, 4.1 million babies are born in the USA. On the basis of the well-known risk of Down syndrome, about 6,150 of these babies would be expected to suffer from this genetic condition, which is caused by an extra copy of chromosome 21. In reality, only about 4,370 babies are born with Down syndrome; the others have been aborted during pregnancy. These estimates are based on a prevalence rate of 0.15% and an abortion rate of about 29% of fetuses diagnosed with Down syndrome in Atlanta, GA (Siffel et al, 2004), and Hawaii (Forrester & Merz, 2002), the only two US locations for which reliable data are available. Rates from other regions are similar or even higher: 32% of Down syndrome fetuses were aborted in Western Australia (Bourke et al, 2005); 75% in South Australia (Cheffins et al, 2000); 80% in Taiwan (Jou et al, 2005); and 85% in Paris, France (Khoshnood et al, 2004). Despite this trend, the total number of babies born with Down syndrome is not declining in most industrialized nations, because both the number of older mothers and the conception rate are increasing.
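
The arithmetic behind these figures is easy to verify; the short Python sketch below simply multiplies the quoted birth count, prevalence, and termination rate, all taken directly from the text.

```python
# Back-of-envelope check of the Down syndrome figures quoted above;
# all inputs come directly from the text.
births = 4_100_000        # annual US births
prevalence = 0.0015       # Down syndrome prevalence rate (0.15%)
abortion_rate = 0.29      # fraction of diagnosed fetuses aborted

expected = births * prevalence         # births expected without terminations
born = expected * (1 - abortion_rate)  # births actually observed

print(f"expected: {expected:.0f}")  # matches the text's ~6,150
print(f"born:     {born:.0f}")      # matches the text's ~4,370
```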

These abortions are eugenic in both intention and effect; that is, their purpose is to eliminate a genetically defective fetus and thus allow for a genetically superior child in a subsequent pregnancy. This is a harsh way of phrasing it; another way is to say that parents just want to have healthy children. Nevertheless, however it is phrased, the conclusion is starkly unavoidable: terminating the pregnancy of a genetically defective fetus is widespread. Moreover, because none of the countries mentioned above coerce parents into aborting deformed fetuses, these abortions, which number many thousands each year, are carried out at the request of the parents, or at least the mothers. This high number of so-called medical abortions shows that many people, in many parts of the world, consider the elimination of a genetically defective fetus to be morally acceptable.

This form of eugenic selection is not confined to Down syndrome, which is characterized by mental retardation, a higher risk of various diseases, and a range of major and minor abnormalities in body structure and function. Fetuses with many disorders detectable by ultrasound in utero are also aborted. Data from the European Surveillance of Congenital Abnormalities shows that between 1995 and 1999 about 40% of infants with any one of 11 main congenital disorders were aborted in Europe (Garne et al, 2005). Similarly, the International Clearinghouse for Birth Defects Monitoring System (ICBDMS; Rome, Italy) provides data for the eight main industrialized (G8) countries. From this data, I calculate that in 2002, 20% of fetuses with apparent birth defects were aborted in G8 countries; that is, between 30,000 and 40,000 fetuses. As a result, many congenital disorders are becoming rare (ICBDMS, 2004) and, as they do, infant mortality rates are also declining. In Western Australia, neonatal mortality rates due to congenital deformities declined from 4.36 to 2.75 per 1,000 births in the period from 1980 to 1998. Half of that decline is thought to be due to the increase in abortions of abnormal fetuses (Bourke et al, 2005).

The widespread acceptance of abortion as a eugenic practice suggests that there might be little resistance to more sophisticated methods of eugenic selection and, in general, this has been the case. Increasingly, prenatal diagnosis of genetic conditions is carried out on the basis of molecular tests for Mendelian disorders. There are few published data on the frequency and consequences of such tests, but a recent survey of genetic testing in Italy showed that about 20,000 fetuses were tested in 2004, mostly for mutations causing cystic fibrosis, Duchenne’s muscular dystrophy and Fragile X mental retardation (Dallapiccola et al, 2006). In Taiwan, screens for thalassaemia mutations have caused the live-birth prevalence of this disease to drop from 5.6 to 1.21 per 100,000 births over eight years (Chern et al, 2006).

However, such tests probably do not markedly decrease the mutational burden of a nation’s newborns. Usually, a fetus is only tested for a specific mutation when its family medical history indicates that there is a clear risk. If, as must often be the case, parents are oblivious to the fact that they are carriers of a genetic disorder, they will have no reason to undergo a prenatal diagnosis, which is both expensive and invasive. Fetuses are also not tested for de novo mutations. However, given that many, perhaps most, parents want healthy children, should all fetuses be screened for many disease-causing mutations?

It is a question that some geneticists are now asking (Van den Veyver & Beaudet, 2006). They point out that comparative genomic hybridization (CGH) microarrays could be used to screen a single embryo or fetus for thousands of mutations. One type of CGH microarray that is close to clinical application is designed to detect changes in gene copy number across the whole genome (Vissers et al, 2005). These arrays, which are based on bacterial artificial chromosome (BAC) clones, can detect aneusomies (deletions and duplications) of about 100 kilobases in size. Such aneusomies are found in almost all individuals with no negative consequences, but a minority, which affect dosage-sensitive genes, cause disease. A recent study in which 100 patients with unexplained mental retardation were screened for aneusomies gives some indication of the importance of aneusomies in genetic disorders (de Vries et al, 2005). Most of the copy number changes found in these patients were also found in healthy parents or controls and thus were probably not responsible for the disease; however, ten patients had unique de novo mutations. Therefore, this study identified a likely, albeit unproven, genetic cause of mental retardation in 10% of patients; a remarkable result for a single screen.

The virtue of a BAC-based microarray is that it can detect novel, as well as known, deletions and duplications; its limitation is that it misses the point mutations that are the cause of many, perhaps most, genetic diseases. Such mutations presumably account for at least some of the retardation in the 90 patients in whom no aneusomies were detected. At present there is no feasible method of screening the genome of a patient for all possible mutations, at least not without sequencing it. However, there is no technical obstacle to constructing an oligo-based microarray able to detect all known disease-causing mutations.

How useful would such a microarray be? More precisely, if a geneticist were able to screen a randomly chosen embryo for all known disease genes, what is the probability that he or she would be able to predict a genetic disease should the embryo come to term and live to adulthood? At the time of writing, the Human Gene Mutation Database (HGMD; http://www.hgmd.cf.ac.uk) identifies 64,251 mutations in 2,362 human genes that impair health. Most of these mutations are individually rare, but collectively they are very common. Indeed, given that there are so many mutations, the probability that an embryo is at risk of a genetic disease caused by at least one of them must be quite high.

An individual’s risk of suffering from a genetic disease depends on the mode of inheritance of the disease, whether autosomal dominant (AD), X-linked recessive (XLR) or autosomal recessive (AR), and on the global frequency of the causal mutation. A survey of 567 disease-causing loci from the Online Mendelian Inheritance in Man database showed that about 59% are AD, 32% are AR, and 9% are XLR (Jimenez-Sanchez et al, 2001). Using these percentages with the 64,251 known disease-causing mutations in HGMD, we can estimate that 37,908 are AD, 20,560 are AR and 5,783 are XLR.

To complete our calculation, we need to know the typical global frequencies of each of these three types of mutation. It is surprisingly difficult to obtain global frequency data for disease alleles; however, Reich & Lander (2001) give the total frequencies of all known disease mutations for 14 monogenic diseases: 4 AD, 3 XLR, and 7 AR. The HGMD then provides us with the total number of disease-causing mutations known for each of these 14 genes, which ranges from 31 for haemochromatosis to 1,262 for cystic fibrosis.

Using these figures, I have calculated average allelic frequencies for each inheritance class. The fact that AR mutations are more common than AD or XLR mutations makes sense, as selection acts less intensively on them. Multiplying these numbers by the number of mutations in each inheritance class calculated above, while taking into account the mode of inheritance and assuming global Hardy-Weinberg equilibrium, I calculate that the probability of predicting an inherited disease in a randomly chosen human embryo is almost 0.4%. Therefore, it should be possible to predict a disease in 1 in 252 embryos.

The probability of predicting a genetic disease in a random embryo if it were screened for all currently known mutations
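
As a sketch only, the calculation described above can be reproduced along the following lines in Python. The per-class mutation counts follow the text (59% AD, 32% AR, 9% XLR of 64,251 HGMD mutations), but the average allele frequencies below are hypothetical stand-ins chosen to land near the essay’s 0.4%, since the original table of fitted frequencies is not reproduced here.

```python
# Sketch of the risk calculation described above. Counts follow the text;
# allele frequencies are HYPOTHETICAL illustrative values, not the
# essay's fitted numbers.
TOTAL = 64_251
counts = {"AD": round(TOTAL * 0.59),   # 37,908 autosomal dominant
          "AR": round(TOTAL * 0.32),   # 20,560 autosomal recessive
          "XLR": round(TOTAL * 0.09)}  # 5,783 X-linked recessive

# Illustrative average allele frequencies per mutation; AR alleles are
# assumed more common, as selection acts less intensively on them.
freq = {"AD": 4e-8, "AR": 1e-4, "XLR": 1e-7}

def risk(mode, q):
    """Probability that one mutation of allele frequency q predicts
    disease in a random embryo, assuming Hardy-Weinberg equilibrium."""
    if mode == "AD":
        return 2 * q * (1 - q) + q * q  # one copy suffices
    if mode == "AR":
        return q * q                    # two copies needed
    # XLR: males (half of embryos) are affected with one copy;
    # females need two copies.
    return 0.5 * q + 0.5 * q * q

# Probability that at least one screened mutation predicts a disease.
p_clear = 1.0
for mode, n in counts.items():
    p_clear *= (1.0 - risk(mode, freq[mode])) ** n
p_disease = 1.0 - p_clear
print(f"{p_disease:.2%}")  # on the order of the essay's 0.4% figure
```

With different assumed frequencies the result shifts accordingly; the point is the method, not the exact number.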

The prediction of a genetic disease in a fetus does not necessarily indicate that it should be aborted. This decision ultimately depends on the strength of the prediction and the nature of the disease, both of which vary greatly among mutations. A female embryo with a single BRCA1 mutation, which is dominant, has a 68% probability of developing breast cancer by the age of 80 (Risch et al, 2001). Conversely, an embryo with two copies of the HFE C282Y mutation, which is recessive, has less than a 1% probability of developing haemochromatosis, a relatively mild blood disease (Beutler et al, 2002). Whether such risks warrant aborting either fetus is a decision to be made by its parents and their clinical advisors, but it should be noted that most of the mutations in the HGMD cause classical Mendelian disorders detected by family linkage studies and so have fairly high penetrance.

The estimate of the rate of disease prediction that I have given here is crude, but it is probably conservative. For convenience, I assumed a Hardy-Weinberg equilibrium, but in isolated populations or populations with a high degree of consanguinity (for instance, much of the Middle East through to Pakistan), the number of disease-causing homozygotes will be higher than my calculations suggest. In addition, the rate of disease prediction will continue to rise as more and more disease-causing mutations are found. In 2005, 7,017 mutations were added to the HGMD, 26% more than in 2004.

One impediment to a universal, total prenatal screen for all known mutations is the invasive nature of the procedure, which requires amniocentesis or chorionic sampling to retrieve cells from the amniotic sac, and the traumatic nature of the treatment, which is therapeutic abortion. Perhaps, then, a total mutation screen will not be used in prenatal diagnosis, but rather in preimplantation genetic diagnosis (PGD). This procedure tests embryos produced by in vitro fertilization (IVF) for chromosomal abnormalities and specific mutations before implantation, by removing a single cell from the embryo at the eight-cell stage. Healthy embryos are then implanted; poor embryos, showing one or several abnormalities, are frozen or discarded. As in prenatal diagnosis, PGD is generally carried out only when a family medical history suggests that the embryo is at risk of a specific disease (Braude et al, 2002). Since its introduction in the mid-1980s, the procedure has spread quickly, although it remains illegal in some countries, such as Germany, which does, however, allow prenatal screens for a range of severe inheritable diseases. Data collected by the European IVF-monitoring Programme for the European Society of Human Reproduction and Embryology (ESHRE; Grimbergen, Belgium) showed that 1,563 PGD screens were recorded in 25 European nations in 2002, compared with 882 in 2001 (Andersen et al, 2006). There do not seem to be any comparable data for the USA, but given the large number of US IVF clinics offering PGD, and the lack of regulation, the number of people across the world who have survived a PGD screen must now number tens of thousands.

[Figure: ultrasound scan during an amniocentesis test. Amniocentesis is a diagnostic procedure performed by inserting a needle through the abdominal wall into the uterus and withdrawing a small amount of fluid from the sac surrounding the fetus.]

How common will PGD become? Is it possible that one day every citizen of an industrialized nation will have survived, as an embryo, a PGD screen? Most commentators who have considered such a scenario, which was portrayed in the movie GATTACA, do not think so (Silver, 2000). Their main argument is that PGD, and the need to use IVF, is too expensive, inconvenient and limited in application to ever become widespread. They have a point: nature has contrived a cheap, easy and enjoyable way to conceive a child; IVF is none of these things.

However, the difficulties might be exaggerated. A course of IVF in the UK costs between £7,000 and £10,000: expensive, but cheaper than a mid-range car, and trivial compared with the costs of raising a child. Conception rates using IVF are generally lower than with the old-fashioned method, but that is because many of the women who undergo IVF are relatively old (CDC, 2003). For women under 35 who have no fertility problems, the success rate per cycle is greater than 50%, which is comparable to natural monthly conception rates. However, perhaps the most important evidence against the idea that IVF, and PGD, will not catch on is the observation that it already has. At present, about 1% of Americans are conceived using IVF, and each year 4% of Danes start their life in a petri dish (Nyboe Andersen & Erb, 2006). It seems possible that if the cost of IVF decreases further and the number of PGD screens expands, an increasing number of parents will choose not to subject their children to the vicissitudes of natural conception and the risk of severe genetic disease.

Ultimately, the argument for a universal, total mutation screen will be based on its economic costs and benefits. It is too soon to draw up a detailed balance sheet, but we can suggest some numbers. Congenital mental retardation afflicts about 51,000 children annually in the USA; the Centers for Disease Control and Prevention estimate that each afflicted child will cost the US economy $1 million over the course of his or her life; that is, a collective cost of $51 billion (CDC, 2004). This does not include the social and emotional cost that parents assume in raising a mentally disabled child, which all but defies quantification.

Will neo-eugenics spread? Probably. At least it is hard to see what will stop it if, as I claim, it becomes possible to detect all known disease-causing mutations before birth or implantation, if the cost of IVF and PGD declines, and if eugenic screens have clear economic benefits. Some readers might find it peculiar that in this discussion of neo-eugenics, I have not considered the ethical or legal implications with which this subject is generally considered to be fraught. Although I do not doubt their importance, I simply have no particular knowledge of them. Peter Medawar put it best 40 years ago: “If the termination of a pregnancy is now in question, scientific evidence might tell us that the chances of a defective birth are 100 percent, 50 percent, 25 percent, or perhaps unascertainable. The evidence is highly relevant to the decision, but the decision itself is not a scientific one, and I see no reason why scientists as such should be specially well-qualified to make it” (Medawar, 1966).

See the rest here:

The future of neo-eugenics. Now that many people approve …

Cloning (Stanford Encyclopedia of Philosophy)

Posted: June 30, 2016 at 3:36 am

Strictly speaking, cloning is the creation of a genetic copy of a sequence of DNA or of the entire genome of an organism. In the latter sense, cloning occurs naturally in the birth of identical twins and other multiples. But cloning can also be done artificially in the laboratory via embryo twinning or splitting: an early embryo is split in vitro so that both parts, when transferred to a uterus, can develop into individual organisms genetically identical to each other. In the cloning debate, however, the term “cloning” typically refers to a technique called somatic cell nuclear transfer (SCNT). SCNT involves transferring the nucleus of a somatic cell into an oocyte from which the nucleus, and thus most of the DNA, has been removed. (The mitochondrial DNA in the cytoplasm is still present.) The manipulated oocyte is then treated with an electric current in order to stimulate cell division, resulting in the formation of an embryo. The embryo is (virtually) genetically identical to, and thus a clone of, the somatic cell donor.

Dolly was the first mammal to be brought into the world using SCNT. Wilmut and his team at the Roslin Institute in Scotland replaced the nucleus from an oocyte taken from a Blackface ewe with the nucleus of a cell from the mammary gland of a six-year-old Finn Dorset sheep (these sheep have a white face). They transferred the resulting embryo into the uterus of a surrogate ewe and approximately five months later Dolly was born. Dolly had a white face: she was genetically identical to the Finn Dorset ewe from which the somatic cell had been obtained.

Dolly, however, was not 100% genetically identical to the donor animal. Genetic material comes from two sources: the nucleus and the mitochondria of a cell. Mitochondria are organelles that serve as power sources to the cell. They contain short segments of DNA. In Dolly’s case, her nuclear DNA was the same as the donor animal’s; the rest of her genetic material came from the mitochondria in the cytoplasm of the enucleated oocyte. For the clone and the donor animal to be exact genetic copies, the oocyte too would have to come from the donor animal (or from the same maternal line, since mitochondria are passed on by oocytes).

Dolly’s birth was a real breakthrough, for it proved that something that had been considered biologically impossible could indeed be done. Before Dolly, scientists thought that cell differentiation was irreversible: they believed that, once a cell has differentiated into a specialized body cell, such as a skin or liver cell, the process cannot be reversed. What Dolly demonstrated was that it is possible to take a differentiated cell, turn back its biological clock, and make the cell behave as though it was a recently fertilized egg.

Nuclear transfer can also be done using a donor cell from an embryo instead of from an organism after birth. Cloning mammals using embryonic cells has been successful since the mid-1980s (for a history of cloning, see Wilmut et al., 2001). Another technique to produce genetically identical offspring or clones is embryo twinning or embryo splitting, in which an early embryo is split in vitro so that both parts, when implanted in the uterus, can develop into individual organisms genetically identical to each other. This process occurs naturally with identical twins.

However, what many people find disturbing is the idea of creating a genetic duplicate of an existing person, or a person who has existed. That is why the potential application of SCNT in humans set off a storm of controversy. Another way to produce a genetic duplicate from an existing person is by cryopreserving one of two genetically identical embryos created in vitro for several years or decades before using it to generate a pregnancy. Lastly, reproductive cloning of humans could, in theory, also be achieved by combining the induced pluripotent stem cell technique with tetraploid complementation. Several research teams have succeeded in cloning mice this way (see, for example, Boland et al., 2009).

Dolly is a case of reproductive cloning, the aim of which is to create offspring. Reproductive cloning is to be distinguished from cloning for therapy and research, sometimes also referred to as therapeutic cloning. Both reproductive cloning and cloning for research and therapy involve SCNT, but their aims, as well as most of the ethical concerns they raise, differ. I will first discuss cloning for research and therapy and will then proceed to outline the ethical debate surrounding reproductive cloning.

Cloning for research and therapy involves the creation of an embryo via SCNT, but instead of transferring the cloned embryo to the uterus in order to generate a pregnancy, it is used to obtain pluripotent stem cells. It is thus not the intention to use the embryo for reproductive purposes. Embryonic stem cells offer powerful tools for developing therapies for currently incurable diseases and conditions, for important biomedical research, and for drug discovery and toxicity testing (Cervera & Stojkovic, 2007). For example, one therapeutic approach is to induce embryonic stem cells to differentiate into cardiomyocytes (heart muscle cells) to repair or replace damaged heart tissue, into insulin-producing cells to treat diabetes, or into neurons and their supporting tissues to repair spinal cord injuries.

A potential problem with embryonic stem cells is that they will normally not be genetically identical to the patient. Embryonic stem cells are typically derived from embryos donated for research after in vitro fertilization (IVF) treatment. Because these stem cells would have a genetic identity different from that of the recipient (the patient), they may, when used in therapy, be rejected by her immune system. Immunorejection can occur when the recipient’s body does not recognize the transplanted cells, tissues or organs as its own and, as a defense mechanism, attempts to destroy the graft. Another type of immunorejection involves a condition called graft-versus-host disease, in which immune cells contaminating the graft recognize the new host (the patient) as foreign and attack the host’s tissues and organs. Both types of immunorejection can result in loss of the graft or death of the patient. Immunorejection is one of the most serious problems faced in transplant surgery.

Cloning for research and therapy could offer a solution to this problem. An embryo produced via SCNT using the patient’s somatic cell as a donor cell would be virtually genetically identical to the patient. Stem cells obtained from that embryo would thus also be genetically identical to the patient, as would be their derivatives, and would be less likely to be rejected after transplantation. Though therapies using embryonic stem cells from SCNT embryos are not yet on the horizon for humans, scientists have provided proof of concept for these therapies in the mouse.

Embryonic stem cells from cloned embryos would also have significant advantages for biomedical research, and for drug discovery and toxicity testing. Embryonic stem cells genetically identical to the patient could provide valuable in vitro models to study disease, especially where animal models are not available, where the research cannot be done in patients themselves because it would be too invasive, or where there are too few patients to work with (as in the case of rare genetic diseases). Researchers could, for example, create large numbers of embryonic stem cells genetically identical to the patient and then experiment on these in order to understand the particular features of the disease in that person. The embryonic stem cells and their derivatives could
also be used to test potential treatments. They could, for example, be used to test candidate drug therapies to predict their likely toxicity. This would avoid dangerous exposure of patients to sometimes highly experimental drugs.

Cloning for research and therapy is, however, still in its infancy. In 2011, a team of scientists from the New York Stem Cell Foundation Laboratory was the first to succeed in creating two embryonic stem cell lines from human embryos produced through SCNT (Noggle et al., 2011). Three years earlier, a small San Diego biotechnology company created human embryos (at the blastocyst stage) via SCNT but did not succeed in deriving embryonic stem cells from these embryos (French et al., 2008). Cloning for research and therapy is thus not likely to come to fruition in the short term. Apart from unsolved technical difficulties, much more basic embryonic stem cell research is needed. The term therapeutic cloning has been criticized precisely for this reason. It suggests that therapy using embryonic stem cells from cloned embryos is already a reality. In the phase before clinical trials, critics say, it is only reasonable to refer to research on nuclear transfer as research cloning or cloning for biomedical research (PCBE, 2002).

Cloning for research and therapy holds great potential for future research and therapeutic applications, but it also raises various concerns.

Much of the debate about the ethics of cloning for research and therapy turns on a basic disagreement about how we should treat early human embryos. As it is currently done, the isolation of embryonic stem cells involves the destruction of embryos at the blastocyst stage (day five after fertilization, when the embryo consists of 125–225 cells). But cloning for research and therapy not only involves the destruction of embryos, it also involves the creation of embryos solely for the purpose of stem cell derivation. Views on whether and when it is permissible to create embryos solely to obtain stem cells differ profoundly.

Some believe that an embryo, from the moment of conception, has the same moral status, that is, the same set of basic moral rights, claims or interests as an ordinary adult human being. This view is sometimes expressed by saying that the early embryo is a person. On this view, creating and killing embryos for stem cells is a serious moral wrong. It is impermissible, even if it could save many lives (Deckers, 2007). Others believe that the early embryo is merely a cluster of cells or human tissue lacking any moral status. A common view among those who take this position is that, given its promising potential, embryonic stem cell and cloning research is a moral imperative (Devolder & Savulescu, 2006). Many defend a view somewhere in between these opposing positions. They believe, for example, that the early embryo should be treated with respect because it has an intermediate moral status: a moral status lower than that of a person but higher than that of an ordinary body cell. A popular view amongst those who hold this position is that using embryos for research might sometimes be justified. Respect can be demonstrated, it is typically argued, by using embryos only for very important research that cannot be done using less controversial means, and by acknowledging the use of embryos for research with a sense of regret or loss (Robertson, 1995; Steinbock, 2001). One common view among those who hold the intermediate moral status view is that the use of discarded IVF embryos to obtain stem cells is compatible with the respect we owe to the embryo, whereas the creation and use of cloned embryos is not. An argument underlying this view is that, unlike IVF embryos, cloned embryos are created for instrumental use only; they are created and treated as a mere means, which some regard as incompatible with respectful treatment of the embryo (NBAC, 1999).
Others (both proponents and opponents of embryo research) have denied that there is a significant moral difference between using discarded IVF embryos and cloned embryos as a source of stem cells. They have argued that if killing embryos for research is wrong, it is wrong regardless of the embryo’s origin (Doerflinger, 1999; Fitzpatrick, 2003; Devolder, 2005). Douglas and Savulescu (2009) have argued that it is permissible to destroy unwanted embryos in research, that is, embryos that no one wishes to use for reproductive purposes. Since both discarded IVF embryos and cloned embryos created for the purpose of stem cell derivation are unwanted embryos in that sense, it is, on their view, permissible to use both types of embryos for research.

A less common view holds that obtaining stem cells from cloned embryos poses fewer ethical problems than obtaining stem cells from discarded IVF embryos. Hansen (2002) has advanced this view, arguing that embryos resulting from SCNT do not have the same moral status we normally accord to other embryos: he calls the combination of a somatic nucleus and an enucleated egg a transnuclear egg, which, he says, is a mere artifact with no natural purpose or potential to evolve into an embryo and eventually a human being, and therefore falls outside the category of human beings. McHugh (2004) and Kiessling (2001) advance a similar argument. On their view, obtaining stem cells from cloned embryos is less morally problematic because embryos resulting from SCNT are better thought of as tissue culture, whereas IVF represents instrumental support for human reproduction. Since creating offspring is not the goal, they argue, it is misleading to use the term embryo or zygote to refer to the product of SCNT. They suggest instead using the terms clonote (McHugh) and ovasome (Kiessling).

Cloning for research and therapy requires a large number of donor oocytes. Ethical issues arise regarding how these oocytes could be obtained. Oocyte donation involves various risks and discomforts (for a review of the risks, see Committee on Assessing the Medical Risks of Human Oocyte Donation for Stem Cell Research, 2007). Among the most pressing ethical issues raised by such donation is what model of informed consent should be applied. Unlike women who are considering IVF, non-medical oocyte donors are not clinical patients. They do not stand to derive any reproductive or medical benefit themselves from the donation (though Kalfoglou & Gittelsohn, 2000, argue that they may derive a psychological benefit). Magnus and Cho (2005) have argued that donating women should not be classified as research subjects since, unlike in other research, the risks to the donor do not lie in the research itself but in the procurement of the materials required for the research. They suggest that a new category named research donors be created for those who expose themselves to substantial risk only for the benefit of others (in this case unidentifiable people in the future) and where the risk is incurred not in the actual research but in the procurement of the materials for the research. Informed consent for altruistic organ donation by living donors to strangers has also been suggested as a model, since, in both cases, the benefits will be for strangers and not for the donor. Critics of this latter suggestion have pointed out, however, that there is a disanalogy between these two types of donation. The general ethical rule reflected in regulations concerning altruistic donation, namely that there must be a high chance of a good outcome for the patient, is violated in the case of oocyte donation for cloning research (George, 2007).

Given the risks to the donor, the absence of direct medical benefit for the donor, and the uncertain potential of cloning research, it is not surprising that the number of altruistic oocyte donations for such research is very low. Financial incentives might be needed to increase the supply of oocytes for cloning research. In some countries, including the US, selling and buying oocytes is legal. Some object to these practices because they consider oocytes integral to the body and think they should be kept out of the market: on their view, the value of the human body and its parts should not be expressed in terms of money or other fungible goods. Some also worry that, through commercialization of oocytes, women themselves may become objects of instrumental use (Alpers & Lo, 1995). Many agree, however, that a concern for commodification does not justify a complete ban on payment of oocyte donors and that justice requires that they be financially compensated for the inconvenience, burden, and medical risk they endure, as is standard for other research subjects (Steinbock, 2004; Mertes & Pennings, 2007). A related concern is the effect of financial or other offers of compensation on the voluntariness of oocyte donation. Women, especially economically disadvantaged women from developing countries, might be unduly induced or even coerced into selling their oocytes (Dickinson, 2002). Baylis and McLeod (2007) have highlighted how difficult it is to avoid both undue inducement and exploitation at the same time: a price that is too low risks exploitation; a price that avoids exploitation risks undue inducement.

Concerns about exploitation are not limited to concerns about payment, as became clear in the Hwang scandal (for a review, see Saunders & Savulescu, 2008). In 2004, Woo-Suk Hwang, a leading Korean stem cell scientist, claimed to be the first to clone human embryos using SCNT and to extract stem cells from these embryos. In addition to finding that Hwang had fabricated many of his research results, Korea’s National Bioethics Committee also found that Hwang had pressured junior members of his lab to donate oocytes for his cloning experiments.

Some authors have argued that a regulated market in oocytes could minimize the ethical concerns raised by the commercialization of oocytes and could be consistent with respect for women (Resnik, 2001; Gruen, 2007). Researchers are also investigating alternative sources of oocytes, including animal oocytes, fetal oocytes, oocytes from adult ovaries obtained post mortem or during surgery, and stem cell-derived oocytes. Finally, another option is egg-sharing, where couples who are undergoing IVF for reproductive purposes have the option to donate one or two of their oocytes in return for a reduced fee for their fertility treatment. The advantage of this system is that it avoids exposing women to extra risks, since these women were undergoing IVF in any case (Roberts & Throsby, 2008).

Personalized cloning therapies are likely to be labor intensive and expensive. This has raised social justice concerns. Perhaps cloning therapies will only be a realistic option for the very rich? Cloning therapies may, however, become cheaper, less labor intensive and more widely accessible over time. Moreover, cloning may cure diseases and not only treat symptoms. Regardless of the economic cost, it remains true of course that the cloning procedure is time consuming, rendering it inappropriate for certain clinical applications where urgent intervention is required (e.g., myocardial infarction, acute liver failure or traumatic or infectious spinal cord damage). If cloning for therapy became available, its application would thus likely be restricted to chronic conditions. Wilmut (1997), who cloned Dolly, has suggested that cloning treatments could be targeted to maximize benefit: an older person with heart disease could be treated with stem cells that are not a genetic match, take drugs to suppress her immune system for the rest of her life, and live with the side-effects; a younger person might benefit from stem cells from cloned embryos that match exactly. Devolder and Savulescu (2006) have argued that objections about economic cost are more forceful against cloning for self-transplantation than, for example, against cloning for developing cellular models of human disease. The latter will enable research into human diseases and may result in affordable therapies and cures for a variety of common diseases, such as cancer and heart disease, which afflict people all over the world. Finally, some have pointed out that it is not clear whether cloning research is necessarily more labor intensive than the experiments on cells and tissues now done in animals.

Some are skeptical about the claimed benefits of cloning for research and therapy. They stress that for many diseases for which cloned embryonic stem cells might offer a therapy, there are alternative treatments and/or preventive measures in development, including gene therapy, pharmacogenomic solutions and treatments based on nanotechnology. It is often claimed that other types of stem cells, such as adult stem cells and stem cells from umbilical cord blood, might enable us to achieve the same aims as cloning. Induced pluripotent stem cells (iPSCs), especially, have raised the hope that cloning research is superfluous (Rao & Condic, 2008). iPSCs are created through genetic manipulation of a body cell. iPSCs are similar to embryonic stem cells, and in particular to embryonic stem cells from cloned embryos. However, iPSC research could provide tissue- and patient-specific cells without the need for human oocytes or the creation and destruction of embryos. iPSC research could thus avoid the ethical issues raised by cloning. This promise notwithstanding, scientists have warned that it would be premature to stop cloning research, as iPSCs are not identical to embryonic stem cells. Cloning research may teach us things that iPSC research cannot. Moreover, iPSC research has been said to fail to completely avoid the issue of embryo destruction (Brown, 2009).

Slippery slope arguments express the worry that permitting a certain practice may place us on a slippery slope to a dangerous or otherwise unacceptable outcome. Several commentators have argued that accepting or allowing cloning research is the first step that would place us on a slippery slope to reproductive cloning. As Leon Kass (1998, 702) has put it: “once the genies put the cloned embryos into the bottles, who can strictly control where they go?”

Others are more skeptical about slippery slope arguments against cloning and think that effective legislation can prevent us from sliding down the slope (Savulescu, 1999; Devolder & Savulescu 2006). If reproductive cloning is unacceptable, these critics say, it is reasonable to prohibit this specific technology rather than to ban non-reproductive applications of cloning. The UK and Belgium, for example, allow cloning research but prohibit the transfer of cloned embryos to the uterus.

Apart from the question of how slippery the slope might be, another question raised by such arguments concerns the feared development itself, reproductive cloning, and whether it is really ethically objectionable. Profound disagreement exists about the answer to this question.

The central argument in favor of reproductive cloning is expansion of opportunities for reproduction. Reproductive cloning could offer a new means for prospective parents to satisfy their reproductive goals or desires. Infertile individuals or couples could have a child that is genetically related to them. In addition, individuals, same-sex couples, or couples who cannot together produce an embryo would no longer need donor gametes to reproduce if cloning were available (some might still need donor eggs for the cloning procedure, but these would be enucleated so that only the mitochondrial DNA remains). It would then be possible to avoid having one’s child share half of her nuclear DNA with a gamete donor.

Using cloning to help infertile people to have a genetically related child, or a child that is genetically related only to them, has been defended on the grounds of human wellbeing, personal autonomy, and the satisfaction of the natural inclination to produce offspring (Häyry, 2003; Strong, 2008). Offering individuals or couples the possibility to reproduce using cloning technology has been said to be consistent with the right to reproductive freedom, which, according to some, implies the right to choose what kind of children we will have (Brock, 1998, 145).

According to some, the main benefit of reproductive cloning is that it would enable prospective parents to control what genome their children will be endowed with (Fletcher, 1988; Harris, 1997, 2004; Pence, 1998, 101–6; Tooley, 1998). Cloning would enable parents to have a child with a genome identical to that of a person with good health and/or other desirable characteristics.

Another possible use of reproductive cloning is to create a child that is a tissue match for a sick sibling. The stem cells from the umbilical cord blood or from the bone marrow of the cloned child could be used to treat the diseased sibling. Such saviour siblings have already been created through sexual reproduction or, more efficiently, through a combination of IVF, preimplantation genetic diagnosis and HLA testing.

Many people, however, have expressed concerns about human reproductive cloning. For some these concerns are sufficient to reject human cloning. For others, these concerns should be weighed against reasons for reproductive cloning.

What follows is an outline of some of the main areas of concern and disagreement about human reproductive cloning.

Despite the successful creation of viable offspring via SCNT in various mammalian species, researchers still have limited understanding of how the technique works on the subcellular and molecular level. Although the overall efficiency and safety of reproductive cloning in mammals has significantly increased over the past fifteen years, it is not yet a safe process (Whitworth & Prather, 2010). For example, the rate of abortions, stillbirths and developmental abnormalities remains high. Another source of concern is the risk of premature ageing because of shortened telomeres. Telomeres are repetitive DNA sequences at the tip of chromosomes that get shorter as an animal gets older. When the telomeres of a cell get so short that they disappear, the cell dies. The concern is that cloned animals may inherit the shortened telomeres from their older progenitor, with possibly premature aging and a shortened lifespan as a result.

For many, the fact that reproductive cloning is unsafe provides a sufficient reason not to pursue it. It has been argued that it would simply be wrong to impose such significant health risks on humans. The strongest version of this argument states that it would be wrong now to produce a child using SCNT because it would constitute a case of wrongful procreation. Some adopt a consent-based objection and condemn cloning because the person conceived cannot consent to being exposed to significant risks involved in the procedure (Kass, 1998; PCBE, 2002). Against this, it has been argued that even if reproductive cloning is unsafe, it may still be permissible if there are no safer means to bring that very same child into existence so long as the child is expected to have a life worth living (Strong, 2005).

Given the current rate of advancement in cloning, one cannot exclude a future in which the safety and efficiency of SCNT will be comparable or even superior to that of IVF or sexual reproduction. A remaining question is, then, whether those who condemn cloning because of its experimental nature should continue to condemn it morally and legally. Some authors have reasoned that if, in the future, cloning becomes safer than sexual reproduction, we should even make it our reproductive method of choice (Fletcher, 1988; Harris, 2004, Ch. 4).

Some fear that cloning threatens the identity and individuality of the clone, thus reducing her autonomy (Ramsey, 1966; Kitcher, 1997; Annas, 1998; Kass, 1998). This may be bad in itself, or bad because it might reduce the clone’s wellbeing. It may also be bad because it will severely restrict the array of life plans open to the clone, thus violating her right to an open future (a concept developed by Feinberg, 1980). In its report Human Cloning and Human Dignity: An Ethical Inquiry, the US President’s Council on Bioethics (2002) wrote that being genetically unique is “an emblem of independence and individuality” and allows us to go forward “with a relatively indeterminate future in front of us” (Ch. 5, Section c). Such concerns have formed the basis of strong opposition to cloning.

The concern that cloning threatens the clone’s identity and individuality has been criticized for relying on the mistaken belief that who and what we become is entirely determined by our genes. Such genetic determinism is clearly false. Though genes influence our personal development, so does the complex and irreproducible context in which our lives take place. We know this, among other things, from studying monozygotic twins. Notwithstanding the fact that such twins are genetically identical to each other and, therefore, sometimes look very similar and often share many character traits, habits and preferences, they are different individuals, with different identities (Segal, 2000). Thus, it is argued, having a genetic duplicate does not threaten one’s individuality, or one’s distinct identity.

Brock (2002) has pointed out that one could nevertheless argue that even though individuals created through cloning would be unique individuals with a distinct identity, they might not experience it that way. What is threatened by cloning then is not the individual’s identity or individuality, but her sense of identity and individuality, and this may reduce her autonomy. So even if a clone has a unique identity, she may experience more difficulties in establishing her identity than if she had not been a clone.

But here too critics have relied on the comparison with monozygotic twins. Harris (1997, 2004) and Tooley (1998), for example, have pointed out that each twin not only has a distinct identity, but generally also views him or herself as having a distinct identity, as do their relatives and friends. Moreover, so they argue, an individual created through cloning would likely be of a different age than her progenitor. There may even be several generations between them. A clone would thus in essence be a delayed twin. Presumably this would make it even easier for the clone to view herself as distinct from the progenitor than if she had been genetically identical to someone her same age.

However, the reference to twins as a model for thinking about reproductive cloning has been criticized, for example, because it fails to reflect important aspects of the parent-child relationship that would arise if the child were a clone of one of the rearing parents (Jonas, 1974; Levick, 2004). Because of the dominance of the progenitor, the risk of reduced autonomy and confused identity may be greater in such a situation than in the case of ordinary twins. Moreover, precisely because the clone would be a delayed twin, she may have the feeling that her life has already been lived or that she is predetermined to do the same things as her progenitor (Levy & Lotz, 2005). This problem may be exacerbated by others constantly comparing her life with that of the progenitor, and having problematic expectations based on these comparisons. The clone may feel under constant pressure to live up to these expectations (Kass, 1998; Levick, 2004, 101; Sandel, 2007, 57–62), or may have the feeling that she leads a life in the shadow of the progenitor (Holm, 1998; PCBE, 2002, Ch. 5). This may especially be the case if the clone was created as a replacement for a deceased child. (Some private companies already offer to clone dead pets to create replacement pets.) The fear is that the ghost of the dead child will get more attention and devotion than the replacement child. Parents may expect the clone to be like the lost child, or some idealized image of it, which could hamper the development of her identity and adversely affect her self-esteem (Levick, 2004, 111–132). Finally, the clone’s autonomy may also be reduced because she would be involuntarily informed about her genetic predispositions. A clone who knows that her genetic parent developed a severe single-gene disease at the age of forty will realise it is very likely that she will suffer the same fate. Unlike individuals who choose to have themselves genetically tested, clones who know their genetic parent’s medical history will be involuntarily informed.

These concerns have been challenged on several grounds. Some believe that it is plausible that, through adequate information, we could largely correct mistaken beliefs about the link between genetic and personal identity, and thus reduce the risk of problematic expectations toward the clone (Harris, 1997, 2004; Tooley, 1998, 84–5; Brock, 1998; Pence, 1998). Brock (1998) and Buchanan et al. (2000, 198) have argued that even if people persist in these mistaken beliefs and their attitudes or actions lead cloned individuals to believe they do not have an open future, this does not imply that the clone’s right to ignorance about her personal future, or her right to an open future, has actually been violated. Pence (1998, 138) has argued that having high expectations, even if based on false beliefs, is not necessarily a bad thing. Parents with high expectations often give their children the best chances to lead a happy and successful life. Brock (2002, 316) has argued that parents now also constantly restrict the array of life plans open to their children, for example, by selecting their school or by raising them according to certain values. Though this may somewhat restrict the child’s autonomy, there will always be enough decisions left for the child to make for her to be autonomous, and to realize this. According to Brock, it is not clear why this should be different in the case of cloning. He also points out that there may be advantages to being a delayed twin (154). For example, one may acquire knowledge about the progenitor’s medical history and use this knowledge to live longer, or to increase one’s autonomy. One could, for example, use the information to reduce the risk of getting the disease or condition, or at least to postpone its onset, through behavioral changes, an appropriate diet and/or preventive medication. This would not be possible, however, if the disease is untreatable (for example, Huntington’s Disease).
Harris (2004, Ch.1) has stressed that information about one’s genetic predispositions for certain diseases would also allow one to take better informed reproductive decisions. Cloning would allow us to give our child a tried and tested genome, not one created by the genetic lottery of sexual reproduction and the random combination of chromosomes.

Cloning arouses people’s imagination about the clone, but also about those who will choose to have a child through cloning. Often dubious motives are ascribed to them: they would want a child that is “just like so-and-so”, causing people to view them as objects or as commodities, like a new car or a new house (Putnam, 1997, 78). They would want an attractive child (a clone of Scarlett Johansson) or a child with tennis talent (a clone of Victoria Azarenka) purely to show off. Dictators would want armies of clones to achieve their political goals. People would clone themselves out of vanity. Parents would clone their existing child so that the clone could serve as an organ bank for that child, or would clone their deceased child to have a replacement child. The conclusion is then that cloning is wrong because the clone will be used as a mere means to others’ ends. These critiques have also been expressed with regard to other forms of assisted reproduction; but some worry that individuals created through cloning may be more likely to be viewed as commodities because their total genetic blueprint would be chosen: they would be fully “made” and not “begotten” (Ramsey, 1966; Kass, 1998; PCBE, 2002, 107).

Strong (2008) has argued that these concerns are based on a fallacious inference. It is one thing to desire genetically related children, and something else to believe that one owns one’s children or considers one’s children as objects, he writes. Other commentators, however, have pointed out that even if parents themselves do not commodify their children, cloning might still have an impact on society as a whole, affecting people’s tendencies to do so (Levy & Lotz, 2005; Sandel, 2007). A related concern expressed by Levick (2004, 184–5) is that allowing cloning might result in a society where “production on demand” clones are sold for adoption to people who are seeking to have children with special abilities: a clearer case of treating children as objects.

But suppose some people create a clone for instrumental reasons, for example, as a stem cell donor for a sick sibling. Does this imply that the clone will be treated merely as a means? Critics of this argument have pointed out that parents have children for all kinds of instrumental reasons, including the benefit to the husband-wife relationship, continuity of the family name, and the economic and psychological benefits children provide when their parents become old (Harris, 2004, 41–2; Pence, 1998). This is generally not considered problematic as long as the child is also valued in its own right. What is most important in a parent-child relationship is the love and care inherent in that relationship. These critics stress that we judge people on their attitudes toward children, rather than on their motives for having them. They also deny that there is a strong link between one’s intention or motive to have a child and the way one will treat the child.

Another concern is that clones may be the victims of unjustified discrimination and will not be respected as persons (Deech, 1999; Levick, 2004, 185–187). Savulescu (2005, Other Internet Resources) has referred to such negative attitudes towards clones as ‘clonism’: a new form of discrimination against a group of humans who are different in a non-morally significant way. But does a fear of clonism constitute a good reason for rejecting cloning? Savulescu and others have argued that, if it does, then we must also conclude that racist attitudes and discriminatory behavior towards people of a certain ethnicity provide a good reason for people of that ethnicity not to procreate. This, according to these critics, is a morally objectionable way to solve the problem of racism: instead of limiting people’s procreative liberty we should combat existing prejudices and discrimination. Likewise, it is argued, instead of prohibiting cloning out of concern for clonism, we should combat possible prejudices and discrimination against clones (see also Pence, 1998, 46; Harris, 2004, 92–93). Macintosh (2005, 119–21) has warned that by expressing certain concerns about cloning one may actually reinforce certain prejudices and misguided stereotypes about clones. For example, saying that a clone would not have a personal identity prejudges the clone as inferior or fraudulent (the idea that originals are more valuable than their copies) or even as less than human (as individuality is seen as an essential characteristic of human nature).

Another concern is that cloning threatens traditional family structures, a fear that has also come up in debates about homosexuals adopting children, IVF, and other assisted reproduction techniques. But with cloning the situation would be more complex, as it may blur generational boundaries (McGee, 2000), and the clone would likely be confused about her kinship ties (Kass, 1998; O’Neill, 2002, 67–68). For example, a woman who has a child conceived through cloning would actually be the twin of her child, and the woman’s mother would, genetically, be its mother, not its grandmother. Some have argued against these concerns, replying that a cloned child would not necessarily be more confused about her family ties than other children. Many children have four nurturing parents because of a divorce, never knew their genetic parents, have nurturing parents that are not their genetic parents, or think that their nurturing father is also their genetic father when in fact he is not. While these complex family relationships can be troubling for some children, they are not insurmountable, critics say. Harris (2004, 77–78) argues that there are many aspects of the situation one is born and raised in that may be troublesome. As with all children, the most important thing is the relationship with the people who nurture and educate them, and children usually know very well who these people are. There is no reason to believe that, with cloning, this will be any different. Onora O’Neill (2002, 67–8) argues that such responses are misplaced. While she acknowledges that there are already children now with confused family relationships, she argues that it is very different when prospective parents seek such potentially confused relationships for their children from the start.

Other concerns related to cloning focus on its potential harmful effects on others. Sometimes these concerns are related to those about the wellbeing of the clone. For example, McGee’s concern about confused family relationships bears not only on the clone but also on society as a whole. However, since I have already mentioned this concern, I will, in the remainder of this entry, focus on other arguments.

The strongest reason why reproductive cloning should be permissible, if safe, is that it would allow infertile people to have a genetically related child. This position relies on the view that having genetically related children is morally significant and valuable, which is a controversial view. For example, Levy and Lotz (2005) have denied the importance of a genetic link between parents and their children. Moreover, they have argued that claiming this link is important will give rise to bad consequences, such as reduced adoption rates and diminished resources for improving the life prospects of the disadvantaged, including those waiting to be adopted. Levick (2004, 185) and Ahlberg and Brighouse (2011) have also advanced this view. Since, according to these authors, these undesirable consequences would be magnified if we allowed human cloning, we have good reason to prohibit it. In response, Strong (2008) has argued that this effect is uncertain and that there are other, probably more effective, ways to help such children or to prevent them from ending up in such a situation. Moreover, if cloning is banned, infertile couples may opt for embryo or gamete donation rather than adoption.

Another concern is that because cloning is an asexual way of reproducing it would decrease genetic variation among offspring and, in the long run, might even constitute a threat to the human race. The gene pool may narrow sufficiently to threaten humanity’s resistance to disease (AMA, 1999, 6). In response, it has been argued that if cloning becomes possible, the number of people who will choose it as their mode of reproduction will very likely be too low to constitute a threat to genetic diversity. It would be unlikely to be higher than the rate of natural twinning, which, occurring at a rate of 3.5/1000 children, does not seriously impact on genetic diversity. Further, even if millions of people would create children through cloning, the same genomes will not be cloned over and over: each person would have a genetic copy of his or her genome, which means the result will still be a high diversity of genomes. Others argue that, even if genetic diversity were not diminished by cloning, a society that supports reproductive cloning might be taken to express the view that variety is not important. Conveying such a message, these authors say, could have harmful consequences for a multicultural society.

Some see the increase in control over what kind of genome we want to pass on to our children as a positive development. A major concern, however, is that this shift from chance to choice will lead to problematic eugenic practices.

One version of this concern states that cloning would, from the outset, constitute a problematic form of eugenics. However, critics have argued that this is implausible: the best explanations of what was wrong with immoral cases of eugenics, such as the Nazi eugenic programs, are that they involved coercion and were motivated by objectionable moral beliefs or false non-moral beliefs. This would not necessarily be the case were cloning to be implemented now (Agar, 2004; Buchanan, 2007). Unlike the coercive and state-directed eugenics of the past, new liberal eugenics defends values such as autonomy, reproductive freedom, beneficence, empathy and the avoidance of harm. Enthusiasts of so-called liberal eugenics are interested in helping individuals to prevent or diminish the suffering and increase the well-being of their children by endowing them with certain genes.

Another version of the eugenics concern points to the risk of a slippery slope: the claim is that cloning will lead to objectionable forms of eugenics, for example coercive eugenics, in the future. After all, historical cases of immoral eugenics often developed from earlier well-intentioned and less problematic practices (for a history of eugenics as well as an analysis of philosophical and political issues raised by eugenics, see Kevles, 1985 and Paul, 1995). According to Sandel (2007, Ch. 5), for example, liberal eugenics might imply more state compulsion than first appears: just as governments can force children to go to school, they could require people to use genetics to have better children.

A related concern, expressed by Sandel (2007, 52–7), is that cloning, and enhancement technologies in general, may result in a society in which parents will not accept their child for what it is, reinforcing an already existing trend of heavily managed, high-pressure child-rearing, or ‘hyper-parenting’. Asch and Wasserman (2005, 202) have expressed a similar concern, arguing that having more control over what features a child has can pose an affront to an ideal of unconditioned devotion. Another concern, most often expressed by disability rights advocates, is that if cloning is used to have ‘better’ children, it may create a more intolerant climate towards the disabled and the diseased, and that such practices can express negative judgments about people with disabilities. This argument has also been advanced in the debate about selective abortion, prenatal testing, and preimplantation genetic diagnosis. Disagreement exists about whether these effects are likely. For example, Buchanan et al. (2002, 278) have argued that one can devalue disability while valuing existing disabled people, and that trying to help parents who want to avoid having a disabled child does not imply that society should make no efforts to increase accessibility for existing people with disabilities.

UNESCO’s Universal Declaration on the Human Genome and Human Rights (1997) was the first international instrument to condemn human reproductive cloning as a practice against human dignity. Article 11 of this Declaration states: “Practices which are contrary to human dignity, such as reproductive cloning of human beings, shall not be permitted.” This position is shared by the World Health Organization, the European Parliament, and several other international bodies. Critics have pointed out that the reference to human dignity is problematic, as it is rarely specified how human dignity is to be understood, whose dignity is at stake, and how dignity is relevant to the ethics of cloning (Harris, 2004, Ch. 2; Birnbacher, 2005; McDougall, 2008). Some commentators state that it is the copying of a genome that violates human dignity (Kass, 1998); others have pointed out that this interpretation could be experienced as an offence by genetically identical twins, and that we typically do not regard twins as a threat to human dignity (although some societies in the past did), nor do we prevent twins from coming into existence. On the contrary, IVF, which involves an increased risk of twins, is a widely accepted fertility treatment.

Human dignity is most often related to Kant’s second formulation of the Categorical Imperative, namely the idea that we should never use a person merely as a means to an end. I have, however, already discussed this concern in section 4.2.2.

No unified religious perspective on human cloning exists; indeed, there are a diversity of opinions within each individual religious tradition. For an overview of the evaluation of cloning by the main religious groups see, for example, Cole-Turner (1997) and Walters (2004). For a specifically Jewish perspective on cloning, see, for example, Lipschutz (1999), for an Islamic perspective, Sadeghi (2007) and for a Catholic perspective, Doerflinger (1999).

Read the original post: Cloning (Stanford Encyclopedia of Philosophy)