Gene therapy – Wikipedia

Gene therapy is the therapeutic delivery of nucleic acid polymers into a patient’s cells as a drug to treat disease.[1] The first attempt at modifying human DNA was performed in 1980 by Martin Cline, but the first successful and approved[by whom?] nuclear gene transfer in humans was performed in May 1989.[2] The first therapeutic use of gene transfer as well as the first direct insertion of human DNA into the nuclear genome was performed by French Anderson in a trial starting in September 1990.

Between 1989 and February 2016, over 2,300 clinical trials had been conducted, more than half of them in phase I.[3]

Not all medical procedures that introduce alterations to a patient’s genetic makeup can be considered gene therapy. Bone marrow transplantation and organ transplants in general have been found to introduce foreign DNA into patients.[4] Gene therapy is defined by the precision of the procedure and the intention of direct therapeutic effects.

Gene therapy was conceptualized in 1972, by authors who urged caution before commencing human gene therapy studies.

The first attempt, an unsuccessful one, at gene therapy (as well as the first case of medical transfer of foreign genes into humans not counting organ transplantation) was performed by Martin Cline on 10 July 1980.[5][6] Cline claimed that one of the genes in his patients was active six months later, though he never published this data or had it verified[7] and even if he is correct, it’s unlikely it produced any significant beneficial effects treating beta-thalassemia.[8]

After extensive research on animals throughout the 1980s and a 1989 bacterial gene tagging trial on humans, the first gene therapy widely accepted as a success was demonstrated in a trial that started on September 14, 1990, when Ashi DeSilva was treated for ADA-SCID.[9]

The first somatic treatment that produced a permanent genetic change was performed in 1993.[10]

This procedure was referred to sensationally and somewhat inaccurately in the media as a “three parent baby”, though mtDNA is not the primary human genome and has little effect on an organism’s individual characteristics beyond powering its cells.

Gene therapy is a way to fix a genetic problem at its source. The polymers are either translated into proteins, interfere with target gene expression, or possibly correct genetic mutations.

The most common form uses DNA that encodes a functional, therapeutic gene to replace a mutated gene. The polymer molecule is packaged within a “vector”, which carries the molecule inside cells.

Early clinical failures led to dismissals of gene therapy. Clinical successes since 2006 regained researchers’ attention, although as of 2014, it was still largely an experimental technique.[11] These include treatment of the retinal diseases Leber’s congenital amaurosis[12][13][14][15] and choroideremia,[16] X-linked SCID,[17] ADA-SCID,[18][19] adrenoleukodystrophy,[20] chronic lymphocytic leukemia (CLL),[21] acute lymphocytic leukemia (ALL),[22] multiple myeloma,[23] haemophilia[19] and Parkinson’s disease.[24] Between 2013 and April 2014, US companies invested over $600 million in the field.[25]

The first commercial gene therapy, Gendicine, was approved in China in 2003 for the treatment of certain cancers.[26] In 2011 Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia.[27] In 2012 Glybera, a treatment for a rare inherited disorder, became the first treatment to be approved for clinical use in either Europe or the United States after its endorsement by the European Commission.[11][28]

Following early advances in genetic engineering of bacteria, cells, and small animals, scientists started considering how to apply it to medicine. Two main approaches were considered: replacing or disrupting defective genes.[29] Scientists focused on diseases caused by single-gene defects, such as cystic fibrosis, haemophilia, muscular dystrophy, thalassemia and sickle cell anemia. Glybera treats one such disease, caused by a defect in lipoprotein lipase.[28]

DNA must be administered, reach the damaged cells, enter the cell and express/disrupt a protein.[30] Multiple delivery techniques have been explored. The initial approach incorporated DNA into an engineered virus to deliver the DNA into a chromosome.[31][32] Naked DNA approaches have also been explored, especially in the context of vaccine development.[33]

Generally, efforts focused on administering a gene that causes a needed protein to be expressed. More recently, increased understanding of nuclease function has led to more direct DNA editing, using techniques such as zinc finger nucleases and CRISPR. The vector incorporates genes into chromosomes. The expressed nucleases then knock out and replace genes in the chromosome. As of 2014 these approaches involve removing cells from patients, editing a chromosome and returning the transformed cells to patients.[34]

Gene editing is a potential approach to alter the human genome to treat genetic diseases,[35] viral diseases,[36] and cancer.[37] As of 2016 these approaches were still years from being medicine.[38][39]

Gene therapy may be classified into two types:

In somatic cell gene therapy (SCGT), the therapeutic genes are transferred into any cell other than a gamete, germ cell, gametocyte or undifferentiated stem cell. Any such modifications affect the individual patient only, and are not inherited by offspring. Somatic gene therapy represents mainstream basic and clinical research, in which therapeutic DNA (either integrated in the genome or as an external episome or plasmid) is used to treat disease.

Over 600 clinical trials utilizing SCGT are underway in the US. Most focus on severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia and cystic fibrosis. Such single gene disorders are good candidates for somatic cell therapy. The complete correction of a genetic disorder or the replacement of multiple genes is not yet possible. Only a few of the trials are in the advanced stages.[40]

In germline gene therapy (GGT), germ cells (sperm or eggs) are modified by the introduction of functional genes into their genomes. Modifying a germ cell causes all the organism’s cells to contain the modified gene. The change is therefore heritable and passed on to later generations. Australia, Canada, Germany, Israel, Switzerland and the Netherlands[41] prohibit GGT for application in human beings, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations[41] and higher risks versus SCGT.[42] The US has no federal controls specifically addressing human genetic modification (beyond FDA regulations for therapies in general).[41][43][44][45]

The delivery of DNA into cells can be accomplished by multiple methods. The two major classes are recombinant viruses (sometimes called biological nanoparticles or viral vectors) and naked DNA or DNA complexes (non-viral methods).

In order to replicate, viruses introduce their genetic material into the host cell, tricking the host’s cellular machinery into using it as blueprints for viral proteins. Scientists exploit this by substituting a virus’s genetic material with therapeutic DNA. (The term ‘DNA’ may be an oversimplification, as some viruses contain RNA, and gene therapy could take this form as well.) A number of viruses have been used for human gene therapy, including retrovirus, adenovirus, lentivirus, herpes simplex, vaccinia and adeno-associated virus.[3] Like the genetic material (DNA or RNA) in viruses, therapeutic DNA can be designed to simply serve as a temporary blueprint that is degraded naturally or (at least theoretically) to enter the host’s genome, becoming a permanent part of the host’s DNA in infected cells.

Non-viral methods present certain advantages over viral methods, such as large scale production and low host immunogenicity. However, non-viral methods initially produced lower levels of transfection and gene expression, and thus lower therapeutic efficacy. Later technology remedied this deficiency[citation needed].

Methods for non-viral gene therapy include the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles.

Some problems remain unsolved.

Three patients’ deaths have been reported in gene therapy trials, putting the field under close scrutiny. The first was that of Jesse Gelsinger in 1999.[52] One X-SCID patient died of leukemia in 2003.[9] In 2007, a rheumatoid arthritis patient died from an infection; the subsequent investigation concluded that the death was not related to gene therapy.[53]

In 1972 Friedmann and Roblin authored a paper in Science titled “Gene therapy for human genetic disease?”[54] Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those who suffer from genetic defects.[55]

In 1984 a retrovirus vector system was designed that could efficiently insert foreign genes into mammalian chromosomes.[56]

The first approved gene therapy clinical research in the US took place on 14 September 1990, at the National Institutes of Health (NIH), under the direction of William French Anderson.[57] Four-year-old Ashanti DeSilva received treatment for a genetic defect that left her with ADA-SCID, a severe immune system deficiency. The effects were temporary, but successful.[58]

Cancer gene therapy was introduced in 1992/93 (Trojan et al. 1993).[59] The treatment of glioblastoma multiforme, a malignant brain tumor whose outcome is always fatal, was carried out using a vector expressing antisense IGF-I RNA (clinical trial approved by the NIH, protocol no. 1602, and by the FDA in 1994). This therapy also represents the beginning of cancer immunogene therapy, a treatment which proved effective due to the anti-tumor mechanism of IGF-I antisense, which is related to strong immune and apoptotic phenomena.

In 1992 Claudio Bordignon, working at the Vita-Salute San Raffaele University, performed the first gene therapy procedure using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases.[60] In 2002 this work led to the publication of the first successful gene therapy treatment for adenosine deaminase deficiency (SCID). The success of a multi-center trial for treating children with SCID (severe combined immune deficiency or “bubble boy” disease) between 2000 and 2002 was questioned when two of the ten children treated at the trial’s Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the US, the United Kingdom, France, Italy and Germany.[61]

In 1993 Andrew Gobea was born with SCID following prenatal genetic screening. Blood was removed from his mother’s placenta and umbilical cord immediately after birth, to acquire stem cells. The allele that codes for adenosine deaminase (ADA) was obtained and inserted into a retrovirus. Retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cell chromosomes. Stem cells containing the working ADA gene were injected into Andrew’s blood. Injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells), produced by stem cells, made ADA enzymes using the ADA gene. After four years more treatment was needed.[citation needed]

Jesse Gelsinger’s death in 1999 impeded gene therapy research in the US.[62][63] As a result, the FDA suspended several clinical trials pending the reevaluation of ethical and procedural practices.[64]

The modified cancer gene therapy strategy of antisense IGF-I RNA (NIH protocol no. 1602)[65] using the antisense/triple-helix anti-IGF-I approach was registered in 2002 in the Wiley gene therapy clinical trial registry (nos. 635 and 636). The approach has shown promising results in the treatment of six different malignant tumors: glioblastoma and cancers of the liver, colon, prostate, uterus and ovary (Collaborative NATO Science Programme on Gene Therapy, USA, France, Poland, no. LST 980517, conducted by J. Trojan) (Trojan et al., 2012). This antigene antisense/triple-helix therapy has proven effective because it blocks IGF-I expression simultaneously at the translational and transcriptional levels, strengthening anti-tumor immune and apoptotic phenomena.

Sickle-cell disease can be treated in mice.[66] The mice, which have essentially the same defect that causes human cases, were treated with a viral vector that induced production of fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF temporarily alleviates sickle cell symptoms. The researchers demonstrated this treatment to be a more permanent means to increase therapeutic HbF production.[67]

A new gene therapy approach repaired errors in messenger RNA derived from defective genes. This technique has the potential to treat thalassaemia, cystic fibrosis and some cancers.[68]

Researchers created liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane.[69]

In 2003 a research team inserted genes into the brain for the first time. They used liposomes coated in a polymer called polyethylene glycol, which, unlike viral vectors, are small enough to cross the blood-brain barrier.[70]

Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.[71]
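The sequence-matching idea lends itself to a simple illustration: the guide (antisense) strand of an siRNA is essentially the reverse complement of a stretch of the target mRNA. The Python sketch below shows only that matching step; the 21-nucleotide target is made up, and real siRNA design also weighs strand thermodynamics, position within the transcript and off-target matches.

```python
# Minimal sketch of siRNA sequence matching. The target sequence is
# hypothetical; this ignores everything except base-pairing.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def sirna_guide(mrna_target: str) -> str:
    """Return the antisense (guide) strand for a stretch of target mRNA,
    i.e. its reverse complement in RNA bases."""
    return "".join(COMPLEMENT[base] for base in reversed(mrna_target))

target = "AUGGCUAGCUCGAUCGAUAGC"  # made-up 21-nt stretch of a faulty gene's mRNA
print(sirna_guide(target))  # pairs base-for-base with the target, marking it for degradation
```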

Gendicine is a cancer gene therapy that delivers the tumor suppressor gene p53 using an engineered adenovirus. In 2003, it was approved in China for the treatment of head and neck squamous cell carcinoma.[26]

In March researchers announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and damages the immune system. The study is the first to show that gene therapy can treat the myeloid system.[72]

In May a team reported a way to prevent the immune system from rejecting a newly delivered gene.[73] Similar to organ transplantation, gene therapy has been plagued by this problem. The immune system normally recognizes the new gene as foreign and rejects the cells carrying it. The research utilized a newly uncovered network of genes regulated by molecules known as microRNAs. This natural function selectively obscured their therapeutic gene in immune system cells and protected it from discovery. Mice infected with the gene containing an immune-cell microRNA target sequence did not reject the gene.

In August scientists successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells.[74]

In November researchers reported on the use of VRX496, a gene-based immunotherapy for the treatment of HIV that uses a lentiviral vector to deliver an antisense gene against the HIV envelope. In a phase I clinical trial, five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens were treated. A single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. All five patients had stable or increased immune response to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in a US human clinical trial.[75][76]

In May researchers announced the first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007.[77]

Leber’s congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in April.[12] Delivery of recombinant adeno-associated virus (AAV) carrying RPE65 yielded positive results. In May two more groups reported positive results in independent clinical trials using gene therapy to treat the condition. In all three clinical trials, patients recovered functional vision without apparent side-effects.[12][13][14][15]

In September researchers were able to give trichromatic vision to squirrel monkeys.[78] In November 2009, researchers halted a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.[79]

An April paper reported that gene therapy addressed achromatopsia (color blindness) in dogs by targeting cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young specimens. The therapy was less efficient for older dogs.[80]

In September it was announced that an 18-year-old male patient in France with beta-thalassemia major had been successfully treated.[81] Beta-thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions.[82] The technique used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007.[83] The patient’s haemoglobin levels were stable at 9 to 10 g/dL. About a third of the hemoglobin contained the form introduced by the viral vector and blood transfusions were not needed.[83][84] Further clinical trials were planned.[85] Bone marrow transplants are the only cure for thalassemia, but 75% of patients do not find a matching donor.[84]

Cancer immunogene therapy using a modified antigene, antisense/triple-helix approach was introduced in South America in 2010/11 at La Sabana University, Bogotá (Ethical Committee 14.12.2010, no. P-004-10). Considering the ethical aspects of gene diagnostics and gene therapy targeting IGF-I, tumors expressing IGF-I, i.e. lung and epidermis cancers, were treated (Trojan et al. 2016).[86][87]

In 2007 and 2008, a man was cured of HIV by repeated hematopoietic stem cell transplantation (see also allogeneic stem cell transplantation, allogeneic bone marrow transplantation, allotransplantation) from a donor with a double delta-32 mutation, which disables the CCR5 receptor. This cure was accepted by the medical community in 2011.[88] It required complete ablation of the existing bone marrow, which is very debilitating.

In August two of three subjects of a pilot study were confirmed to have been cured from chronic lymphocytic leukemia (CLL). The therapy used genetically modified T cells to attack cells that expressed the CD19 protein to fight the disease.[21] In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free.[89]

Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction.[90][91]

In 2011 Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia; it delivers the gene encoding for VEGF.[92][27] Neovasculgen is a plasmid encoding the CMV promoter and the 165 amino acid form of VEGF.[93][94]

In July the FDA approved a Phase 1 clinical trial on thalassemia major patients in the US, enrolling 10 participants.[95] The study was expected to continue until 2015.[96]

In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment used Alipogene tiparvovec (Glybera) to compensate for lipoprotein lipase deficiency, which can cause severe pancreatitis.[97] The recommendation was endorsed by the European Commission in November 2012[11][28][98][99] and commercial rollout began in late 2014.[100]

In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission “or very close to it” three months after being injected with a treatment involving genetically engineered T cells to target proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells.[23]

In March researchers reported that three of five subjects who had acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells with the CD19 protein on their surface, i.e. all B-cells, cancerous or not. The researchers believed that the patients’ immune systems would make normal T-cells and B-cells after a couple of months. They were also given bone marrow. One patient relapsed and died and one died of a blood clot unrelated to the disease.[22]

Following encouraging Phase 1 trials, in April, researchers announced they were starting Phase 2 clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients[101] at several hospitals to combat heart disease. The therapy was designed to increase the levels of SERCA2, a protein in heart muscles, improving muscle function.[102] The FDA granted this a Breakthrough Therapy Designation to accelerate the trial and approval process.[103] In 2016 it was reported that no improvement was found from the CUPID 2 trial.[104]

In July researchers reported promising results for six children with two severe hereditary diseases who had been treated with a partially deactivated lentivirus to replace a faulty gene, with follow-up of 7 to 32 months. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills.[105] The other children had Wiskott-Aldrich syndrome, which leaves them open to infection, autoimmune diseases and cancer.[106] Follow-up trials with gene therapy on another six children with Wiskott-Aldrich syndrome were also reported as promising.[107][108]

In October researchers reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children were making progress.[19] In 2014 a further 18 children with ADA-SCID were cured by gene therapy.[109] ADA-SCID children have no functioning immune system and are sometimes known as “bubble children.”[19]

Also in October researchers reported that they had treated six haemophilia sufferers in early 2011 using an adeno-associated virus. Over two years later all six were producing clotting factor.[19][110]

Data from three trials on topical cystic fibrosis transmembrane conductance regulator gene therapy were reported to not support its clinical use as a mist inhaled into the lungs to treat cystic fibrosis patients with lung infections.[111]

In January researchers reported that six choroideremia patients had been treated with adeno-associated virus with a copy of REP1. Over a six-month to two-year period all had improved their sight.[112][113] By 2016, 32 patients had been treated with positive results and researchers were hopeful the treatment would be long-lasting.[16] Choroideremia is an inherited genetic eye disease with no approved treatment, leading to loss of sight.

In March researchers reported that 12 HIV patients had been treated since 2009 in a trial with a genetically engineered virus with a rare mutation (CCR5 deficiency) known to protect against HIV, with promising results.[114][115]

Clinical trials of gene therapy for sickle cell disease were started in 2014[116][117] although one review failed to find any such trials.[118]

In February LentiGlobin BB305, a gene therapy treatment undergoing clinical trials for treatment of beta thalassemia gained FDA “breakthrough” status after several patients were able to forgo the frequent blood transfusions usually required to treat the disease.[119]

In March researchers delivered a recombinant gene encoding a broadly neutralizing antibody into monkeys infected with simian HIV; the monkeys’ cells produced the antibody, which cleared them of HIV. The technique is named immunoprophylaxis by gene transfer (IGT). Animal tests for antibodies to ebola, malaria, influenza and hepatitis are underway.[120][121]

In March scientists, including an inventor of CRISPR, urged a worldwide moratorium on germline gene therapy, writing that scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans until the full implications are discussed among scientific and governmental organizations.[122][123][124][125]

Also in 2015 Glybera was approved for the German market.[126]

In October, researchers announced that they had treated a baby girl, Layla Richards, with an experimental treatment using donor T-cells genetically engineered to attack cancer cells. Two months after the treatment she was still free of her cancer (a highly aggressive form of acute lymphoblastic leukaemia [ALL]). Children with highly aggressive ALL normally have a very poor prognosis and Layla’s disease had been regarded as terminal before the treatment.[127]

In December, scientists of major world academies called for a moratorium on inheritable human genome edits, including those related to CRISPR-Cas9 technologies,[128] but said that basic research, including embryo gene editing, should continue.[129]

In April the Committee for Medicinal Products for Human Use of the European Medicines Agency endorsed a gene therapy treatment called Strimvelis and recommended it be approved.[130][131] This treats children born with ADA-SCID and who have no functioning immune system – sometimes called the “bubble baby” disease. This would be the second gene therapy treatment to be approved in Europe.[132]

Speculated uses for gene therapy include:

Gene therapy techniques have the potential to provide alternative treatments for those with infertility. Recently, successful experimentation on mice has shown that fertility can be restored by using the gene-editing method CRISPR.[133] Spermatogonial stem cells from another organism were transplanted into the testes of an infertile male mouse. The stem cells re-established spermatogenesis and fertility.[134]

Athletes might adopt gene therapy technologies to improve their performance.[135] Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic/enhancement purposes compromises the ethical foundations of medicine and sports.[136]

Genetic engineering could be used to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence. Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases.[137][138][139] For adults, genetic engineering could be seen as another enhancement technique to add to diet, exercise, education, cosmetics and plastic surgery.[140][141] Another theorist claims that moral concerns limit but do not prohibit germline engineering.[142]

Possible regulatory schemes include a complete ban, provision to everyone, or professional self-regulation. The American Medical Association’s Council on Ethical and Judicial Affairs stated that “genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics.”[143]

As early in the history of biotechnology as 1990, there have been scientists opposed to attempts to modify the human germline using these new tools,[144] and such concerns have continued as technology progressed.[145] With the advent of new techniques like CRISPR, in March 2015 a group of scientists urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited.[122][123][124][125] In April 2015, researchers sparked controversy when they reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.[133][146]

Regulations covering genetic modification are part of general guidelines about human-involved biomedical research.

The Helsinki Declaration (Ethical Principles for Medical Research Involving Human Subjects) was amended by the World Medical Association’s General Assembly in 2008. This document provides principles physicians and researchers must consider when involving humans as research subjects. The Statement on Gene Therapy Research initiated by the Human Genome Organization (HUGO) in 2001 provides a legal baseline for all countries. HUGO’s document emphasizes human freedom and adherence to human rights, and offers recommendations for somatic gene therapy, including the importance of recognizing public concerns about such research.[147]

No federal legislation lays out protocols or restrictions about human genetic engineering. This subject is governed by overlapping regulations from local and federal agencies, including the Department of Health and Human Services, the FDA and NIH’s Recombinant DNA Advisory Committee. Researchers seeking federal funds for an investigational new drug application (commonly the case for somatic human genetic engineering) must obey international and federal guidelines for the protection of human subjects.[148]

NIH serves as the main gene therapy regulator for federally funded research. Privately funded research is advised to follow these regulations. NIH provides funding for research that develops or enhances genetic engineering techniques and to evaluate the ethics and quality in current research. The NIH maintains a mandatory registry of human genetic engineering research protocols that includes all federally funded projects.

An NIH advisory committee published a set of guidelines on gene manipulation.[149] The guidelines discuss lab safety as well as human test subjects and various experimental types that involve genetic changes. Several sections specifically pertain to human genetic engineering, including Section III-C-1. This section describes required review processes and other aspects when seeking approval to begin clinical research involving genetic transfer into a human patient.[150] The protocol for a gene therapy clinical trial must be approved by the NIH’s Recombinant DNA Advisory Committee prior to any clinical trial beginning; this is different from any other kind of clinical trial.[149]

As with other kinds of drugs, the FDA regulates the quality and safety of gene therapy products and supervises how these products are used clinically. Therapeutic alteration of the human genome falls under the same regulatory requirements as any other medical treatment. Research involving human subjects, such as clinical trials, must be reviewed and approved by the FDA and an Institutional Review Board.[151][152]

Gene therapy is the basis for the plotline of the film I Am Legend[153] and the TV show Will Gene Therapy Change the Human Race?.[154]


Colonization of the Moon – Wikipedia

“Lunar outpost” redirects here. For NASA’s former plan to construct an outpost between 2019 and 2024, see Lunar outpost (NASA).

The colonization of the Moon is the proposed establishment of permanent human communities or robotic industries[1][2] on the Moon.

Recent indications that water might be present in noteworthy quantities at the lunar poles have renewed interest in the Moon. Polar colonies could also avoid the problem of long lunar nights (about 354 hours,[3] a little more than two weeks) and take advantage of the Sun continuously, at least during the local summer (there is no data for the winter yet).[4]

Permanent human habitation on a planetary body other than the Earth is one of science fiction’s most prevalent themes. As technology has advanced, and concerns about the future of humanity on Earth have increased, the argument that space colonization is an achievable and worthwhile goal has gained momentum.[5][6] Because of its proximity to Earth, the Moon has been seen as the most obvious natural expansion after Earth. Several space tourism startup companies also have near-term projects for tourism on the Moon.

The notion of a lunar colony originated before the Space Age. In 1638 Bishop John Wilkins wrote A Discourse Concerning a New World and Another Planet, in which he predicted a human colony on the Moon.[7] Konstantin Tsiolkovsky (1857–1935), among others, also suggested such a step.[8] From the 1950s onwards, a number of concepts and designs have been suggested by scientists, engineers and others.

In 1954, science-fiction writer Arthur C. Clarke proposed a lunar base of inflatable modules covered in lunar dust for insulation.[9] A spaceship, assembled in low Earth orbit, would launch to the Moon, and astronauts would set up the igloo-like modules and an inflatable radio mast. Subsequent steps would include the establishment of a larger, permanent dome; an algae-based air purifier; a nuclear reactor for the provision of power; and electromagnetic cannons to launch cargo and fuel to interplanetary vessels in space.

In 1959, John S. Rinehart suggested that the safest design would be a structure that could “[float] in a stationary ocean of dust”, since there were, at the time this concept was outlined, theories that there could be mile-deep dust oceans on the Moon.[10] The proposed design consisted of a half-cylinder with half-domes at both ends, with a micrometeoroid shield placed above the base.

Project Horizon was a 1959 study regarding the United States Army’s plan to establish a fort on the Moon by 1967.[11] Heinz-Hermann Koelle, a German rocket engineer of the Army Ballistic Missile Agency (ABMA), led the Project Horizon study. The first landing would be carried out by two “soldier-astronauts” in 1965 and more construction workers would soon follow. Through numerous launches (61 Saturn I and 88 Saturn II), 245 tons of cargo would be transported to the outpost by 1966.

Lunex Project was a US Air Force plan for a manned lunar landing prior to the Apollo Program in 1961. It envisaged a 21-airman underground Air Force base on the Moon by 1968 at a total cost of $7.5 billion.

In 1962, John DeNike and Stanley Zahn published their idea of a sub-surface base located at the Sea of Tranquility.[9] This base would house a crew of 21, in modules placed four meters below the surface, which was believed to provide radiation shielding on par with Earth’s atmosphere. DeNike and Zahn favored nuclear reactors for energy production, because they were more efficient than solar panels, and would also overcome the problems with the long Lunar nights. For the life support system, an algae-based gas exchanger was proposed.

As of 2006, Japan planned to have a Moon base by 2030,[12] and as of 2007, Russia planned to have a Moon base in 2027–32.[13]

In 2007 Jim Burke of the International Space University in France said people should plan to preserve humanity’s culture in the event of a civilization-stopping asteroid impact with Earth. A Lunar Noah’s Ark was proposed.[14] Subsequent planning may be taken up by the International Lunar Exploration Working Group (ILEWG).[15][16][17]

In a January 2012 speech Newt Gingrich, Republican candidate for President of the United States of America, proposed a plan to build a U.S. moon colony by the year 2020.[18][19]

In 2016 Johann-Dietrich Wörner, the new Chief of ESA, proposed the International Moon Village that incorporates 3D printing.[20]

Exploration of the Lunar surface by spacecraft began in 1959 with the Soviet Union’s Luna program. Luna 1 missed the Moon, but Luna 2 made a hard landing (impact) into its surface, and became the first artificial object on an extraterrestrial body. The same year, the Luna 3 mission radioed photographs to Earth of the Moon’s hitherto unseen far side, marking the beginning of a decade-long series of unmanned Lunar explorations.

Responding to the Soviet program of space exploration, US President John F. Kennedy in 1961 told the U.S. Congress on May 25: “I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth.” The same year the Soviet leadership made some of its first public pronouncements about landing a man on the Moon and establishing a Lunar base.

Manned exploration of the lunar surface began in 1968 when the Apollo 8 spacecraft orbited the Moon with three astronauts on board. This was mankind’s first direct view of the far side. The following year, the Apollo 11 Lunar module landed two astronauts on the Moon, proving the ability of humans to travel to the Moon, perform scientific research work there, and bring back sample materials.

Additional missions to the Moon continued this exploration phase. In 1969 the Apollo 12 mission landed next to the Surveyor 3 spacecraft, demonstrating precision landing capability. The use of a manned vehicle on the Moon’s surface was demonstrated in 1971 with the Lunar Rover during Apollo 15. Apollo 16 made the first landing within the rugged Lunar highlands. However, interest in further exploration of the Moon was beginning to wane among the American public. In 1972 Apollo 17 was the final Apollo Lunar mission, and further planned missions were scrapped at the directive of President Nixon. Instead, focus was turned to the Space Shuttle and manned missions in near Earth orbit.

The Soviet manned lunar programs failed to send a manned mission to the Moon. However, in 1966 Luna 9 was the first probe to achieve a soft landing and return close-up shots of the Lunar surface. Luna 16 in 1970 returned the first Soviet Lunar soil samples, while in 1970 and 1973 during the Lunokhod program two robotic rovers landed on the Moon. Lunokhod 1 explored the Lunar surface for 322 days, and Lunokhod 2 operated on the Moon only about four months but covered a third more distance. 1974 saw the end of the Soviet Moonshot, two years after the last American manned landing. Besides the manned landings, an abandoned Soviet moon program included building the moonbase “Zvezda”, which was the first detailed project with developed mockups of expedition vehicles[21] and surface modules.[22]

In the decades following, interest in exploring the Moon faded considerably, and only a few dedicated enthusiasts supported a return. However, evidence of Lunar ice at the poles gathered by NASA’s Clementine (1994) and Lunar Prospector (1998) missions rekindled some discussion,[23][24] as did the potential growth of a Chinese space program that contemplated its own mission to the Moon.[25] Subsequent research suggested that there was far less ice present (if any) than had originally been thought, but that there may still be some usable deposits of hydrogen in other forms.[26] However, in September 2009, the Chandrayaan probe of India, carrying an ISRO instrument, discovered that the Lunar regolith contains 0.1% water by weight, overturning theories that had stood for 40 years.[27]

In 2004, U.S. President George W. Bush called for a plan to return manned missions to the Moon by 2020 (since cancelled; see Constellation program). Propelled by this new initiative, NASA issued a new long-range plan that includes building a base on the Moon as a staging point to Mars. This plan envisions a Lunar outpost at one of the Moon’s poles by 2024 which, if well-sited, might be able to continually harness solar power; at the poles, temperature changes over the course of a Lunar day are also less extreme,[28] and reserves of water and useful minerals may be found nearby.[28] In addition, the European Space Agency has a plan for a permanently manned Lunar base by 2025.[29][30] Russia has also announced similar plans to send a man to the Moon by 2025 and establish a permanent base there several years later.[6]

A Chinese space scientist has said that the People’s Republic of China could be capable of landing a human on the Moon by 2022 (see Chinese Lunar Exploration Program),[31] and Japan and India also have plans for a Lunar base by 2030.[32] Neither of these plans involves permanent residents on the Moon. Instead they call for sortie missions, in some cases followed by extended expeditions to the Lunar base by rotating crew members, as is currently done for the International Space Station.

NASA’s LCROSS/LRO mission had been scheduled to launch in October 2008.[33] The launch was delayed until 18 June 2009,[34] resulting in LCROSS’s impact with the Moon at 11:30 UT on 9 October 2009.[35][36] The purpose was to prepare for future Lunar exploration.

On September 24, 2009 NASA announced the discovery of water on the Moon. The discovery was made by three instruments on board Chandrayaan-1. These were the ISRO’s Moon Impact Probe (MIP), the Moon Mineralogy Mapper (M3) and Mini-Sar, belonging to NASA.[37]

On November 13, 2009 NASA announced that the LCROSS mission had discovered large quantities of water ice on the Moon around the LCROSS impact site at Cabeus. Robert Zubrin, president of the Mars Society, relativized the term ‘large’: “The 30 m crater ejected by the probe contained 10 million kilograms of regolith. Within this ejecta, an estimated 100 kg of water was detected. That represents a proportion of ten parts per million, which is a lower water concentration than that found in the soil of the driest deserts of the Earth. In contrast, we have found continent sized regions on Mars, which are 600,000 parts per million, or 60% water by weight.”[38] Although the Moon is very dry on the whole, the spot where the LCROSS impactor hit was chosen for a high concentration of water ice. Dr. Zubrin’s computations are not a sound basis for estimating the percentage of water in the regolith at that site. Researchers with expertise in that area estimated that the regolith at the impact site contained 5.6 ± 2.9% water ice, and also noted the presence of other volatile substances. Hydrocarbons, material containing sulfur, carbon dioxide, carbon monoxide, methane and ammonia were present.[39]
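Zubrin’s figure can be checked with back-of-envelope arithmetic from the quantities he quotes, and contrasted with the instrument team’s estimate; a quick sketch:

```python
# Back-of-envelope check of the water concentrations discussed above.
ejecta_mass_kg = 10_000_000   # regolith ejected by the LCROSS impact (10 million kg)
water_mass_kg = 100           # water detected within that ejecta

print(water_mass_kg / ejecta_mass_kg * 1_000_000)  # 10.0 ppm, Zubrin's figure

# The instrument team's later estimate for the impact site itself,
# 5.6% water ice by weight, is several thousand times higher:
print(0.056 * 1_000_000)  # 56000.0 ppm
```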

In March 2010, NASA reported that the findings of its mini-SAR radar aboard Chandrayaan-1 were consistent with ice deposits at the Moon’s north pole. It is estimated there is at least 600 million tons of ice at the north pole in sheets of relatively pure ice at least a couple of meters thick.[40]

In March 2014, researchers who had previously published reports on possible abundance of water on the Moon, reported new findings that refined their predictions substantially lower.[41]

Placing a colony on a natural body would provide an ample source of material for construction and other uses in space, including shielding from cosmic radiation. The energy required to send objects from the Moon to space is much less than from Earth to space. This could allow the Moon to serve as a source of construction materials within cis-lunar space. Rockets launched from the Moon would require less locally produced propellant than rockets launched from Earth. Some proposals include using electric acceleration devices (mass drivers) to propel objects off the Moon without building rockets. Others have proposed momentum exchange tethers (see below). Furthermore, the Moon does have some gravity, which experience to date indicates may be vital for fetal development and long-term human health.[42][43] Whether the Moon’s gravity (roughly one sixth of Earth’s) is adequate for this purpose, however, is uncertain.
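The launch-energy advantage mentioned above can be made concrete by comparing the kinetic energy per kilogram needed to reach escape velocity from each body. This is a deliberately rough sketch: it ignores rocket-equation propellant mass, gravity losses and Earth’s atmospheric drag, all of which widen the gap further in the Moon’s favor.

```python
# Specific kinetic energy at escape velocity, 0.5 * v^2, for Earth and Moon.
V_ESC_EARTH = 11_186.0  # m/s
V_ESC_MOON = 2_380.0    # m/s

e_earth = 0.5 * V_ESC_EARTH**2 / 1e6  # ~62.6 MJ per kg
e_moon = 0.5 * V_ESC_MOON**2 / 1e6    # ~2.8 MJ per kg
print(f"Earth {e_earth:.1f} MJ/kg, Moon {e_moon:.1f} MJ/kg, ratio {e_earth / e_moon:.0f}x")
```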

In addition, the Moon is the closest large body in the Solar System to Earth. While some Earth-crosser asteroids occasionally pass closer, the Moon’s distance is consistently within a small range close to 384,400 km. This proximity has several advantages.
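One such advantage is the short and nearly constant communications delay. A rough check from the distance quoted above, with Mars at closest approach added for scale (the Mars distance is approximate):

```python
# One-way light-time from Earth, using the mean Earth-Moon distance above.
C_KM_S = 299_792.458       # speed of light, km/s
MOON_KM = 384_400
MARS_MIN_KM = 54_600_000   # approximate closest Earth-Mars distance

print(f"Moon: {MOON_KM / C_KM_S:.2f} s one way")                # ~1.28 s
print(f"Mars at closest: {MARS_MIN_KM / C_KM_S / 60:.1f} min")  # ~3 min
```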

There are several disadvantages to the Moon as a colony site.

There are three criteria that a Lunar outpost should meet.[citation needed]

While a colony might be located anywhere, potential locations for a Lunar colony fall into three broad categories.

There are two reasons why the north pole and south pole of the Moon might be attractive locations for a human colony. First, there is evidence that water may be present in some continuously shaded areas near the poles.[62] Second, the Moon’s axis of rotation is sufficiently close to being perpendicular to the ecliptic plane that the radius of the Moon’s polar circles is less than 50 km. Power collection stations could therefore be plausibly located so that at least one is exposed to sunlight at all times, thus making it possible to power polar colonies almost exclusively with solar energy. Solar power would be unavailable only during a lunar eclipse, but these events are relatively brief and absolutely predictable. Any such colony would therefore require a reserve energy supply that could temporarily sustain a colony during lunar eclipses or in the event of any incident or malfunction affecting solar power collection. Hydrogen fuel cells would be ideal for this purpose, since the hydrogen needed could be sourced locally using the Moon’s polar water and surplus solar power. Moreover, due to the Moon’s uneven surface some sites have nearly continuous sunlight. For example, Malapert mountain, located near the Shackleton crater at the Lunar south pole, offers several advantages as a site.
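The sub-50 km figure follows directly from the Moon’s small axial tilt: the polar circle extends from each pole by roughly the tilt angle, converted to surface distance. A quick check, assuming the Moon’s mean radius and a tilt of about 1.54° relative to the ecliptic:

```python
import math

# Surface radius of the Moon's polar circles: about R * tilt (tilt in radians).
R_MOON_KM = 1737.4  # mean lunar radius
TILT_DEG = 1.54     # axial tilt relative to the ecliptic

print(f"{R_MOON_KM * math.radians(TILT_DEG):.0f} km")  # ~47 km, under the 50 km cited
```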

NASA chose to use a south-polar site for the Lunar outpost reference design in the Exploration Systems Architecture Study chapter on Lunar Architecture.[64]

At the north pole, the rim of Peary Crater has been proposed as a favorable location for a base.[65] Examination of images from the Clementine mission appears to show that parts of the crater rim are permanently illuminated by sunlight (except during Lunar eclipses).[65] As a result, the temperature conditions are expected to remain very stable at this location, averaging −50 °C (−58 °F).[65] This is comparable to winter conditions in Earth’s Poles of Cold in Siberia and Antarctica. The interior of Peary Crater may also harbor hydrogen deposits.[65]

A 1994[66] bistatic radar experiment performed during the Clementine mission suggested the presence of water ice around the south pole.[23][67] The Lunar Prospector spacecraft reported enhanced hydrogen abundances at the south pole and even more at the north pole, in 2008.[68] On the other hand, results reported using the Arecibo radio telescope have been interpreted by some to indicate that the anomalous Clementine radar signatures are not indicative of ice, but surface roughness.[69] This interpretation, however, is not universally agreed upon.[70]

A potential limitation of the polar regions is that the inflow of solar wind can create an electrical charge on the leeward side of crater rims. The resulting voltage difference can affect electrical equipment, change surface chemistry, erode surfaces and levitate Lunar dust.[71]

The Lunar equatorial regions are likely to have higher concentrations of helium-3 (rare on Earth but much sought after for use in nuclear fusion research) because the solar wind has a higher angle of incidence.[72] They also enjoy an advantage in extra-Lunar traffic: The rotation advantage for launching material is slight due to the Moon’s slow rotation, but the corresponding orbit coincides with the ecliptic, nearly coincides with the Lunar orbit around Earth, and nearly coincides with the equatorial plane of Earth.

Several probes have landed in the Oceanus Procellarum area. There are many areas and features that could be subject to long-term study, such as the Reiner Gamma anomaly and the dark-floored Grimaldi crater.

The Lunar far side lacks direct communication with Earth, though a communication satellite at the L2 Lagrangian point, or a network of orbiting satellites, could enable communication between the far side of the Moon and Earth.[73] The far side is also a good location for a large radio telescope because it is well shielded from the Earth.[74] Due to the lack of atmosphere, the location is also suitable for an array of optical telescopes, similar to the Very Large Telescope in Chile.[44] To date, there has been no ground exploration of the far side.

Scientists have estimated that the highest concentrations of helium-3 will be found in the maria on the far side, as well as near side areas containing concentrations of the titanium-based mineral ilmenite. On the near side the Earth and its magnetic field partially shields the surface from the solar wind during each orbit. But the far side is fully exposed, and thus should receive a somewhat greater proportion of the ion stream.[75]

Lunar lava tubes are a potential location for constructing a Lunar base. Any intact lava tube on the Moon could serve as a shelter from the severe environment of the Lunar surface, with its frequent meteorite impacts, high-energy ultra-violet radiation and energetic particles, and extreme diurnal temperature variations. Lava tubes provide ideal positions for shelter because of their access to nearby resources. They also have proven themselves as a reliable structure, having withstood the test of time for billions of years.

An underground colony would escape the extremes of temperature on the Moon’s surface. The average temperature on the surface of the Moon is about −5 °C. The day period (about 354 hours) has an average temperature of about 107 °C (225 °F), although it can rise as high as 123 °C (253 °F). The night period (also 354 hours) has an average temperature of about −153 °C (−243 °F).[76] Underground, both periods would be around −23 °C (−9 °F), and humans could install ordinary heaters.[77]

One such lava tube was discovered in early 2009.[78]

The central peaks of large lunar craters may contain material that rose from as far as 19 kilometers beneath the surface when the peaks formed by rebound of the compressed rock under the crater. Material moved from the interior of craters is piled in their rims.[79] These and other processes may make novel concentrations of minerals accessible to future prospectors from lunar colonies.

A colony in lunar orbit would avoid the extreme temperature swings of the Moon’s surface. Since the orbital period in low-lunar orbit is only about two hours, heat would only radiate away from the colony for a short period of time. At the Lagrangian points one and two, the thermal environment would be even more stable as the Sun would be almost continuously visible. This increased solar duration would allow for an almost constant supply of power. Additionally, the colony could be made to spin as has been examined with designs similar to the O’Neill cylinder so as to provide Earth-like gravity. Various lunar orbits are possible such as a Lissajous orbit or a halo orbit. Due to the Moon’s lumpy gravity, there exist only a small number of possible orbital inclinations for low lunar orbits. A satellite in such a frozen orbit could be at an inclination of 27°, 50°, 76°, or 86°.
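The two-hour orbital period quoted above follows from Kepler’s third law, T = 2π√(a³/μ). A minimal check for a representative 100 km altitude (the altitude is an assumed example):

```python
import math

# Period of a circular low lunar orbit from Kepler's third law.
MU_MOON = 4902.8   # Moon's gravitational parameter, km^3/s^2
R_MOON = 1737.4    # km
ALTITUDE = 100.0   # km, assumed example orbit

a = R_MOON + ALTITUDE                               # semi-major axis of a circular orbit
period_s = 2 * math.pi * math.sqrt(a**3 / MU_MOON)
print(f"{period_s / 3600:.2f} h")                   # ~1.96 h
```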

There have been numerous proposals regarding habitat modules. The designs have evolved throughout the years as mankind’s knowledge about the Moon has grown, and as the technological possibilities have changed. The proposed habitats range from the actual spacecraft landers or their used fuel tanks, to inflatable modules of various shapes. Some hazards of the Lunar environment such as sharp temperature shifts, lack of atmosphere or magnetic field (which means higher levels of radiation and micrometeoroids) and long nights, were unknown early on. Proposals have shifted as these hazards were recognized and taken into consideration.

Some suggest building the Lunar colony underground, which would give protection from radiation and micrometeoroids. This would also greatly reduce the risk of air leakage, as the colony would be fully sealed from the outside except for a few exits to the surface.

The construction of an underground base would probably be more complex; one of the first machines from Earth might be a remote-controlled excavating machine. Once created, some sort of hardening would be necessary to avoid collapse, possibly a spray-on concrete-like substance made from available materials.[80] A more porous insulating material also made in-situ could then be applied. Rowley & Neudecker have suggested “melt-as-you-go” machines that would leave glassy internal surfaces.[81] Mining methods such as the room and pillar might also be used. Inflatable self-sealing fabric habitats might then be put in place to retain air. Eventually an underground city can be constructed. Farms set up underground would need artificial sunlight. As an alternative to excavating, a lava tube could be covered and insulated, thus solving the problem of radiation exposure.

A possibly easier solution would be to build the Lunar base on the surface, and cover the modules with Lunar soil. The Lunar regolith is composed of a unique blend of silica and iron-containing compounds that may be fused into a glass-like solid using microwave energy.[82] Blacic has studied the mechanical properties of lunar glass and has shown that it is a promising material for making rigid structures, if coated with metal to keep moisture out.[83] This may allow for the use of “Lunar bricks” in structural designs, or the vitrification of loose dirt to form a hard, ceramic crust.

A Lunar base built on the surface would need to be protected by improved radiation and micrometeoroid shielding. Building the Lunar base inside a deep crater would provide at least partial shielding against radiation and micrometeoroids. Artificial magnetic fields have been proposed[84][85] as a means to provide radiation shielding for long range deep space manned missions, and it might be possible to use similar technology on a Lunar colony. Some regions on the Moon possess strong local magnetic fields that might partially mitigate exposure to charged solar and galactic particles.[86]

In a turn from the usual engineer-designed lunar habitats, London-based Foster + Partners architectural firm proposed a building construction 3D-printer technology in January 2013 that would use Lunar regolith raw materials to produce Lunar building structures while using enclosed inflatable habitats for housing the human occupants inside the hard-shell Lunar structures. Overall, these habitats would require only ten percent of the structure mass to be transported from Earth, while using local Lunar materials for the other 90 percent of the structure mass.[87] “Printed” Lunar soil will provide both “radiation and temperature insulation. Inside, a lightweight pressurized inflatable with the same dome shape will be the living environment for the first human Moon settlers.”[87] The building technology will include mixing Lunar material with magnesium oxide, which will turn the “moonstuff into a pulp that can be sprayed to form the block” when a binding salt is applied that “converts [this] material into a stone-like solid.”[87] Terrestrial versions of this 3D-printing building technology are already printing 2 metres (6 ft 7 in) of building material per hour with the next-generation printers capable of 3.5 metres (11 ft) per hour, sufficient to complete a building in a week.[87]

In 2010, The Moon Capital Competition offered a prize for a design of a Lunar habitat intended to be an underground international commercial center capable of supporting a residential staff of 60 people and their families. The Moon Capital is intended to be self-sufficient with respect to food and other material required for life support. Prize money was provided primarily by the Boston Society of Architects, Google Lunar X Prize and The New England Council of the American Institute of Aeronautics and Astronautics.[88]

On January 31, 2013, the ESA working with an independent architectural firm, tested a 3D-printed structure that could be constructed of lunar regolith for use as a Moon base.[89]

A nuclear fission reactor might fulfill most of a Moon base’s power requirements.[90] With the help of fission reactors, one could overcome the difficulty of the 354-hour Lunar night. According to NASA, a nuclear fission power station could generate a steady 40 kilowatts, equivalent to the demand of about eight houses on Earth.[90] An artist’s concept of such a station published by NASA envisages the reactor being buried below the Moon’s surface to shield it from its surroundings; radiators would extend into space from a tower-like generator section reaching above the surface over the reactor, sending away any heat energy that may be left over.[91]

Radioisotope thermoelectric generators could be used as backup and emergency power sources for solar powered colonies.

One specific development program in the 2000s was the Fission Surface Power (FSP) project of NASA and DOE, a fission power system focused on “developing and demonstrating a nominal 40 kWe power system to support human exploration missions. The FSP system concept uses conventional low-temperature stainless steel, liquid metal-cooled reactor technology coupled with Stirling power conversion.” As of 2010, significant component hardware testing had been completed successfully, and a non-nuclear system demonstration test was being fabricated.[92][needs update]

Solar energy is a possible source of power for a Lunar base. Many of the raw materials needed for solar panel production can be extracted on site. However, the long Lunar night (354 hours) is a drawback for solar power on the Moon’s surface. This might be solved by building several power plants, so that at least one of them is always in daylight. Another possibility would be to build such a power plant where there is constant or near-constant sunlight, such as at the Malapert mountain near the Lunar south pole, or on the rim of Peary crater near the north pole. A third possibility would be to leave the panels in orbit, and beam the power down as microwaves.

The solar energy converters need not be silicon solar panels. It may be more advantageous to use the larger temperature difference between Sun and shade to run heat engine generators. Concentrated sunlight could also be relayed via mirrors and used in Stirling engines or solar trough generators, or it could be used directly for lighting, agriculture and process heat. The focused heat might also be employed in materials processing to extract various elements from Lunar surface materials.

In the early days,[clarification needed] a combination of solar panels for “day-time” operation and fuel cells for “night-time” operation could be used.[according to whom?]

Fuel cells on the Space Shuttle operated reliably for up to 17 Earth days at a time. On the Moon, they would only be needed for 354 hours (14¾ days), the length of the Lunar night. Fuel cells produce water directly as a waste product. Current fuel cell technology is more advanced than the Shuttle’s cells: PEM (Proton Exchange Membrane) cells produce considerably less heat (though their waste heat would likely be useful during the Lunar night) and are lighter, not to mention the reduced mass of the smaller heat-dissipating radiators. This makes PEMs more economical to launch from Earth than the Shuttle’s cells. PEMs have not yet been proven in space, however.

Combining fuel cells with electrolysis would provide a “perpetual” source of electricity: solar energy would provide power during the Lunar day, and fuel cells would take over at night. During the Lunar day, solar energy would also be used to electrolyze the water created in the fuel cells, although there would be small losses of gases that would have to be replaced.
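
To get a sense of the scale involved, the regenerative loop described above can be sized with a few lines of arithmetic. The sketch below is illustrative only: the 354-hour night comes from this article and the 40 kW load is borrowed from the NASA fission figure above, while the fuel-cell and electrolyser efficiencies and the hydrogen energy density are assumed round numbers, not mission data.

```python
# Back-of-the-envelope sizing of a solar + electrolysis + fuel cell system
# for the lunar night. Values marked "assumed" are illustrative round
# numbers, not figures from any mission study.

NIGHT_HOURS = 354          # length of the lunar night (from the text)
LOAD_KW = 40.0             # base load, same as NASA's fission figure above

H2_LHV_KWH_PER_KG = 33.3   # lower heating value of hydrogen (~120 MJ/kg)
FUEL_CELL_EFF = 0.55       # assumed PEM electrical efficiency
ELECTROLYSIS_EFF = 0.70    # assumed electrolyser efficiency

night_energy_kwh = LOAD_KW * NIGHT_HOURS                  # ~14,160 kWh
h2_mass_kg = night_energy_kwh / (H2_LHV_KWH_PER_KG * FUEL_CELL_EFF)
water_kg = 9 * h2_mass_kg          # water is 9x the hydrogen mass (8:1 O:H)

# The daytime array must carry the load AND refill the tanks over an
# equally long lunar day.
recharge_kw = LOAD_KW + night_energy_kwh / (ELECTROLYSIS_EFF * NIGHT_HOURS)

print(f"Energy needed per night: {night_energy_kwh:,.0f} kWh")
print(f"Hydrogen per night     : {h2_mass_kg:,.0f} kg")
print(f"Water in the loop      : {water_kg:,.0f} kg")
print(f"Daytime array output   : {recharge_kw:,.0f} kW")
```

Under these assumptions the loop needs roughly 770 kg of hydrogen (about seven tonnes of water) per night, and a daytime array of roughly 100 kW to run the base and recharge, which is why near-constant-sunlight sites and lower night-time loads are so attractive.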

Even if lunar colonies could provide themselves access to a near-continuous source of solar energy, they would still need to maintain fuel cells or an alternate energy storage system to sustain themselves during lunar eclipses and emergency situations.

Conventional rockets have been used for most Lunar explorations to date. The ESA’s SMART-1 mission, from 2003 to 2006, used conventional chemical rockets to reach orbit and Hall effect thrusters to arrive at the Moon in 13 months. NASA would have used chemical rockets on its Ares V booster and Lunar Surface Access Module, which were being developed for a planned return to the Moon around 2019 before the program was cancelled. The construction workers, location finders and other astronauts vital to building would have been taken four at a time in NASA’s Orion spacecraft.

Space elevators have also been proposed as a concept for Earth-Moon transportation.[93][94]

Lunar colonists will want the ability to transport cargo and people to and from modules and spacecraft, and to carry out scientific study of a larger area of the Lunar surface for long periods of time. Proposed concepts include a variety of vehicle designs, from small open rovers to large pressurized modules with lab equipment, and also a few flying or hopping vehicles.

Rovers could be useful if the terrain is not too steep or hilly. The only rovers to have operated on the surface of the Moon (as of 2008) are the three Apollo Lunar Roving Vehicles (LRV), developed by Boeing, and the two robotic Soviet Lunokhods. The LRV was an open rover for a crew of two, with a range of 92 km during one Lunar day. One NASA study resulted in the Mobile Lunar Laboratory concept, a manned pressurized rover for a crew of two with a range of 396 km. The Soviet Union developed different rover concepts in the Lunokhod series and the L5 for possible use on future manned missions to the Moon or Mars. These rover designs were all pressurized for longer sorties.[95]

If multiple bases were established on the Lunar surface, they could be linked together by permanent railway systems. Both conventional and magnetic levitation (maglev) systems have been proposed for the transport lines. Maglev systems are particularly attractive because there is no atmosphere on the surface to slow down the train, so the vehicles could achieve velocities comparable to those of aircraft on Earth. One significant difference with lunar trains, however, is that the cars would need to be individually sealed and possess their own life support systems.

For difficult areas, a flying vehicle may be more suitable. Bell Aerosystems proposed their design for the Lunar Flying Vehicle as part of a study for NASA. Bell also developed the Manned Flying System, a similar concept.

Experience so far indicates that launching human beings into space is much more expensive than launching cargo.

One way to get materials and products from the Moon to an interplanetary way station might be with a mass driver, a magnetically accelerated projectile launcher. Cargo would be picked up from orbit or an Earth-Moon Lagrangian point by a shuttle craft using ion propulsion, solar sails or other means and delivered to Earth orbit or other destinations such as near-Earth asteroids, Mars or other planets, perhaps using the Interplanetary Transport Network.
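
The appeal of a mass driver comes down to simple kinematics. The sketch below compares the kinetic energy per kilogram needed to escape the Moon with the corresponding figure for Earth, and estimates the accelerator track length; the escape velocities are standard textbook values, while the 1,000 g cargo acceleration is an arbitrary illustration rather than a figure from any specific proposal.

```python
# Why a lunar mass driver is attractive: compare the specific kinetic
# energy needed to reach escape velocity on the Moon and on Earth, then
# estimate the accelerator track length from v^2 = 2*a*L. The g-load is
# an arbitrary illustration for hardened cargo, not a proposal figure.

V_ESC_MOON = 2380.0    # m/s, lunar escape velocity (standard value)
V_ESC_EARTH = 11186.0  # m/s, Earth escape velocity (standard value)
G0 = 9.81              # m/s^2, standard gravity

def specific_escape_energy_mj(v_esc: float) -> float:
    """Kinetic energy per kilogram at escape velocity, in MJ/kg."""
    return 0.5 * v_esc ** 2 / 1e6

moon_mj = specific_escape_energy_mj(V_ESC_MOON)    # ~2.8 MJ/kg
earth_mj = specific_escape_energy_mj(V_ESC_EARTH)  # ~62.6 MJ/kg
print(f"Moon: {moon_mj:.1f} MJ/kg, Earth: {earth_mj:.1f} MJ/kg "
      f"(ratio ~{earth_mj / moon_mj:.0f}x)")

accel = 1000 * G0                                  # assumed 1,000 g
track_m = V_ESC_MOON ** 2 / (2 * accel)
print(f"Track length at 1,000 g: {track_m:.0f} m") # ~290 m
```

With no atmosphere to fight, a track a few hundred metres long and roughly a twentieth of the launch energy per kilogram would, under these assumptions, suffice to throw cargo off the Moon entirely.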

A Lunar space elevator could transport people, raw materials and products to and from an orbital station at Lagrangian points L1 or L2. Chemical rockets would take a payload from Earth to the L1 Lunar Lagrange location. From there a tether would slowly lower the payload to a soft landing on the lunar surface.

Other possibilities include a momentum exchange tether system.

A cis-Lunar transport system has been proposed using tethers to achieve momentum exchange.[102] This system requires zero net energy input, and could not only retrieve payloads from the Lunar surface and transport them to Earth, but could also soft-land payloads onto the Lunar surface.

For long-term sustainability, a space colony should be close to self-sufficient. Mining and refining the Moon’s materials on-site, for use both on the Moon and elsewhere in the Solar System, could provide an advantage over deliveries from Earth, as lunar materials can be launched into space at a much lower energy cost than materials from Earth. It is possible that large amounts of matter will need to be launched into space for interplanetary exploration in the 21st century, and the lower cost of providing goods from the Moon might be attractive.[80]

In the long term, the Moon will likely play an important role in supplying space-based construction facilities with raw materials.[95] Zero gravity in space allows for the processing of materials in ways impossible or difficult on Earth, such as “foaming” metals, where a gas is injected into a molten metal, and then the metal is annealed slowly. On Earth, the gas bubbles rise and burst, but in a zero gravity environment, that does not happen. The annealing process requires large amounts of energy, as a material is kept very hot for an extended period of time. (This allows the molecular structure to realign.)

Exporting material to Earth in trade from the Moon is more problematic due to the cost of transportation, which will vary greatly if the Moon is industrially developed (see “Launch costs” above). One suggested trade commodity, Helium-3 (3He) from the solar wind, is thought to have accumulated on the Moon’s surface over billions of years, but occurs only rarely on Earth. Helium might be present in the Lunar regolith in quantities of 0.01 ppm to 0.05 ppm (depending on soil). In 2006 3He had a market price of about $1500 per gram ($1.5M per kilogram), more than 120 times the value per unit weight of gold and over eight times the value of rhodium.

In the future 3He may have a role as a fuel in thermonuclear fusion reactors.[103] If the technology for converting helium-3 to energy is developed, it has the potential to produce ten times more electricity than fossil fuels. It would require about 100 tonnes of helium-3 to produce the electricity that Earth uses in a year, and there should be enough on the Moon to provide that much for 10,000 years.[104]
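
The figures in the two paragraphs above can be sanity-checked with a short calculation. The regolith concentration comes from this article and the 18.3 MeV yield of the D-3He reaction is a standard value; the plant conversion efficiency and the annual world electricity figure are assumed round numbers, so the result should be read only as an order-of-magnitude check.

```python
# Order-of-magnitude check on the helium-3 figures quoted above. The
# regolith concentration is from the text; the D + He3 -> He4 + p yield
# (18.3 MeV) is a standard value; efficiency and world electricity use
# are assumed round numbers.

PPM_LOW, PPM_HIGH = 0.01e-6, 0.05e-6        # He-3 mass fraction in regolith
print(f"Regolith mined per kg of He-3: {1 / PPM_HIGH / 1e3:,.0f} "
      f"to {1 / PPM_LOW / 1e3:,.0f} tonnes")

MEV_J = 1.602e-13                            # joules per MeV
AVOGADRO = 6.022e23
atoms_per_kg = AVOGADRO * 1000 / 3.016       # He-3 molar mass ~3.016 g/mol
thermal_j_per_kg = atoms_per_kg * 18.3 * MEV_J   # ~5.9e14 J/kg

EFFICIENCY = 0.4                             # assumed plant efficiency
WORLD_TWH = 20_000                           # assumed annual world electricity
electric_twh = 100_000 * thermal_j_per_kg * EFFICIENCY / 3.6e15
print(f"100 t of He-3 -> ~{electric_twh:,.0f} TWh electric "
      f"(vs ~{WORLD_TWH:,} TWh/yr assumed world use)")
```

Even at the richer 0.05 ppm concentration, a kilogram of helium-3 means processing on the order of 20,000 tonnes of regolith, and the energy arithmetic lands within a small factor of the “100 tonnes per year” claim, so the quoted numbers are at least mutually consistent in order of magnitude.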

To reduce the cost of transport, propellants produced from lunar water could be stored at one or several depots between the Earth and the Moon, to resupply rockets or satellites in Earth orbit.[105] The Shackleton Energy Company estimates that building this infrastructure could cost around $25 billion.[106]

Gerard K. O’Neill, noting the problem of high launch costs in the early 1970s, came up with the idea of building Solar Power Satellites in orbit with materials from the Moon.[107] This proposal was based on the contemporary estimates of future launch costs of the space shuttle.

On 30 April 1979, the final report “Lunar Resources Utilization for Space Construction”, prepared by the Convair Division of General Dynamics under NASA contract NAS9-15560, concluded that the use of Lunar resources would be cheaper than terrestrial materials for a system comprising as few as thirty Solar Power Satellites of 10 GW capacity each.[108]

In 1980, when it became obvious NASA’s launch cost estimates for the space shuttle were grossly optimistic, O’Neill et al. published another route to manufacturing using Lunar materials with much lower startup costs.[109] This 1980s SPS concept relied less on human presence in space and more on partially self-replicating systems on the Lunar surface under telepresence control of workers stationed on Earth.



Eugenics – Wikipedia

Posted: October 23, 2016 at 4:23 am

Eugenics (from Greek eugenes, “well-born”, from eu, “good, well”, and genos, “race, stock, kin”)[2][3] is a set of beliefs and practices that aims at improving the genetic quality of the human population.[4][5] It is a social philosophy advocating the improvement of human genetic traits through the promotion of higher rates of sexual reproduction for people with desired traits (positive eugenics), or reduced rates of sexual reproduction and sterilization of people with less-desired or undesired traits (negative eugenics), or both.[6] Alternatively, gene selection rather than “people selection” has recently been made possible through advances in genome editing (e.g. CRISPR).[7] The exact definition of eugenics has been a matter of debate since the term was coined. The definition of it as a “social philosophy” (that is, a philosophy with implications for social order) is not universally accepted, and was taken from Frederick Osborn’s 1937 journal article “Development of a Eugenic Philosophy”.[6]

While eugenic principles have been practiced as far back in world history as Ancient Greece, the modern history of eugenics began in the early 20th century when a popular eugenics movement emerged in the United Kingdom[8] and spread to many countries, including the United States, Canada[9] and most European countries. In this period, eugenic ideas were espoused across the political spectrum. Consequently, many countries adopted eugenic policies meant to improve the genetic stock of their countries. Such programs often included both “positive” measures, such as encouraging individuals deemed particularly “fit” to reproduce, and “negative” measures such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. People deemed unfit to reproduce often included people with mental or physical disabilities, people who scored in the low ranges of different IQ tests, criminals and deviants, and members of disfavored minority groups. The eugenics movement became negatively associated with Nazi Germany and the Holocaust when many of the defendants at the Nuremberg trials attempted to justify their human rights abuses by claiming there was little difference between the Nazi eugenics programs and the US eugenics programs.[10] In the decades following World War II, with the institution of human rights, many countries gradually abandoned eugenics policies, although some Western countries, among them the United States, continued to carry out forced sterilizations.

Since the 1980s and 1990s, when new assisted reproductive technology procedures became available, such as gestational surrogacy (available since 1985), preimplantation genetic diagnosis (available since 1989) and cytoplasmic transfer (first performed in 1996), fears about a possible future revival of eugenics, and about a widening of the gap between the rich and the poor, have emerged.

A major criticism of eugenics policies is that, regardless of whether “negative” or “positive” policies are used, they are vulnerable to abuse because the criteria of selection are determined by whichever group is in political power. Furthermore, negative eugenics in particular is considered by many to be a violation of basic human rights, which include the right to reproduction. Another criticism is that eugenic policies eventually lead to a loss of genetic diversity, and thereby to inbreeding depression caused by low genetic variation.

The idea of positive eugenics to produce better human beings has existed at least since Plato suggested selective mating to produce a guardian class.[12] The idea of negative eugenics to decrease the birth of inferior human beings has existed at least since William Goodell (1829-1894) advocated the castration and spaying of the insane.[13][14]

However, the term “eugenics” to describe a modern project of improving the human population through breeding was originally developed by Francis Galton. Galton had read his half-cousin Charles Darwin’s theory of evolution, which sought to explain the development of plant and animal species, and desired to apply it to humans. Based on his biographical studies, Galton believed that desirable human qualities were hereditary traits, though Darwin strongly disagreed with this elaboration of his theory.[15] In 1883, one year after Darwin’s death, Galton gave his research a name: eugenics.[16] Throughout its recent history, eugenics has remained controversial.

Eugenics became an academic discipline at many colleges and universities, and received funding from many sources.[18] Organisations formed to win public support and sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907, and the American Eugenics Society of 1921. Both sought support from leading clergymen, and modified their message to meet religious ideals.[19] In 1909 the Anglican clergymen William Inge and James Peile both wrote for the British Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York Patrick Joseph Hayes.[19]

Three International Eugenics Conferences presented a global venue for eugenists with meetings in 1912 in London, and in 1921 and 1932 in New York City. Eugenic policies were first implemented in the early 1900s in the United States.[20] It also took root in France, Germany, and Great Britain.[21] Later, in the 1920s and 30s, the eugenic policy of sterilizing certain mental patients was implemented in other countries, including Belgium,[22]Brazil,[23]Canada,[24]Japan and Sweden.

In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal, and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure “Nordic race” or “Aryan” genetic pool and the eventual elimination of “less fit” races.

Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward,[33] the English writer G. K. Chesterton, the German-American anthropologist Franz Boas,[34] and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward’s 1913 article “Eugenics, Euthenics, and Eudemics”, Chesterton’s 1917 book Eugenics and Other Evils, and Boas’ 1916 article “Eugenics” (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement. Sutherland identified eugenists as a major obstacle to the eradication and cure of tuberculosis in his 1917 address “Consumption: Its Cause and Cure”,[35] and criticism of eugenists and Neo-Malthusians in his 1921 book Birth Control led to a writ for libel from the eugenist Marie Stopes. Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben.[36] Other biologists such as J. B. S. Haldane and R. A. Fisher expressed skepticism that sterilization of “defectives” would lead to the disappearance of undesirable genetic traits.[37]

Among institutions, the Catholic Church was an opponent of state-enforced sterilizations.[38] Attempts by the Eugenics Education Society to persuade the British government to legalise voluntary sterilisation were opposed by Catholics and by the Labour Party.[page needed] The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii.[19] In this, Pope Pius XI explicitly condemned sterilization laws: “Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason.”[39]

As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted[40] various eugenics policies, including: genetic screening, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, culminating in genocide.

The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and, once he took power, emulated eugenic legislation for the sterilization of “defectives” that had been pioneered in the United States. Some common early 20th century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany) as “degenerate” or “unfit”, leading to their segregation or institutionalization, sterilization, euthanasia, and even their mass murder. The Nazi practice of euthanasia was carried out on hospital patients in the Aktion T4 centers such as Hartheim Castle.

By the end of World War II, many discriminatory eugenics laws were abandoned, having become associated with Nazi Germany.[43] H. G. Wells, who had called for “the sterilization of failures” in 1904,[44] stated in his 1940 book The Rights of Man: Or What are we fighting for? that among the human rights he believed should be available to all people was “a prohibition on mutilation, sterilization, torture, and any bodily punishment”.[45] After World War II, the practice of “imposing measures intended to prevent births within [a population] group” fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide.[46] The Charter of Fundamental Rights of the European Union also proclaims “the prohibition of eugenic practices, in particular those aiming at selection of persons”.[47] In spite of the decline in discriminatory eugenics laws, some government-mandated sterilization continued into the 21st century. During the ten years Alberto Fujimori was president of Peru (1990-2000), some 2,000 persons were allegedly involuntarily sterilized.[48] China maintained its coercive one-child policy until 2015, as well as a suite of other eugenics-based legislation to reduce population size and manage the fertility rates of different populations.[49][50][51] In 2007 the United Nations reported coercive sterilisations and hysterectomies in Uzbekistan.[52] During the years 2005-06 to 2012-13, nearly one-third of the 144 California prison inmates who were sterilized did not give lawful consent to the operation.[53]

Developments in genetic, genomic, and reproductive technologies at the end of the 20th century are raising numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, claim that modern genetics is a back door to eugenics.[54] This view is shared by White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a “new era of eugenics”, and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, “where children are increasingly regarded as made-to-order consumer products”.[55] In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask just what the ethical differences were between breeding for ability versus training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction.[56]

In October 2015, the United Nations’ International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th century eugenics movements; however, it is still problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want or cannot afford the enhancements.[57]

Transhumanism is often associated with eugenics, although most transhumanists who hold similar views nonetheless distance themselves from the term “eugenics” (preferring “germinal choice” or “reprogenetics”)[58] to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements.

The term eugenics and its modern field of study were first formulated by Francis Galton in 1883,[59] drawing on the recent work of his half-cousin Charles Darwin.[60][61] Galton published his observations and conclusions in his book Inquiries into Human Faculty and Its Development.

The origins of the concept began with certain interpretations of Mendelian inheritance, and the theories of August Weismann. The word eugenics is derived from the Greek word eu (“good” or “well”) and the suffix -genes (“born”); it was coined by Galton in 1883 to replace the word “stirpiculture”, which he had used previously but which had come to be mocked due to its perceived sexual overtones.[63] Galton defined eugenics as “the study of all agencies under human control which can improve or impair the racial quality of future generations”.[64] Galton did not understand the mechanism of inheritance.[65]

Historically, the term has referred to everything from prenatal care for mothers to forced sterilization and euthanasia.[citation needed] To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, J. B. S. Haldane wrote that “the motor bus, by breaking up inbred village communities, was a powerful eugenic agent.”[66] Debate as to what exactly counts as eugenics has continued to the present day.[67]

Edwin Black, journalist and author of War Against the Weak, claims eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is often deemed a cultural choice rather than a matter that can be determined through objective scientific inquiry.[68] The most disputed aspect of eugenics has been the definition of “improvement” of the human gene pool, such as what is a beneficial characteristic and what is a defect. This aspect of eugenics has historically been tainted with scientific racism.

Early eugenists were mostly concerned with perceived intelligence factors that often correlated strongly with social class. Some of these early eugenists included Karl Pearson and Walter Weldon, who worked on this at University College London.[15]

Eugenics also had a place in medicine. In his lecture “Darwinism, Medical Progress and Eugenics”, Karl Pearson said that everything concerning eugenics fell into the field of medicine; he effectively treated the two words as equivalent. He was supported in part by the fact that Francis Galton, the father of eugenics, also had medical training.[69]

Eugenic policies have been conceptually divided into two categories. Positive eugenics is aimed at encouraging reproduction among the genetically advantaged; for example, the reproduction of the intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning.[70] The movie Gattaca provides a fictional example of positive eugenics done voluntarily. Negative eugenics is aimed at eliminating, through sterilization or segregation, those deemed physically, mentally, or morally “undesirable”. This includes abortions, sterilization, and other methods of family planning.[70] Both positive and negative eugenics can be coercive; abortion for fit women, for example, was illegal in Nazi Germany.[71]

Jon Entine claims that eugenics simply means “good genes” and using it as synonym for genocide is an “all-too-common distortion of the social history of genetics policy in the United States.” According to Entine, eugenics developed out of the Progressive Era and not “Hitler’s twisted Final Solution”.[72]

According to Richard Lynn, eugenics may be divided into two main categories based on the ways in which the methods of eugenics can be applied.[73]

The first major challenge to conventional eugenics based upon genetic inheritance was made in 1915 by Thomas Hunt Morgan, who demonstrated that genetic change could occur outside of inheritance with his discovery of a fruit fly (Drosophila melanogaster) with white eyes hatched from a red-eyed family. Morgan claimed that this showed that major genetic changes occurred outside of inheritance and that the concept of eugenics based upon genetic inheritance was not completely scientifically accurate. Additionally, Morgan criticized the view that subjective traits, such as intelligence and criminality, were caused by heredity, because he believed that the definitions of these traits varied and that accurate work in genetics could only be done when the traits being studied were accurately defined.[109] In spite of Morgan’s public rejection of eugenics, much of his genetic research was absorbed by eugenics.[110][111]

The heterozygote test is used for the early detection of recessive hereditary diseases, allowing couples to determine if they are at risk of passing genetic defects to a future child.[112] The goal of the test is to estimate the likelihood of passing the hereditary disease to future descendants.[112]
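
The arithmetic behind such carrier screening is straightforward Mendelian and Hardy-Weinberg algebra. The sketch below shows how the numbers are derived; the disease-allele frequency used is an illustrative value, not data for any particular condition.

```python
# Mendelian arithmetic behind carrier ("heterozygote") screening for an
# autosomal recessive disease. The allele frequency is an illustrative
# value, not data for any particular condition.

q = 0.02                      # assumed frequency of the recessive allele
p = 1 - q

# Hardy-Weinberg genotype frequencies in the general population:
carrier_freq = 2 * p * q      # unaffected carriers (Aa)
affected_freq = q ** 2        # affected homozygotes (aa)

# If screening shows BOTH parents are carriers, each child has a 1-in-4
# chance of being affected and a 1-in-2 chance of being a carrier:
risk_if_both_carriers = 0.25

# Prior risk for a random couple with no testing at all:
prior_risk = carrier_freq ** 2 * risk_if_both_carriers

print(f"Carriers in population        : {carrier_freq:.2%}")
print(f"Affected births (no screening): {affected_freq:.4%}")
print(f"Two known carriers -> affected: {risk_if_both_carriers:.0%}")
print(f"Random couple prior risk      : {prior_risk:.4%}")
```

The point the test exploits is the gap between the last two numbers: a random couple’s risk is a fraction of a percent, but once both partners are known carriers the risk per child jumps to 25 percent.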

Recessive traits can be severely reduced, but never eliminated unless the complete genetic makeup of all members of the pool were known. As only very few undesirable traits, such as Huntington’s disease, are dominant, it could be argued[by whom?] from certain perspectives that the practicality of “eliminating” traits is quite low.[citation needed]
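
Standard population genetics makes the first claim precise: under complete selection against affected homozygotes, the recessive allele survives in unaffected carriers, and its frequency follows the recurrence q' = q / (1 + q), so that after t generations q_t = q0 / (1 + t·q0). Halving a rare allele therefore takes about 1/q0 generations, as the short sketch below illustrates (the starting frequencies are arbitrary examples).

```python
# Why recessive traits "can be severely reduced, but never eliminated":
# if no affected (aa) individual reproduces, the allele persists in Aa
# carriers and follows the textbook recurrence q' = q / (1 + q).

def generations_to_halve(q0: float) -> int:
    """Generations of total selection needed to halve allele frequency q0."""
    q, gens = q0, 0
    while q > q0 / 2:
        q = q / (1 + q)   # one generation of complete selection against aa
        gens += 1
    return gens

for q0 in (0.10, 0.02, 0.005):
    # Closed form: q_t = q0 / (1 + t*q0), so halving takes ~1/q0 generations.
    print(f"q0 = {q0:>5}: halved after {generations_to_halve(q0)} generations")
```

For an allele rare enough that only about one person in a hundred carries it, merely halving its frequency takes on the order of two hundred generations of total selection, which is the practical force of the objection.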

There are examples of eugenic acts that managed to lower the prevalence of recessive diseases, although they did not influence the prevalence of heterozygote carriers of those diseases. The elevated prevalence of certain genetically transmitted diseases among the Ashkenazi Jewish population (Tay-Sachs, cystic fibrosis, Canavan’s disease, and Gaucher’s disease) has been decreased in current populations by the application of genetic screening.[113]

Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect.[114] Andrzej Pękalski, from the University of Wrocław, argues that eugenics can cause harmful loss of genetic diversity if a eugenics program selects for a pleiotropic gene that is also associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence since the two go together.[115]
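
A toy simulation, written in the spirit of this argument but not reproducing Pękalski’s actual model, makes the mechanism visible: one hypothetical allele raises both myopia risk and a desired trait, and a rule barring myopic individuals from reproducing drags the desired trait down with the allele it targets. Population size, penetrance and the starting frequency are all arbitrary choices.

```python
import random

# Toy illustration (NOT Pekalski's actual model): a single pleiotropic
# allele raises BOTH myopia risk and a desired trait. Barring everyone
# who develops myopia from reproducing strips out the desired trait too.

random.seed(42)
POP, GENS = 20_000, 8
PENETRANCE = 0.6        # assumed chance that a carrier develops myopia
freq = 0.30             # assumed starting frequency of the allele

for gen in range(GENS):
    allele_count, parents = 0, 0
    for _ in range(POP):
        copies = (random.random() < freq) + (random.random() < freq)
        myopic = any(random.random() < PENETRANCE for _ in range(copies))
        if myopic:
            continue                       # excluded from reproduction
        parents += 1
        allele_count += copies
    freq = allele_count / (2 * parents)    # allele frequency among parents
    print(f"generation {gen}: allele frequency {freq:.3f}")
```

Because every copy of the allele that confers the desired trait is also a myopia risk, the printout shows the allele, and with it the linked benefit, collapsing within a handful of generations of the ban.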

Eugenic policies could also lead to loss of genetic diversity, in which case a culturally accepted “improvement” of the gene pool could very likely (as evidenced in numerous instances in isolated island populations, e.g., the dodo, Raphus cucullatus, of Mauritius) result in extinction due to increased vulnerability to disease, reduced ability to adapt to environmental change, and other factors both known and unknown. A long-term, species-wide eugenics plan might lead to a similar scenario because the elimination of traits deemed undesirable would, by definition, reduce genetic diversity.[116]

Edward M. Miller claims that, in any one generation, any realistic program should make only minor changes in a fraction of the gene pool, giving plenty of time to reverse direction if unintended consequences emerge, reducing the likelihood of the elimination of desirable genes.[117] Miller also argues that any appreciable reduction in diversity is so far in the future that little concern is needed for now.[117]

While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, there is at this point no agreed objective means of determining which traits might be ultimately desirable or undesirable. Some diseases, such as sickle-cell disease and cystic fibrosis, confer resistance to malaria and to cholera, respectively, when a single copy of the recessive allele is contained within the genotype of the individual. Reducing the incidence of sickle-cell disease genes in Africa, where malaria is a common and deadly disease, could therefore have extremely negative net consequences.
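
The sickle-cell case is the textbook example of heterozygote advantage, where selection itself maintains the “disease” allele. If non-carriers lose fitness t to malaria and homozygotes lose fitness s to sickle-cell disease, the allele settles at the balanced equilibrium q* = t / (s + t). The fitness penalties in the sketch below are illustrative, not measured values.

```python
# Heterozygote advantage, the textbook reading of sickle-cell persistence:
# with fitnesses AA = 1 - t (malaria-susceptible), AS = 1 (protected
# carrier) and SS = 1 - s (sickle-cell disease), balancing selection holds
# the S allele at q* = t / (s + t). Penalties below are illustrative only.

def equilibrium_freq(s: float, t: float) -> float:
    """Equilibrium frequency of the recessive allele under overdominance."""
    return t / (s + t)

s = 0.80   # assumed fitness cost of sickle-cell disease (SS)
t = 0.15   # assumed fitness cost of malaria to non-carriers (AA)

q = equilibrium_freq(s, t)
p = 1 - q
print(f"Equilibrium sickle-allele frequency   : {q:.3f}")
print(f"Protected carriers (2pq) in population: {2 * p * q:.1%}")
# Removing the allele where malaria is endemic would expose all of these
# carriers, which is why reducing the incidence of sickle-cell genes there
# could have negative net consequences.
```

Under these illustrative penalties roughly a quarter of the population are protected carriers, which quantifies why eliminating the allele in a malaria zone would do net harm.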

However, some genetic diseases, such as haemochromatosis, can increase susceptibility to illness, cause physical deformities and lead to other dysfunctions, which provides some incentive for people to re-consider some elements of eugenics.

Autistic people have advocated a shift in perception of autism spectrum disorders as complex syndromes rather than diseases that must be cured. Proponents of this view reject the notion that there is an “ideal” brain configuration and that any deviation from the norm is pathological; they promote tolerance for what they call neurodiversity.[118] Baron-Cohen argues that the genes for Asperger’s combination of abilities have operated throughout recent human evolution and have made remarkable contributions to human history.[119] The possible reduction of autism rates through selection against the genetic predisposition to autism is a significant political issue in the autism rights movement, which claims that autism is a part of neurodiversity.

Many culturally Deaf people oppose attempts to cure deafness, believing instead that deafness should be considered a defining cultural characteristic, not a disease.[120][121][122] Some people have started advocating the idea that deafness brings about certain advantages, often termed “Deaf Gain.”[123][124]

The societal and political consequences of eugenics call for a place in the discussion on the ethics behind the eugenics movement.[125] Many of the ethical concerns regarding eugenics arise from its controversial past, prompting a discussion on what place, if any, it should have in the future. Advances in science have changed eugenics. In the past, eugenics had more to do with sterilization and enforced reproduction laws.[126] Now, in the age of a progressively mapped genome, embryos can be tested for susceptibility to disease, gender, and genetic defects, and alternative methods of reproduction such as in vitro fertilization are becoming more common.[127] Eugenics is therefore no longer ex post facto regulation of the living but instead preemptive action on the unborn.[128]

With this change, however, there are ethical concerns which lack adequate attention, and which must be addressed before eugenic policies can be properly implemented in the future. Sterilized individuals, for example, could volunteer for the procedure, albeit under incentive or duress, or at least voice their opinion. The unborn fetus on which these new eugenic procedures are performed cannot speak out, lacking any voice to consent or to express an opinion.[129] Philosophers disagree about the proper framework for reasoning about such actions, which change the very identity and existence of future persons.[130]

A common criticism of eugenics is that “it inevitably leads to measures that are unethical”.[131] Some fear future “eugenics wars” as the worst-case scenario: the return of coercive state-sponsored genetic discrimination and human rights violations such as compulsory sterilization of persons with genetic defects, the killing of the institutionalized and, specifically, segregation and genocide of races perceived as inferior.[132] Health law professor George Annas and technology law professor Lori Andrews are prominent advocates of the position that the use of these technologies could lead to such human-posthuman caste warfare.[133][134]

In his 2003 book Enough: Staying Human in an Engineered Age, environmental ethicist Bill McKibben argued at length against germinal choice technology and other advanced biotechnological strategies for human enhancement. He claims that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to “improve” themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome technologically. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using as examples Ming China, Tokugawa Japan and the contemporary Amish.[135]

Some, such as Nathaniel C. Comfort from Johns Hopkins University, claim that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making from the state to the patient and their family.[136] Comfort suggests that “the eugenic impulse drives us to eliminate disease, live longer and healthier, with greater intelligence, and a better adjustment to the conditions of society; and the health benefits, the intellectual thrill and the profits of genetic bio-medicine are too great for us to do otherwise.”[137] Others, such as bioethicist Stephen Wilkinson of Keele University and Honorary Research Fellow Eve Garrard at the University of Manchester, claim that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral. In a co-authored publication by Keele University, they stated that “[e]ugenics doesn’t seem always to be immoral, and so the fact that PGD, and other forms of selective reproduction, might sometimes technically be eugenic, isn’t sufficient to show that they’re wrong.”[138]

In their 2000 book From Chance to Choice: Genetics and Justice, bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals’ reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements.[139]

The original position, a hypothetical scenario developed by the American philosopher John Rawls, has been used as an argument for negative eugenics.[140][141]


The Bahamas – Wikipedia

Posted: October 20, 2016 at 11:38 pm

Coordinates: 24°15′N 76°00′W / 24.250°N 76.000°W / 24.250; -76.000

The Bahamas, officially the Commonwealth of the Bahamas, is an archipelagic state of the Lucayan Archipelago consisting of more than 700 islands, cays, and islets in the Atlantic Ocean; north of Cuba and Hispaniola (Haiti and the Dominican Republic); northwest of the Turks and Caicos Islands; southeast of the US state of Florida and east of the Florida Keys. Its capital is Nassau on the island of New Providence. The designation of “The Bahamas” can refer to either the country or the larger island chain that it shares with the Turks and Caicos Islands. As stated in the mandate/manifesto of the Royal Bahamas Defence Force, the Bahamas territory encompasses 470,000 km2 (180,000 sq mi) of ocean space.

The Bahamas were the site of Columbus’ first landfall in the New World in 1492. At that time, the islands were inhabited by the Lucayan, a branch of the Arawakan-speaking Taino people. Although the Spanish never colonised the Bahamas, they shipped the native Lucayans to slavery in Hispaniola. The islands were mostly deserted from 1513 until 1648, when English colonists from Bermuda settled on the island of Eleuthera.

The Bahamas became a British Crown colony in 1718, when the British clamped down on piracy. After the American War of Independence, the Crown resettled thousands of American Loyalists in the Bahamas; they brought their slaves with them and established plantations on land grants. Africans constituted the majority of the population from this period. The Bahamas became a haven for freed African slaves: the Royal Navy resettled Africans here liberated from illegal slave ships; American slaves and Seminoles escaped here from Florida; and the government freed American slaves carried on United States domestic ships that had reached the Bahamas due to weather. Slavery in the Bahamas was abolished in 1834. Today the descendants of slaves and free Africans make up nearly 90% of the population; issues related to the slavery years are part of society.

The Bahamas became an independent Commonwealth realm in 1973, retaining Queen Elizabeth II as its monarch. In terms of gross domestic product per capita, the Bahamas is one of the richest countries in the Americas (following the United States and Canada), with an economy based on tourism and finance.[9]

The name Bahamas is derived either from the Taino ba ha ma (“big upper middle land”), a term for the region used by the indigenous Amerindians,[10] or, as other theories suggest, from the Spanish baja mar (“shallow water or sea” or “low tide”), reflecting the shallow waters of the area. Alternatively, it may originate from Guanahani, a local name of unclear meaning.[11] In English, the Bahamas is one of only two countries whose self-standing short name begins with the word “the”, along with The Gambia.[12]

Taino people moved into the uninhabited southern Bahamas from Hispaniola and Cuba around the 11th century, having migrated there from South America. They came to be known as the Lucayan people. An estimated 30,000 Lucayan inhabited the Bahamas at the time of Christopher Columbus’ arrival in 1492.

Columbus’s first landfall in the New World was on an island he named San Salvador (known to the Lucayan as Guanahani). Some researchers believe this site to be present-day San Salvador Island (formerly known as Watling’s Island), situated in the southeastern Bahamas. An alternative theory holds that Columbus landed to the southeast on Samana Cay, according to calculations made in 1986 by National Geographic writer and editor Joseph Judge, based on Columbus’s log. Evidence in support of this remains inconclusive. On the landfall island, Columbus made first contact with the Lucayan and exchanged goods with them.

The Spanish deported much of the Lucayan population to Hispaniola for use as forced labour. The slaves suffered from harsh conditions and most died of diseases to which they had no immunity; half of the Taino died from smallpox alone.[14] The population of the Bahamas was severely diminished.[15]

In 1648, the Eleutherian Adventurers, led by William Sayle, migrated from Bermuda. These English Puritans established the first permanent European settlement on an island which they named Eleuthera; the name derives from the Greek word for freedom. They later settled New Providence, naming it Sayle’s Island after one of their leaders. To survive, the settlers salvaged goods from wrecks.

In 1670 King Charles II granted the islands to the Lords Proprietors of the Carolinas in North America. They rented the islands from the king with rights of trading, tax, appointing governors, and administering the country.[16] In 1684 Spanish corsair Juan de Alcon raided the capital, Charles Town (later renamed Nassau). In 1703 a joint Franco-Spanish expedition briefly occupied the Bahamian capital during the War of the Spanish Succession.

During proprietary rule, the Bahamas became a haven for pirates, including the infamous Blackbeard (c. 1680-1718). To put an end to the ‘Pirates’ republic’ and restore orderly government, Britain made the Bahamas a crown colony in 1718 under the royal governorship of Woodes Rogers. After a difficult struggle, he succeeded in suppressing piracy.[17] In 1720, Rogers led local militia to drive off a Spanish attack.

During the American War of Independence in the late 18th century, the islands became a target for American naval forces under the command of Commodore Esek Hopkins. US Marines occupied the capital of Nassau for a fortnight.

In 1782, following the British defeat at Yorktown, a Spanish fleet appeared off the coast of Nassau. The city surrendered without a fight. Spain returned possession of the Bahamas to Britain the following year, under the terms of the Treaty of Paris. Before the news was received, however, the islands were recaptured by a small British force led by Andrew Deveaux.

After American independence, the British resettled some 7,300 Loyalists with their slaves in the Bahamas, and granted land to the planters to help compensate for losses on the continent. These Loyalists, who included Deveaux, established plantations on several islands and became a political force in the capital. European Americans were outnumbered by the African-American slaves they brought with them, and ethnic Europeans remained a minority in the territory.

In 1807, the British abolished the slave trade, followed by the United States the next year. During the following decades, the Royal Navy intercepted the trade and resettled thousands of Africans liberated from slave ships in the Bahamas.

In the 1820s during the period of the Seminole Wars in Florida, hundreds of American slaves and African Seminoles escaped from Cape Florida to the Bahamas. They settled mostly on northwest Andros Island, where they developed the village of Red Bays. From eyewitness accounts, 300 escaped in a mass flight in 1823, aided by Bahamians in 27 sloops, with others using canoes for the journey. This was commemorated in 2004 by a large sign at Bill Baggs Cape Florida State Park.[18][19] Some of their descendants in Red Bays continue African Seminole traditions in basket making and grave marking.[20]

The United States’ National Park Service, which administers the National Underground Railroad Network to Freedom, is working with the African Bahamian Museum and Research Center (ABAC) in Nassau to identify Red Bays as a site related to American slaves’ search for freedom. The museum has researched and documented the African Seminoles’ escape from southern Florida. It plans to develop interpretive programs at historical sites in Red Bays associated with the period of their settlement in the Bahamas.[21]

In 1818,[22] the Home Office in London had ruled that “any slave brought to the Bahamas from outside the British West Indies would be manumitted.” This led to a total of nearly 300 slaves owned by US nationals being freed from 1830 to 1835.[23] The American slave ships Comet and Encomium, used in the United States domestic coastwise slave trade, were wrecked off Abaco Island in December 1830 and February 1834, respectively. When wreckers took the masters, passengers and slaves into Nassau, customs officers seized the slaves and British colonial officials freed them, over the protests of the Americans. There were 165 slaves on the Comet and 48 on the Encomium. Britain finally paid an indemnity to the United States in those two cases in 1855, under the Treaty of Claims of 1853, which settled several compensation cases between the two nations.[24][25]

Slavery was abolished in the British Empire on 1 August 1834. After that, British colonial officials freed 78 American slaves from the Enterprise, which put into Bermuda in 1835, and 38 from the Hermosa, which wrecked off Abaco Island in 1840.[26] The most notable case was that of the Creole in 1841: as a result of a slave revolt on board, the leaders ordered the American brig to Nassau. It was carrying 135 slaves from Virginia destined for sale in New Orleans. The Bahamian officials freed the 128 slaves who chose to stay in the islands. The Creole case has been described as the “most successful slave revolt in U.S. history”.[27]

These incidents, in which a total of 447 slaves belonging to US nationals were freed from 1830 to 1842, increased tension between the United States and Great Britain. They had been co-operating in patrols to suppress the international slave trade. But, worried about the stability of its large domestic slave trade and its value, the United States argued that Britain should not treat its domestic ships that came to its colonial ports under duress, as part of the international trade. The United States worried that the success of the Creole slaves in gaining freedom would encourage more slave revolts on merchant ships.

In August 1940, after abdicating the British throne, the Duke of Windsor was installed as Governor of the Bahamas, arriving with his wife, the Duchess. Although disheartened at the condition of Government House, they “tried to make the best of a bad situation”.[28] He did not enjoy the position, and referred to the islands as “a third-class British colony”.[29]

He opened the small local parliament on 29 October 1940. The couple visited the “Out Islands” that November, on Axel Wenner-Gren’s yacht, which caused controversy;[30] the British Foreign Office strenuously objected because they had been advised (mistakenly) by United States intelligence that Wenner-Gren was a close friend of the Luftwaffe commander Hermann Göring of Nazi Germany.[30][31]

The Duke was praised at the time for his efforts to combat poverty on the islands. A 1991 biography by Philip Ziegler, however, described him as contemptuous of the Bahamians and other non-white peoples of the Empire. He was praised for his resolution of civil unrest over low wages in Nassau in June 1942, when there was a “full-scale riot.”[32] Ziegler said that the Duke blamed the trouble on “mischief makers – communists” and “men of Central European Jewish descent, who had secured jobs as a pretext for obtaining a deferment of draft”.[33]

The Duke resigned the post on 16 March 1945.[34][35]

Modern political development began after the Second World War. The first political parties were formed in the 1950s. The British Parliament authorised the islands as internally self-governing in 1964, with Sir Roland Symonette, of the United Bahamian Party, as the first Premier.

A new constitution granting the Bahamas internal autonomy went into effect on 7 January 1964.[36] In 1967, Lynden Pindling of the Progressive Liberal Party became the first black Premier of the majority-black colony; in 1968 the title of the position was changed to Prime Minister. In 1968, Pindling announced that the Bahamas would seek full independence.[37] A new constitution giving the Bahamas increased control over its own affairs was adopted in 1968.[38]

The British House of Lords voted to give the Bahamas its independence on 22 June 1973.[39] Prince Charles delivered the official documents to Prime Minister Lynden Pindling, officially declaring the Bahamas a fully independent nation on 10 July 1973.[40] It joined the Commonwealth of Nations on the same day.[41] Sir Milo Butler was appointed the first Governor-General of the Bahamas (the official representative of Queen Elizabeth II) shortly after independence. The Bahamas joined the International Monetary Fund and the World Bank on 22 August 1973,[42] and it joined the United Nations on 18 September 1973.[43]

Based on the twin pillars of tourism and offshore finance, the Bahamian economy has prospered since the 1950s. However, significant challenges remain in areas such as education, health care, housing, international narcotics trafficking and illegal immigration from Haiti.

The College of the Bahamas is the national higher education/tertiary system. Offering baccalaureate, master’s and associate degrees, COB has three campuses, and teaching and research centres throughout the Bahamas. COB is on track to become the national “University of The Bahamas” (UOB) in 2015.

The country lies between latitudes 20° and 28°N, and longitudes 72° and 80°W.

In 1864, the Governor of the Bahamas reported that there were 29 islands, 661 cays, and 2,387 rocks in the colony.[44]

The closest island to the United States is Bimini, which is also known as the gateway to the Bahamas. The island of Abaco is to the east of Grand Bahama. The southeasternmost island is Inagua. The largest island is Andros Island. Other inhabited islands include Eleuthera, Cat Island, Long Island, San Salvador Island, Acklins, Crooked Island, Exuma, Berry Islands and Mayaguana. Nassau, capital city of the Bahamas, lies on the island of New Providence.

All the islands are low and flat, with ridges that usually rise no more than 15 to 20 m (49 to 66 ft). The highest point in the country is Mount Alvernia (formerly Como Hill) on Cat Island, with an elevation of 63 metres (207 ft).

To the southeast, the Turks and Caicos Islands, and three more extensive submarine features called Mouchoir Bank, Silver Bank and Navidad Bank, are geographically a continuation of the Bahamas.

The climate of the Bahamas is tropical savannah, or Aw according to the Köppen climate classification. As such, there has never been a frost or freeze reported in the Bahamas, although every few decades low temperatures can fall into the 3-5 °C (37-41 °F) range for a few hours when a severe cold outbreak comes off the North American landmass. Otherwise, the low latitude, warm tropical Gulf Stream, and low elevation give the Bahamas a warm and winterless climate. There is only an 8 °C difference between the warmest month and coolest month in most of the Bahama islands. As with most tropical climates, seasonal rainfall follows the sun, and summer is the wettest season. The Bahamas are often sunny and dry for long periods of time, and average more than 3,000 hours, or 340 days,[45] of sunlight annually.

Tropical storms and hurricanes affect the Bahamas. In 1992, Hurricane Andrew passed over the northern portions of the islands, and Hurricane Floyd passed near the eastern portions of the islands in 1999.

The Bahamas is a parliamentary constitutional monarchy headed by Queen Elizabeth II in her role as Queen of the Bahamas. Political and legal traditions closely follow those of the United Kingdom and the Westminster system. The Bahamas is a member of the Commonwealth of Nations as a Commonwealth realm, retaining the Queen as head of state (represented by a Governor-General).

Legislative power is vested in a bicameral parliament, which consists of a 38-member House of Assembly (the lower house), with members elected from single-member districts, and a 16-member Senate, with members appointed by the Governor-General, including nine on the advice of the Prime Minister, four on the advice of the Leader of Her Majesty’s Loyal Opposition, and three on the advice of the Prime Minister after consultation with the Leader of the Opposition. The House of Assembly carries out all major legislative functions. As under the Westminster system, the Prime Minister may dissolve Parliament and call a general election at any time within a five-year term.[48]

The Prime Minister is the head of government and is the leader of the party with the most seats in the House of Assembly. Executive power is exercised by the Cabinet, selected by the Prime Minister and drawn from his supporters in the House of Assembly. The current Governor-General is Dame Marguerite Pindling, and the current Prime Minister is The Rt. Hon. Perry Christie, P.C., M.P.

Constitutional safeguards include freedom of speech, press, worship, movement and association. The judiciary is independent of the executive and the legislature. Jurisprudence is based on English law.

The Bahamas has a two-party system dominated by the centre-left Progressive Liberal Party and the centre-right Free National Movement. A handful of splinter parties have been unable to win election to parliament. These parties have included the Bahamas Democratic Movement, the Coalition for Democratic Reform, Bahamian Nationalist Party and the Democratic National Alliance.

The Bahamas has strong bilateral relationships with the United States and the United Kingdom, represented by an ambassador in Washington and a High Commissioner in London. The Bahamas also associates closely with other nations of the Caribbean Community (CARICOM).

Its military is the Royal Bahamas Defence Force (RBDF), the navy of the Bahamas, which includes a land unit called the Commando Squadron (Regiment) and an Air Wing (Air Force). Under the Defence Act, the RBDF has been mandated, in the name of the Queen, to defend the Bahamas, protect its territorial integrity, patrol its waters, provide assistance and relief in times of disaster, maintain order in conjunction with the law enforcement agencies of the Bahamas, and carry out any such duties as determined by the National Security Council. The Defence Force is also a member of CARICOM’s Regional Security Task Force.

The RBDF came into existence on 31 March 1980. Its duties include defending the Bahamas, stopping drug smuggling, illegal immigration and poaching, and providing assistance to mariners. The Defence Force has a fleet of 26 coastal and inshore patrol craft along with 3 aircraft and over 1,100 personnel, including 65 officers and 74 women.

The districts of the Bahamas provide a system of local government everywhere except New Providence (home to 70% of the national population), whose affairs are handled directly by the central government. In 1996, the Bahamian Parliament passed the “Local Government Act” to facilitate the establishment of Family Island Administrators, Local Government Districts, Local District Councillors and Local Town Committees for the various island communities. The overall goal of this act is to allow the various elected leaders to govern and oversee the affairs of their respective districts without the interference of the central government. In total there are 32 districts, with elections held every five years; 110 Councillors and 281 Town Committee members are elected to represent the various districts.[49]

Each Councillor or Town Committee member is responsible for the proper use of public funds for the maintenance and development of their constituency.

Traffic drives on the left throughout the Commonwealth of the Bahamas.


The colours embodied in the design of the Bahamian flag symbolise the image and aspirations of the people of the Bahamas; the design reflects aspects of the natural environment (sun, sand and sea) and the economic and social development. The flag is a black equilateral triangle against the mast, superimposed on a horizontal background made up of two colours on three equal stripes of aquamarine, gold and aquamarine.

The symbolism of the flag is as follows: black, a strong colour, represents the vigour and force of a united people; the triangle pointing towards the body of the flag represents the enterprise and determination of the Bahamian people to develop and possess the rich resources of sun and sea, symbolised by gold and aquamarine respectively. In reference to the representation of the people with the colour black, some white Bahamians have joked that they are represented in the thread which “holds it all together.”[50]

There are rules on how to use the flag for certain events. For a funeral, the national flag should be draped over the coffin, covering the top completely but not covering the bearers. The black triangle on the flag should be placed over the head of the deceased. The flag remains on the coffin throughout the service and is removed just before the coffin is lowered into the grave. Upon removal, the flag should be folded with dignity and put away. The black triangle should never be displayed pointing upwards or to the viewer’s right; this would be a sign of distress.[51]

The coat of arms of the Bahamas contains a shield with the national symbols as its focal point. The shield is supported by a marlin and a flamingo, which are the national animals of the Bahamas. The flamingo is located on the land, and the marlin on the sea, indicating the geography of the islands.

On top of the shield is a conch shell, which represents the varied marine life of the island chain. The conch shell rests on a helmet. Below this is the actual shield, the main symbol of which is a ship representing the Santa María of Christopher Columbus, shown sailing beneath the sun. Along the bottom, below the shield, appears a banner upon which is the national motto:[52]

“Forward, Upward, Onward Together.”

The yellow elder was chosen as the national flower of the Bahamas because it is native to the Bahama islands, and it blooms throughout the year.

Selection of the yellow elder over many other flowers was made through the combined popular vote of members of all four of New Providence’s garden clubs of the 1970s: the Nassau Garden Club, the Carver Garden Club, the International Garden Club and the Y.W.C.A. Garden Club.

They reasoned that other flowers grown there, such as the bougainvillea, hibiscus and poinciana, had already been chosen as the national flowers of other countries. The yellow elder, on the other hand, was unclaimed by other countries (although it is now also the national flower of the United States Virgin Islands), and it is native to the Family Islands.[53]

In terms of GDP per capita, the Bahamas is one of the richest countries in the Americas.[54]

The Bahamas relies on tourism to generate most of its economic activity. Tourism as an industry not only accounts for over 60% of the Bahamian GDP, but provides jobs for more than half the country’s workforce.[55] The Bahamas attracted 5.8 million visitors in 2012, more than 70% of which were cruise visitors.

After tourism, the next most important economic sector is banking and international financial services, accounting for some 15% of GDP.

The government has adopted incentives to encourage foreign financial business, and further banking and finance reforms are in progress. The government plans to merge the regulatory functions of key financial institutions, including the Central Bank of the Bahamas (CBB) and the Securities and Exchange Commission.[citation needed] The Central Bank administers restrictions and controls on capital and money market instruments. The Bahamas International Securities Exchange consists of 19 listed public companies. Reflecting the relative soundness of the banking system (mostly populated by Canadian banks), the impact of the global financial crisis on the financial sector has been limited.[citation needed]

The economy has a very competitive tax regime. The government derives its revenue from import tariffs, VAT, licence fees, property and stamp taxes, but there is no income tax, corporate tax, capital gains tax, or wealth tax. Payroll taxes fund social insurance benefits and amount to 3.9% paid by the employee and 5.9% paid by the employer.[56] In 2010, overall tax revenue as a percentage of GDP was 17.2%.[5]
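As a quick illustration of how these payroll contributions combine in practice, here is a minimal sketch in Python; the salary figure is hypothetical, and only the 3.9% and 5.9% rates come from the text above.

```python
# Hypothetical worked example of the payroll contribution rates quoted above.
salary = 30_000                              # annual salary (illustrative figure)
employee_rate, employer_rate = 0.039, 0.059  # 3.9% employee, 5.9% employer

employee_share = salary * employee_rate      # 1170.0
employer_share = salary * employer_rate      # 1770.0
total = employee_share + employer_share      # 2940.0, i.e. 9.8% of salary
print(employee_share, employer_share, total)
```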

Agriculture is the third largest sector of the Bahamian economy, representing 5–7% of total GDP. An estimated 80% of the Bahamian food supply is imported. Major crops include onions, okra, tomatoes, oranges, grapefruit, cucumbers, sugar cane, lemons, limes and sweet potatoes.

The Bahamas has an estimated population of 392,718, of which 25.9% are under 14, 67.2% 15 to 64 and 6.9% over 65. It has a population growth rate of 0.925% (2010), with a birth rate of 17.81/1,000 population, death rate of 9.35/1,000, and net migration rate of 2.13 migrant(s)/1,000 population.[57] The infant mortality rate is 23.21 deaths/1,000 live births. Residents have a life expectancy at birth of 69.87 years: 73.49 years for females, 66.32 years for males. The total fertility rate is 2.0 children born/woman (2010).[5]

The most populous islands are New Providence, where Nassau, the capital and largest city, is located;[58] and Grand Bahama, home to the second largest city of Freeport.[59]

According to the 99% response rate obtained from the race question on the 2010 Census questionnaire, 91% of the population identified themselves as being Africans or Afro-Bahamian, 5% Europeans or Euro-Bahamian and 2% of a mixed race (African and European). Three centuries prior, in 1722 when the first official census of The Bahamas was taken, 74% of the population was White and 26% Black.[60]

Afro-Bahamians are Bahamian nationals whose primary ancestry lies in West Africa. The first Africans to arrive in the Bahamas were freed slaves from Bermuda; they arrived with the Eleutheran Adventurers looking for new lives.

Since the colonial era of plantations, Africans or Afro-Bahamians have been the largest ethnic group in the Bahamas; in the 21st century, they account for some 91% of the country’s population.[60] The Haitian community is also largely of African descent and numbers about 80,000. Because of extremely high immigration of Haitians to the Bahamas, the Bahamian government began deporting illegal Haitian immigrants to their homeland in late 2014.[61]

At the 2010 census, 16,598 people (5% of the total population) were descendants of Europeans.[1] European Bahamians, or Bahamians of European and mixed European descent, form the largest minority and are mainly the descendants of English Puritans fleeing religious persecution in England and American Loyalists escaping the American Revolution, who arrived in 1649 and 1783, respectively.[62] Many Southern Loyalists went to the Abaco Islands, which had a roughly 50% European population as of 1985.[63] A small portion of the Euro-Bahamian population is descended from Greek labourers who came to help develop the sponging industry in the 1900s. They make up less than 1% of the nation’s population, but have still preserved their distinct Greek Bahamian culture.

The official language of the Bahamas is English. Many residents speak the Bahamian dialect.[64] According to 1995 estimates, 98.2% of the adult population is literate.

According to the International Religious Freedom Report 2008, prepared by the United States Bureau of Democracy, Human Rights and Labor, the islands’ population is predominantly Christian. Protestant denominations are widespread, with Baptists representing 35% of the population, Anglicans 15%, Pentecostals 8%, the Church of God 5%, Seventh-day Adventists 5% and Methodists 4%, but there is also a significant Roman Catholic community accounting for about 14%.[65] There are also smaller communities of Jews, Muslims, Baha’is, Hindus, Rastafarians and practitioners of Obeah.

In the less developed outer islands (or Family Islands), handicrafts include basketry made from palm fronds. This material, commonly called “straw”, is plaited into hats and bags that are popular tourist items. Another use is for so-called “Voodoo dolls”, even though such dolls are the result of the American imagination and not based on historic fact.[66]

A form of folk magic (obeah) is practiced by some Bahamians, mainly in the Family Islands (out-islands) of the Bahamas.[67] The practice of obeah is illegal in the Bahamas and punishable by law.[68]

Junkanoo is a traditional Afro-Bahamian street parade of ‘rushing’, music, dance and art held in Nassau (and a few other settlements) every Boxing Day and New Year’s Day. Junkanoo is also used to celebrate other holidays and events such as Emancipation Day.

Regattas are important social events in many family island settlements. They usually feature one or more days of sailing by old-fashioned work boats, as well as an onshore festival.

Many dishes are associated with Bahamian cuisine, which reflects Caribbean, African and European influences. Some settlements have festivals associated with the traditional crop or food of that area, such as the “Pineapple Fest” in Gregory Town, Eleuthera, or the “Crab Fest” on Andros. Other significant traditions include storytelling.

Bahamians have created a rich literature of poetry, short stories, plays and short fictional works. Common themes in these works are (1) an awareness of change, (2) a striving for sophistication, (3) a search for identity, (4) nostalgia for the old ways and (5) an appreciation of beauty. Some contributing writers are Susan Wallace, Percival Miller, Robert Johnson, Raymond Brown, O.M. Smith, William Johnson, Eddie Minnis and Winston Saunders.[69][70]

Bahamian culture is rich with beliefs, traditions, folklore and legend. The most well-known folklore and legends in the Bahamas include Lusca in Andros, Pretty Molly on Exuma, the Chickcharnies of Andros, and the Lost City of Atlantis on Bimini.

Sport is a significant part of Bahamian culture. The national sport is cricket, which has been played in the Bahamas since 1846.[71] It is the oldest sport still played in the country today. The Bahamas Cricket Association was formed as an organised body in 1936. From the 1940s to the 1970s, cricket was played by many Bahamians. The Bahamas is not a part of the West Indies Cricket Board, so players are not eligible to play for the West Indies cricket team. The late 1970s saw the game begin to decline in the country as teachers who had previously come from the United Kingdom with a passion for cricket were replaced by teachers who had been trained in the United States. These Bahamian physical education teachers had no knowledge of the game and instead taught track and field, basketball, baseball, softball,[72] volleyball[73] and football,[74] in which primary and high schools compete against each other. Today cricket is still enjoyed by a few locals and immigrants, usually from Jamaica, Guyana, Haiti and Barbados. Cricket is played on Saturdays and Sundays at Windsor Park and Haynes Oval.

The only other sporting event that began before cricket was horse racing, which started in 1796. The most popular spectator sports, however, are those imported from the United States, such as basketball,[75] American football[76] and baseball,[77] rather than from Great Britain, owing to the country’s close proximity to the United States, unlike its other Caribbean counterparts, where cricket has proven to be more popular.

Dexter Cambridge, Rick Fox and Ian Lockhart are a few Bahamians who joined fellow Bahamian Mychal Thompson of the Los Angeles Lakers in the NBA ranks,[78] and Buddy Hield is expected to join this group in 2016.[79] Over the years American football has become much more popular than association football, though it has not yet been implemented in the high school system. Leagues for teens and adults have been developed by the Bahamas American Football Federation.[80] However, association football, commonly known as ‘soccer’ in the country, is still a very popular sport amongst high school pupils; leagues are governed by the Bahamas Football Association. Recently the Bahamian government has been working closely with Tottenham Hotspur of London to promote the sport in the country, as well as to promote the Bahamas in the European market. In 2013, ‘Spurs’ became the first Premier League club to play an exhibition match in the Bahamas, facing the Jamaica national football team. Joe Lewis, the owner of the Tottenham Hotspur club, is based in the Bahamas.[81]

Other popular sports are swimming,[82] tennis[83] and boxing,[84] in which Bahamians have enjoyed some success at the international level. Sports such as golf,[85] rugby league,[86] rugby union[87] and beach soccer[88] are considered growing sports. Athletics, commonly known as track and field in the country, is by far the most successful sport amongst Bahamians, who have a strong tradition in the sprints and jumps. Thanks to that success, track and field is probably the most popular spectator sport in the country next to basketball. Triathlons are gaining popularity in Nassau and the Family Islands.

Bahamians have gone on to win numerous track and field medals at the Olympic Games, the IAAF World Championships in Athletics, the Commonwealth Games and the Pan American Games. Frank Rutherford was the country’s first Olympic medalist in athletics, winning bronze in the triple jump at the 1992 Summer Olympics.[89] Pauline Davis-Thompson, Debbie Ferguson, Chandra Sturrup, Savatheda Fynes and Eldece Clarke-Lewis teamed up for the country’s first Olympic gold medal in athletics when they won the 4 × 100 m relay at the 2000 Summer Olympics; they are affectionately known as the “Golden Girls”.[90] Tonique Williams-Darling became the country’s first individual Olympic gold medalist in athletics when she won the 400 m sprint at the 2004 Summer Olympics.[91] In 2007, with the disqualification of Marion Jones, Pauline Davis-Thompson was advanced to the gold-medal position in the 200 metres at the 2000 Olympics, predating Williams-Darling.


Human genome – Wikipedia

Posted: at 11:32 pm

Genomic information (idealized human diploid karyotype, showing the organization of the genome into chromosomes, with both the female (XX) and male (XY) versions of the 23rd chromosome pair, chromosomes aligned at their centromeres, and the mitochondrial DNA not shown): NCBI genome ID, 51; ploidy, diploid; genome size, 3,234.83 Mb (megabase pairs) per haploid genome.

The human genome is the complete set of nucleic acid sequences for humans (Homo sapiens), encoded as DNA within the 23 chromosome pairs in cell nuclei and in a small DNA molecule found within individual mitochondria. Human genomes include both protein-coding DNA genes and noncoding DNA. Haploid human genomes, which are contained in germ cells (the egg and sperm gamete cells created in the meiosis phase of sexual reproduction before fertilization creates a zygote), consist of three billion DNA base pairs, while diploid genomes (found in somatic cells) have twice the DNA content. While there are significant differences among the genomes of human individuals (on the order of 0.1%),[1] these are considerably smaller than the differences between humans and their closest living relatives, the chimpanzees (approximately 4%[2]) and bonobos.

The Human Genome Project produced the first complete sequences of individual human genomes, with the first draft sequence and initial analysis being published on February 12, 2001.[3] The human genome was the first of all vertebrates to be completely sequenced. As of 2012, thousands of human genomes have been completely sequenced, and many more have been mapped at lower levels of resolution. The resulting data are used worldwide in biomedical science, anthropology, forensics and other branches of science. There is a widely held expectation that genomic studies will lead to advances in the diagnosis and treatment of diseases, and to new insights in many fields of biology, including human evolution.

Although the sequence of the human genome has been (almost) completely determined by DNA sequencing, it is not yet fully understood. Most (though probably not all) genes have been identified by a combination of high throughput experimental and bioinformatics approaches, yet much work still needs to be done to further elucidate the biological functions of their protein and RNA products. Recent results suggest that most of the vast quantities of noncoding DNA within the genome have associated biochemical activities, including regulation of gene expression, organization of chromosome architecture, and signals controlling epigenetic inheritance.

There are an estimated 19,000–20,000 human protein-coding genes.[4] The estimate of the number of human genes has been repeatedly revised down from initial predictions of 100,000 or more as genome sequence quality and gene-finding methods have improved, and could continue to drop further.[5][6] Protein-coding sequences account for only a very small fraction of the genome (approximately 1.5%), and the rest is associated with non-coding RNA molecules, regulatory DNA sequences, LINEs, SINEs, introns, and sequences for which as yet no function has been determined.[7]

In June 2016, scientists formally announced HGP-Write, a plan to synthesize the human genome.[8][9]

The total length of the human genome is over 3 billion base pairs. The genome is organized into 22 paired chromosomes, plus the X chromosome (one in males, two in females) and, in males only, one Y chromosome. These are all large linear DNA molecules contained within the cell nucleus. The genome also includes the mitochondrial DNA, a comparatively small circular molecule present in each mitochondrion. Basic information about these molecules and their gene content, based on a reference genome that does not represent the sequence of any specific individual, is provided in the following table. (Data source: Ensembl genome browser release 68, July 2012)

Table 1 (above) summarizes the physical organization and gene content of the human reference genome, with links to the original analysis, as published in the Ensembl database at the European Bioinformatics Institute (EBI) and Wellcome Trust Sanger Institute. Chromosome lengths were estimated by multiplying the number of base pairs by 0.34 nanometers, the distance between base pairs in the DNA double helix. The number of proteins is based on the number of initial precursor mRNA transcripts, and does not include products of alternative pre-mRNA splicing, or modifications to protein structure that occur after translation.
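The length arithmetic described above is simple multiplication; here is a minimal sketch in Python, where the base-pair count for chromosome 1 is an assumed round figure used only for illustration.

```python
# Estimating the physical length of a chromosome: base pairs x 0.34 nm per bp.
BP_SPACING_NM = 0.34           # rise per base pair in the DNA double helix
chr1_bp = 249_000_000          # approximate size of chromosome 1 (assumed figure)

length_nm = chr1_bp * BP_SPACING_NM
print(f"chromosome 1 is roughly {length_nm / 1e7:.1f} cm of DNA")  # ~8.5 cm
```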

The number of variations is a summary of unique DNA sequence changes that have been identified within the sequences analyzed by Ensembl as of July, 2012; that number is expected to increase as further personal genomes are sequenced and examined. In addition to the gene content shown in this table, a large number of non-expressed functional sequences have been identified throughout the human genome (see below). Links open windows to the reference chromosome sequence in the EBI genome browser. The table also describes prevalence of genes encoding structural RNAs in the genome.

MicroRNA, or miRNA, functions as a post-transcriptional regulator of gene expression. Ribosomal RNA, or rRNA, makes up the RNA portion of the ribosome and is critical in the synthesis of proteins. Small nuclear RNA, or snRNA, is found in the nucleus of the cell; its primary function is in the processing of pre-mRNA molecules and in the regulation of transcription factors. Small nucleolar RNA, or snoRNA, primarily functions in guiding chemical modifications to other RNA molecules.

Although the human genome has been completely sequenced for all practical purposes, there are still hundreds of gaps in the sequence. A recent study noted more than 160 euchromatic gaps, of which 50 were subsequently closed.[10] However, there are still numerous gaps in the heterochromatic parts of the genome, which are much harder to sequence due to numerous repeats and other intractable sequence features.

The content of the human genome is commonly divided into coding and noncoding DNA sequences. Coding DNA is defined as those sequences that can be transcribed into mRNA and translated into proteins during the human life cycle; these sequences occupy only a small fraction of the genome (less than 2%).

Some noncoding DNA contains genes for RNA molecules with important biological functions (noncoding RNA, for example ribosomal RNA and transfer RNA). The exploration of the function and evolutionary origin of noncoding DNA is an important goal of contemporary genome research, including the ENCODE (Encyclopedia of DNA Elements) project, which aims to survey the entire human genome, using a variety of experimental tools whose results are indicative of molecular activity.

Because non-coding DNA greatly outnumbers coding DNA, the concept of the sequenced genome has become a more focused analytical concept than the classical concept of the DNA-coding gene.[11][12]

The mutation rate of the human genome is a very important factor in calculating evolutionary time points. Researchers counted the number of genetic variations between humans and apes; dividing that number by the age of the fossil of the most recent common ancestor of humans and apes yields the mutation rate. Recent studies using next-generation sequencing technologies concluded that the mutation rate is slower than previously assumed, which does not add up with the time points inferred from human migration patterns and suggests a new evolutionary time scale.[13] 100,000-year-old human fossils found in Israel have served to compound this newfound uncertainty about the human migration timeline.[13]
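The calculation itself is simple division; the sketch below uses hypothetical round numbers (the divergence fraction and split time are illustrative assumptions, not values from the studies cited).

```python
# Illustrative mutation-rate estimate: per-lineage differences / divergence time.
genome_bp = 3.2e9      # haploid human genome size, base pairs
divergence = 0.012     # assumed ~1.2% human-ape nucleotide divergence
split_years = 6.5e6    # assumed age of the most recent common ancestor, years

per_lineage_diffs = genome_bp * divergence / 2       # changes on each lineage
rate = per_lineage_diffs / split_years / genome_bp   # per site per year
print(f"~{rate:.1e} substitutions per site per year")  # ~9.2e-10
```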

Protein-coding sequences represent the most widely studied and best understood component of the human genome. These sequences ultimately lead to the production of all human proteins, although several biological processes (e.g. DNA rearrangements and alternative pre-mRNA splicing) can lead to the production of many more unique proteins than the number of protein-coding genes.

The complete modular protein-coding capacity of the genome is contained within the exome, and consists of DNA sequences encoded by exons that can be translated into proteins. Because of its biological importance, and the fact that it constitutes less than 2% of the genome, sequencing of the exome was the first major milepost of the Human Genome Project.

Number of protein-coding genes. About 20,000 human proteins have been annotated in databases such as Uniprot.[15] Historically, estimates for the number of protein genes have varied widely, ranging up to 2,000,000 in the late 1960s,[16] but several researchers pointed out in the early 1970s that the estimated mutational load from deleterious mutations placed an upper limit of approximately 40,000 for the total number of functional loci (this includes protein-coding and functional non-coding genes).[17]

The number of human protein-coding genes is not significantly larger than that of many less complex organisms, such as the roundworm and the fruit fly. This difference may result from the extensive use of alternative pre-mRNA splicing in humans, which provides the ability to build a very large number of modular proteins through the selective incorporation of exons.

Protein-coding capacity per chromosome. Protein-coding genes are distributed unevenly across the chromosomes, ranging from a few dozen to more than 2000, with an especially high gene density within chromosomes 19, 11, and 1 (Table 1). Each chromosome contains various gene-rich and gene-poor regions, which may be correlated with chromosome bands and GC-content[citation needed]. The significance of these nonrandom patterns of gene density is not well understood.[18]

Size of protein-coding genes. The size of protein-coding genes within the human genome shows enormous variability (Table 2). For example, the gene for histone H1a (HIST1H1A) is relatively small and simple, lacking introns and encoding an mRNA of 781 nt and a 215-amino-acid protein (648 nt open reading frame). Dystrophin (DMD) is the largest protein-coding gene in the human reference genome, spanning a total of 2.2 Mb, while titin (TTN) has the longest coding sequence (114,414 bp), the largest number of exons (363)[19] and the longest single exon (17,106 bp). Over the whole genome, the median size of an exon is 122 bp (mean = 145 bp), the median number of exons is 7 (mean = 8.8), and the median coding sequence encodes 367 amino acids (mean = 447 amino acids; Table 21 in [7]).
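The open-reading-frame figure quoted for histone H1a follows directly from the codon arithmetic; a minimal sketch:

```python
# 215 amino acids -> 215 coding codons plus one stop codon, at 3 nt per codon.
amino_acids = 215
orf_length_nt = (amino_acids + 1) * 3
print(orf_length_nt)  # 648, matching the 648 nt open reading frame cited above
```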

Table 2. Examples of human protein-coding genes. Chrom, chromosome. Alt splicing, alternative pre-mRNA splicing. (Data source: Ensembl genome browser release 68, July 2012)

Noncoding DNA is defined as all of the DNA sequences within a genome that are not found within protein-coding exons, and so are never represented within the amino acid sequence of expressed proteins. By this definition, more than 98% of the human genome is composed of noncoding DNA (ncDNA).

Numerous classes of noncoding DNA have been identified, including genes for noncoding RNA (e.g. tRNA and rRNA), pseudogenes, introns, untranslated regions of mRNA, regulatory DNA sequences, repetitive DNA sequences, and sequences related to mobile genetic elements.

Numerous sequences that are included within genes are also defined as noncoding DNA. These include genes for noncoding RNA (e.g. tRNA, rRNA), and untranslated components of protein-coding genes (e.g. introns, and 5′ and 3′ untranslated regions of mRNA).

Protein-coding sequences (specifically, coding exons) constitute less than 1.5% of the human genome.[7] In addition, about 26% of the human genome is introns.[20] Aside from genes (exons and introns) and known regulatory sequences (8–20%), the human genome contains regions of noncoding DNA. The exact amount of noncoding DNA that plays a role in cell physiology has been hotly debated. Recent analysis by the ENCODE project indicates that 80% of the entire human genome is either transcribed, binds to regulatory proteins, or is associated with some other biochemical activity.[6]

It remains controversial, however, whether all of this biochemical activity contributes to cell physiology, or whether a substantial portion of it is the result of transcriptional and biochemical noise, which must be actively filtered out by the organism.[21] Excluding protein-coding sequences, introns and regulatory regions, much of the non-coding DNA is composed of the classes of elements described below. Note also that many DNA sequences that do not play a role in gene expression have important biological functions: comparative genomics studies indicate that about 5% of the genome contains sequences of noncoding DNA that are highly conserved, sometimes on time-scales representing hundreds of millions of years, implying that these noncoding regions are under strong evolutionary pressure and positive selection.[22]

Many of these sequences regulate the structure of chromosomes by limiting the regions of heterochromatin formation and regulating structural features of the chromosomes, such as the telomeres and centromeres. Other noncoding regions serve as origins of DNA replication. Finally, several regions are transcribed into functional noncoding RNAs that regulate the expression of protein-coding genes (for example[23]), mRNA translation and stability (see miRNA), chromatin structure (including histone modifications, for example[24]), DNA methylation (for example[25]), DNA recombination (for example[26]), and that cross-regulate other noncoding RNAs (for example[27]). It is also likely that many transcribed noncoding regions do not serve any role and that this transcription is the product of non-specific RNA polymerase activity.[21]

Pseudogenes are inactive copies of protein-coding genes, often generated by gene duplication, that have become nonfunctional through the accumulation of inactivating mutations. Table 1 shows that the number of pseudogenes in the human genome is on the order of 13,000,[28] and in some chromosomes is nearly the same as the number of functional protein-coding genes. Gene duplication is a major mechanism through which new genetic material is generated during molecular evolution.

For example, the olfactory receptor gene family is one of the best-documented examples of pseudogenes in the human genome. More than 60 percent of the genes in this family are non-functional pseudogenes in humans. By comparison, only 20 percent of genes in the mouse olfactory receptor gene family are pseudogenes. Research suggests that this is a species-specific characteristic, as the most closely related primates all have proportionally fewer pseudogenes. This genetic discovery helps to explain the less acute sense of smell in humans relative to other mammals.[29]

Noncoding RNA molecules play many essential roles in cells, especially in the many reactions of protein synthesis and RNA processing. Noncoding RNAs include tRNA, ribosomal RNA, microRNA, snRNA and other non-coding RNA genes, including about 60,000 long non-coding RNAs (lncRNAs).[6][30][31][32] While the number of reported lncRNA genes continues to rise and the exact number in the human genome is yet to be defined, many of them are argued to be non-functional.[33]

Many ncRNAs are critical elements in gene regulation and expression. Noncoding RNA also contributes to epigenetics, transcription, RNA splicing, and the translational machinery. The role of RNA in genetic regulation and disease offers a new potential level of unexplored genomic complexity.[34]

In addition to the ncRNA molecules that are encoded by discrete genes, the initial transcripts of protein coding genes usually contain extensive noncoding sequences, in the form of introns, 5′-untranslated regions (5′-UTR), and 3′-untranslated regions (3′-UTR). Within most protein-coding genes of the human genome, the length of intron sequences is 10- to 100-times the length of exon sequences (Table 2).

The human genome has many different regulatory sequences which are crucial to controlling gene expression. Conservative estimates indicate that these sequences make up 8% of the genome,[35] but extrapolations from the ENCODE project suggest that 20%[36] to 40%[37] of the genome is gene-regulatory sequence. Some types of non-coding DNA are genetic “switches” that do not encode proteins but do regulate when and where genes are expressed (called enhancers).[38]

Regulatory sequences have been known since the late 1960s.[39] The first identification of regulatory sequences in the human genome relied on recombinant DNA technology.[40] Later, with the advent of genomic sequencing, the identification of these sequences could be inferred by evolutionary conservation. The evolutionary branching between the primates and mouse, for example, occurred 70–90 million years ago.[41] So computer comparisons of gene sequences that identify conserved non-coding sequences will be an indication of their importance in duties such as gene regulation.[42]

Other genomes have been sequenced with the same intention of aiding conservation-guided methods, for example the pufferfish genome.[43] However, regulatory sequences disappear and re-evolve during evolution at a high rate.[44][45][46]

As of 2012, the efforts have shifted toward finding interactions between DNA and regulatory proteins by the technique ChIP-Seq, or gaps where the DNA is not packaged by histones (DNase hypersensitive sites), both of which tell where there are active regulatory sequences in the investigated cell type.[35]

Repetitive DNA sequences comprise approximately 50% of the human genome.[47]

About 8% of the human genome consists of tandem DNA arrays or tandem repeats, low complexity repeat sequences that have multiple adjacent copies (e.g. “CAGCAGCAG…”).[citation needed] The tandem sequences may be of variable lengths, from two nucleotides to tens of nucleotides. These sequences are highly variable, even among closely related individuals, and so are used for genealogical DNA testing and forensic DNA analysis.[48]

Repeated sequences of fewer than ten nucleotides (e.g. the dinucleotide repeat (AC)n) are termed microsatellite sequences. Among the microsatellite sequences, trinucleotide repeats are of particular importance, as they sometimes occur within coding regions of genes for proteins and may lead to genetic disorders. For example, Huntington’s disease results from an expansion of the trinucleotide repeat (CAG)n within the Huntingtin gene on human chromosome 4. Telomeres (the ends of linear chromosomes) end with a microsatellite hexanucleotide repeat of the sequence (TTAGGG)n.
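As an illustration of what such a repeat expansion looks like computationally, here is a short sketch (not from the source) that measures the longest uninterrupted (CAG)n run in a DNA string:

```python
import re

def longest_cag_run(seq: str) -> int:
    """Return the length, in repeat units, of the longest uninterrupted (CAG)n run."""
    runs = re.findall(r"(?:CAG)+", seq.upper())
    return max((len(run) // 3 for run in runs), default=0)

print(longest_cag_run("ATGCAGCAGCAGCAGTTT"))  # -> 4
```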

Tandem repeats of longer sequences (arrays of repeated sequences 10–60 nucleotides long) are termed minisatellites.

Transposable genetic elements, DNA sequences that can replicate and insert copies of themselves at other locations within a host genome, are an abundant component in the human genome. The most abundant transposon lineage, Alu, has about 50,000 active copies,[49] and can be inserted into intragenic and intergenic regions.[50] One other lineage, LINE-1, has about 100 active copies per genome (the number varies between people).[51] Together with non-functional relics of old transposons, they account for over half of total human DNA.[52] Sometimes called “jumping genes”, transposons have played a major role in sculpting the human genome. Some of these sequences represent endogenous retroviruses, DNA copies of viral sequences that have become permanently integrated into the genome and are now passed on to succeeding generations.

Mobile elements within the human genome can be classified into LTR retrotransposons (8.3% of total genome), SINEs (13.1% of total genome) including Alu elements, LINEs (20.4% of total genome), SVAs and Class II DNA transposons (2.9% of total genome).

With the exception of identical twins, all humans show significant variation in genomic DNA sequences. The human reference genome (HRG) is used as a standard sequence reference.

There are several important points concerning the human reference genome; in particular, it is a composite assembled from the DNA of multiple donors, so it does not represent the genome of any single individual.

Most studies of human genetic variation have focused on single-nucleotide polymorphisms (SNPs), which are substitutions in individual bases along a chromosome. Most analyses estimate that SNPs occur, on average, about once in every 1,000 base pairs in the euchromatic human genome, although they do not occur at a uniform density. Thus follows the popular statement that “we are all, regardless of race, genetically 99.9% the same”,[53] although this would be somewhat qualified by most geneticists. For example, a much larger fraction of the genome is now thought to be involved in copy number variation.[54] A large-scale collaborative effort to catalog SNP variations in the human genome is being undertaken by the International HapMap Project.
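The "99.9%" figure follows from that density estimate by simple arithmetic; a back-of-envelope sketch:

```python
# One SNP per ~1,000 bp across a ~3.2 Gb genome.
genome_bp = 3_200_000_000
expected_snps = genome_bp // 1000   # ~3,200,000 SNP differences between two people
identity = 1 - 1 / 1000             # fraction of positions shared
print(expected_snps, f"{identity:.1%}")  # 3200000 99.9%
```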

The genomic loci and length of certain types of small repetitive sequences are highly variable from person to person, which is the basis of DNA fingerprinting and DNA paternity testing technologies. The heterochromatic portions of the human genome, which total several hundred million base pairs, are also thought to be quite variable within the human population (they are so repetitive and so long that they cannot be accurately sequenced with current technology). These regions contain few genes, and it is unclear whether any significant phenotypic effect results from typical variation in repeats or heterochromatin.

Most gross genomic mutations in gamete germ cells probably result in inviable embryos; however, a number of human diseases are related to large-scale genomic abnormalities. Down syndrome, Turner Syndrome, and a number of other diseases result from nondisjunction of entire chromosomes. Cancer cells frequently have aneuploidy of chromosomes and chromosome arms, although a cause and effect relationship between aneuploidy and cancer has not been established.

Whereas a genome sequence lists the order of every DNA base in a genome, a genome map identifies the landmarks. A genome map is less detailed than a genome sequence and aids in navigating around the genome.[55][56]

An example of a variation map is the HapMap being developed by the International HapMap Project. The HapMap is a haplotype map of the human genome, “which will describe the common patterns of human DNA sequence variation.”[57] It catalogs the patterns of small-scale variations in the genome that involve single DNA letters, or bases.

Researchers published the first sequence-based map of large-scale structural variation across the human genome in the journal Nature in May 2008.[58][59] Large-scale structural variations are differences in the genome among people that range from a few thousand to a few million DNA bases; some are gains or losses of stretches of genome sequence and others appear as re-arrangements of stretches of sequence. These variations include differences in the number of copies individuals have of a particular gene, deletions, translocations and inversions.

Single-nucleotide polymorphisms (SNPs) do not occur homogeneously across the human genome. In fact, there is enormous diversity in SNP frequency between genes, reflecting different selective pressures on each gene as well as different mutation and recombination rates across the genome. However, studies on SNPs are biased towards coding regions, so the data generated from them are unlikely to reflect the overall distribution of SNPs throughout the genome. Therefore, the SNP Consortium protocol was designed to identify SNPs with no bias towards coding regions, and the Consortium’s 100,000 SNPs generally reflect sequence diversity across the human chromosomes. The SNP Consortium aims to expand the number of SNPs identified across the genome to 300,000 by the end of the first quarter of 2001.[60]

Changes in non-coding sequence and synonymous changes in coding sequence are generally more common than non-synonymous changes, reflecting greater selective pressure reducing diversity at positions dictating amino acid identity. Transitional changes are more common than transversions, with CpG dinucleotides showing the highest mutation rate, presumably due to deamination.

A personal genome sequence is a (nearly) complete sequence of the chemical base pairs that make up the DNA of a single person. Because medical treatments have different effects on different people due to genetic variations such as single-nucleotide polymorphisms (SNPs), the analysis of personal genomes may lead to personalized medical treatment based on individual genotypes.[citation needed]

The first personal genome sequence to be determined was that of Craig Venter in 2007. Personal genomes had not been sequenced in the public Human Genome Project to protect the identity of volunteers who provided DNA samples. That sequence was derived from the DNA of several volunteers from a diverse population.[61] However, early in the Venter-led Celera Genomics genome sequencing effort the decision was made to switch from sequencing a composite sample to using DNA from a single individual, later revealed to have been Venter himself. Thus the Celera human genome sequence released in 2000 was largely that of one man. Subsequent replacement of the early composite-derived data and determination of the diploid sequence, representing both sets of chromosomes, rather than a haploid sequence originally reported, allowed the release of the first personal genome.[62] In April 2008, that of James Watson was also completed. Since then hundreds of personal genome sequences have been released,[63] including those of Desmond Tutu,[64][65] and of a Paleo-Eskimo.[66] In November 2013, a Spanish family made their personal genomics data obtained by direct-to-consumer genetic testing with 23andMe publicly available under a Creative Commons public domain license. This is believed to be the first such public genomics dataset for a whole family.[67]

The sequencing of individual genomes further unveiled levels of genetic complexity that had not been appreciated before. Personal genomics helped reveal the significant level of diversity in the human genome attributed not only to SNPs but to structural variations as well. However, the application of such knowledge to the treatment of disease and in the medical field is only in its very beginnings.[68] Exome sequencing has become increasingly popular as a tool to aid in the diagnosis of genetic disease because the exome contributes only 1% of the genomic sequence but accounts for roughly 85% of mutations that contribute significantly to disease.[69]

Most aspects of human biology involve both genetic (inherited) and non-genetic (environmental) factors. Some inherited variation influences aspects of our biology that are not medical in nature (height, eye color, ability to taste or smell certain compounds, etc.). Moreover, some genetic disorders only cause disease in combination with the appropriate environmental factors (such as diet). With these caveats, genetic disorders may be described as clinically defined diseases caused by genomic DNA sequence variation. In the most straightforward cases, the disorder can be associated with variation in a single gene. For example, cystic fibrosis is caused by mutations in the CFTR gene and is the most common recessive disorder in Caucasian populations, with over 1,300 different mutations known.[70]

Disease-causing mutations in specific genes are usually severe in terms of gene function and are fortunately rare, so genetic disorders are likewise individually rare. However, since there are many genes that can vary to cause genetic disorders, in aggregate they constitute a significant component of known medical conditions, especially in pediatric medicine. Molecularly characterized genetic disorders are those for which the underlying causal gene has been identified; currently, there are approximately 2,200 such disorders annotated in the OMIM database.[70]

Studies of genetic disorders are often performed by means of family-based studies. In some instances, population-based approaches are employed, particularly in the case of so-called founder populations such as those in Finland, French Canada, Utah and Sardinia. Diagnosis and treatment of genetic disorders are usually performed by a geneticist-physician trained in clinical/medical genetics. The results of the Human Genome Project are likely to provide increased availability of genetic testing for gene-related disorders, and eventually improved treatment. Parents can be screened for hereditary conditions and counselled on the consequences, the probability of inheritance, and how to avoid or ameliorate the condition in their offspring.

As noted above, there are many different kinds of DNA sequence variation, ranging from complete extra or missing chromosomes down to single nucleotide changes. It is generally presumed that much naturally occurring genetic variation in human populations is phenotypically neutral, i.e. has little or no detectable effect on the physiology of the individual (although there may be fractional differences in fitness defined over evolutionary time frames). Genetic disorders can be caused by any or all known types of sequence variation. To molecularly characterize a new genetic disorder, it is necessary to establish a causal link between a particular genomic sequence variant and the clinical disease under investigation. Such studies constitute the realm of human molecular genetics.

With the advent of the Human Genome Project and the International HapMap Project, it has become feasible to explore subtle genetic influences on many common disease conditions such as diabetes, asthma, migraine and schizophrenia. Although some causal links have been made between genomic sequence variants in particular genes and some of these diseases, often with much publicity in the general media, these are usually not considered to be genetic disorders per se, as their causes are complex, involving many different genetic and environmental factors. Thus there may be disagreement in particular cases whether a specific medical condition should be termed a genetic disorder. For many such disorders, the prevalence as well as the associated genes or chromosomes have been catalogued.

Comparative genomics studies of mammalian genomes suggest that approximately 5% of the human genome has been conserved by evolution since the divergence of extant lineages approximately 200 million years ago, containing the vast majority of genes.[72][73] The published chimpanzee genome differs from that of the human genome by 1.23% in direct sequence comparisons.[74] Around 20% of this figure is accounted for by variation within each species, leaving only ~1.06% consistent sequence divergence between humans and chimps at shared genes.[75] This nucleotide by nucleotide difference is dwarfed, however, by the portion of each genome that is not shared, including around 6% of functional genes that are unique to either humans or chimps.[76]

In other words, the considerable observable differences between humans and chimps may be due as much or more to genome-level variation in the number, function and expression of genes rather than to DNA sequence changes in shared genes. Indeed, even within humans, there has been found to be a previously unappreciated amount of copy number variation (CNV), which can make up as much as 5–15% of the human genome. In other words, between humans, there could be ±500,000,000 base pairs of DNA, some being active genes, others inactivated, or active at different levels. The full significance of this finding remains to be seen. On average, a typical human protein-coding gene differs from its chimpanzee ortholog by only two amino acid substitutions; nearly one third of human genes have exactly the same protein translation as their chimpanzee orthologs. A major difference between the two genomes is human chromosome 2, which is equivalent to a fusion product of chimpanzee chromosomes 12 and 13[77] (later renamed chromosomes 2A and 2B, respectively).
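The half-billion-base-pair figure follows from the quoted CNV range; a rough sketch:

```python
# Copy number variation spanning 5-15% of a ~3.2 Gb genome.
genome_bp = 3.2e9
low, high = 0.05, 0.15
print(f"{genome_bp * low:.0f} to {genome_bp * high:.0f} bp")
# -> 160000000 to 480000000 bp, i.e. up to roughly half a billion base pairs
```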

Humans have undergone an extraordinary loss of olfactory receptor genes during our recent evolution, which explains our relatively crude sense of smell compared to most other mammals. Evolutionary evidence suggests that the emergence of color vision in humans and several other primate species has diminished the need for the sense of smell.[78]

In September 2016, scientists reported that, based on human DNA genetic studies, all non-Africans in the world today can be traced to a single population that exited Africa between 50,000 and 80,000 years ago.[79]

The human mitochondrial DNA is of tremendous interest to geneticists, since it undoubtedly plays a role in mitochondrial disease. It also sheds light on human evolution; for example, analysis of variation in the human mitochondrial genome has led to the postulation of a recent common ancestor for all humans on the maternal line of descent (see Mitochondrial Eve).

Due to the lack of a system for checking for copying errors, mitochondrial DNA (mtDNA) has a more rapid rate of variation than nuclear DNA. This 20-fold higher mutation rate allows mtDNA to be used for more accurate tracing of maternal ancestry. Studies of mtDNA in populations have allowed ancient migration paths to be traced, such as the migration of Native Americans from Siberia or Polynesians from southeastern Asia. It has also been used to show that there is no trace of Neanderthal DNA in the European gene mixture inherited through purely maternal lineage.[80] Due to the restrictive all or none manner of mtDNA inheritance, this result (no trace of Neanderthal mtDNA) would be likely unless there were a large percentage of Neanderthal ancestry, or there was strong positive selection for that mtDNA (for example, going back 5 generations, only 1 of your 32 ancestors contributed to your mtDNA, so if one of these 32 was pure Neanderthal you would expect that ~3% of your autosomal DNA would be of Neanderthal origin, yet you would have a ~97% chance to have no trace of Neanderthal mtDNA).
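The five-generation example in the parenthesis above is just ancestor counting; a minimal sketch:

```python
# Of the 2**g ancestors g generations back, exactly one (the strict maternal
# line) passes on mtDNA, while each contributes ~1/2**g of the autosomal DNA.
g = 5
ancestors = 2 ** g                 # 32 ancestors five generations back
autosomal_share = 1 / ancestors    # expected autosomal contribution: ~3.1%
p_no_mtdna = 1 - 1 / ancestors     # chance the Neanderthal is not the matriline
print(f"{autosomal_share:.1%} autosomal share; "
      f"{p_no_mtdna:.1%} chance of no Neanderthal mtDNA")  # 3.1%; 96.9%
```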

Epigenetics describes a variety of features of the human genome that transcend its primary DNA sequence, such as chromatin packaging, histone modifications and DNA methylation, and which are important in regulating gene expression, genome replication and other cellular processes. Epigenetic markers strengthen and weaken transcription of certain genes but do not affect the actual sequence of DNA nucleotides. DNA methylation is a major form of epigenetic control over gene expression and one of the most highly studied topics in epigenetics. During development, the human DNA methylation profile experiences dramatic changes. In early germ line cells, the genome has very low methylation levels. These low levels generally describe active genes. As development progresses, parental imprinting tags lead to increased methylation activity.[81][82]

Epigenetic patterns can be identified between tissues within an individual as well as between individuals themselves. Identical genes that have differences only in their epigenetic state are called epialleles. Epialleles can be placed into three categories: those directly determined by an individual’s genotype, those influenced by genotype, and those entirely independent of genotype. The epigenome is also influenced significantly by environmental factors. Diet, toxins and hormones impact the epigenetic state. Studies in dietary manipulation have demonstrated that methyl-deficient diets are associated with hypomethylation of the epigenome. Such studies establish epigenetics as an important interface between the environment and the genome.[83]


How to increase serotonin in the human brain without drugs

Posted: October 17, 2016 at 1:20 am

For the last 4 decades, the question of how to manipulate the serotonergic system with drugs has been an important area of research in biological psychiatry, and this research has led to advances in the treatment of depression. Research on the association between various polymorphisms and depression supports the idea that serotonin plays a role, not only in the treatment of depression but also in susceptibility to depression and suicide. The research focus here has been on polymorphisms of the serotonin transporter, but other serotonin-related genes may also be involved.1–5 In the future, genetic research will make it possible to predict with increasing accuracy who is susceptible to depression. Much less attention has been given to how this information will be used for the benefit of individuals with a serotonin-related susceptibility to depression, and little evidence exists concerning strategies to prevent depression in those with such a susceptibility. Various studies have looked at early intervention in those with prodromal symptoms as well as at population strategies for preventing depression.6–11 Obviously, prevention is preferable to early intervention; moreover, although population strategies are important, they are ideally supplemented with preventive interventions that can be used over long periods of time in targeted individuals who do not yet exhibit even nonclinical symptoms. Clearly, pharmacologic approaches are not appropriate, and given the evidence for serotonin’s role in the etiology and treatment of depression, nonpharmacologic methods of increasing serotonin are potential candidates to test for their ability to prevent depression.

Another reason for pursuing nonpharmacologic methods of increasing serotonin arises from the increasing recognition that happiness and well-being are important, both as factors protecting against mental and physical disorders and in their own right.12–14 Conversely, negative moods are associated with negative outcomes. For example, the negative mood hostility is a risk factor for many disorders. For the sake of brevity, hostility is discussed here mainly in relation to one of the biggest sources of mortality, coronary heart disease (CHD). A meta-analysis of 45 studies demonstrated that hostility is a risk factor for CHD and for all-cause mortality.15 More recent research confirms this. Hostility is associated not only with the development of CHD but also with poorer survival in coronary artery disease (CAD) patients.16 Hostility may lead to decreased social support and social isolation,17 and low perceived social support is associated with greater mortality in those with CAD.18 Effects are not just limited to CHD. For example, the opposite of hostility, agreeableness, was a significant protective factor against mortality in a sample of older, frail participants.19

The constitution of the WHO states: “Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.”20 This may sound exaggerated, but positive mood within the normal range is an important predictor of health and longevity. In a classic study, those in the lowest quartile for positive emotions, rated from autobiographies written at a mean age of 22 years, died on average 10 years earlier than those in the highest quartile.21 Even taking into account possible confounders, other studies found the same solid link between feeling good and living longer.12 In a series of recent studies, negative emotions were associated with increased disability due to mental and physical disorders,22 increased incidence of depression,23 increased suicide24 and increased mortality25 up to 2 decades later. Positive emotions protected against these outcomes. A recent review including meta-analyses assessed cross-sectional, longitudinal and experimental studies and concluded that happiness is associated with and precedes numerous successful outcomes.26 Mood may influence social behaviour, and social support is one of the most studied psychosocial factors in relation to health and disease.27 Low social support is associated with higher levels of stress, depression, dysthymia and posttraumatic stress disorder and with increased morbidity and mortality from a host of medical illnesses.27

Research confirms what might be intuitively expected, that positive emotions and agreeableness foster congenial relationships with others.28,29 This in turn will create the conditions for an increase in social support.

Several studies found an association between measures related to serotonin and mood in the normal range. Lower platelet serotonin-2 receptor function was associated with lower mood in one study,30 whereas better mood was associated with higher blood serotonin levels in another.31 Two studies found that greater prolactin release in response to fenfluramine was associated with more positive mood.32,33 The idea that these associations indicate a causal association between serotonin function and mood within the normal range is consistent with a study demonstrating that, in healthy people with high trait irritability, tryptophan, relative to placebo, decreased quarrelsome behaviours, increased agreeable behaviours and improved mood.34 Serotonin may be associated with physical health as well as mood. In otherwise healthy individuals, a low prolactin response to the serotonin-releasing drug fenfluramine was associated with the metabolic syndrome, a risk factor for heart disease,35 suggesting that low serotonin may predispose healthy individuals to suboptimal physical as well as mental functioning.

Nonpharmacologic methods of raising brain serotonin may not only improve the mood and social functioning of healthy people (a worthwhile objective even without additional considerations) but would also make it possible to test the idea that increases in brain serotonin may help protect against the onset of various mental and physical disorders. Four strategies that are worth further investigation are discussed below.

The article by Perreau-Linck and colleagues36 (page 430 of this issue) provides an initial lead about one possible strategy for raising brain serotonin. Using positron emission tomography, they obtained a measure of serotonin synthesis in the brains of healthy participants who underwent positive, negative and neutral mood inductions. Reported levels of happiness were positively correlated and reported levels of sadness were negatively correlated with serotonin synthesis in the right anterior cingulate cortex. The idea that alterations in thought, either self-induced or due to psychotherapy, can alter brain metabolism is not new. Numerous studies have demonstrated changes in blood flow in such circumstances. However, reports related to specific transmitters are much less common. In one recent study, meditation was reported to increase release of dopamine.37 The study by Perreau-Linck and colleagues36 is the first to report that self-induced changes in mood can influence serotonin synthesis. This raises the possibility that the interaction between serotonin synthesis and mood may be 2-way, with serotonin influencing mood and mood influencing serotonin. Obviously, more work is needed to answer questions in this area. For example, is the improvement in mood associated with psychotherapy accompanied by increases in serotonin synthesis? If more precise information is obtained about the mental states that increase serotonin synthesis, will this help to enhance therapy techniques?

Exposure to bright light is a second possible approach to increasing serotonin without drugs. Bright light is, of course, a standard treatment for seasonal depression, but a few studies also suggest that it is an effective treatment for nonseasonal depression38 and also reduces depressed mood in women with premenstrual dysphoric disorder39 and in pregnant women suffering from depression.40 The evidence relating these effects to serotonin is indirect. In human postmortem brain, serotonin levels are higher in those who died in summer than in those who died in winter.41 A similar conclusion came from a study on healthy volunteers, in which serotonin synthesis was assessed by measurements of the serotonin metabolite 5-hydroxyindoleacetic acid (5-HIAA) in the venous outflow from the brain.42 There was also a positive correlation between serotonin synthesis and the hours of sunlight on the day the measurements were made, independent of season. In rats, serotonin is highest during the light part of the light-dark cycle, and this state is driven by the photic cycle rather than the circadian rhythm.43,44 The existence of a retino-raphe tract may help explain why, in experimental animals, neuronal firing rates, c-fos expression and the serotonin content in the raphe nuclei are responsive to retinal light exposure.44–48 In humans, there is certainly an interaction between bright light and the serotonin system. The mood-lowering effect of acute tryptophan depletion in healthy women is completely blocked by carrying out the study in bright light (3000 lux) instead of dim light.49

Relatively few generations ago, most of the world population was involved in agriculture and was outdoors for much of the day. This would have resulted in high levels of bright light exposure even in winter. Even on a cloudy day, the light outside can be greater than 1000 lux, a level never normally achieved indoors. In a recent study carried out at around latitude 45° N, daily exposure to light greater than 1000 lux averaged about 30 minutes in winter and only about 90 minutes in summer50 among people working at least 30 hours weekly; weekends were included. In this group, summer bright light exposure was probably considerably less than the winter exposure of our agricultural ancestors. We may be living in a bright light-deprived society. A large literature that is beyond the scope of this editorial exists on the beneficial effect of bright light exposure in healthy individuals. Lamps designed for the treatment of seasonal affective disorder, which provide more lux than is ever achieved by normal indoor lighting, are readily available, although incorporating their use into a daily routine may be a challenge for some. However, other strategies, both personal and institutional, exist. Light cafés, pioneered in Scandinavia, have come to the United Kingdom,51 and an Austrian village that receives no sunshine in the winter because of its surrounding mountains is building a series of giant mirrors to reflect sunlight into the valley.52 Better use of daylight in buildings is an issue that architects are increasingly aware of. Working indoors does not have to be associated with suboptimal exposure to bright light.

A third strategy that may raise brain serotonin is exercise. A comprehensive review of the relation between exercise and mood concluded that antidepressant and anxiolytic effects have been clearly demonstrated.53 In the United Kingdom, the National Institute for Health and Clinical Excellence, which works on behalf of the National Health Service and makes recommendations on treatments according to the best available evidence, has published a guide on the treatment of depression.54 The guide recommends treating mild clinical depression with various strategies, including exercise rather than antidepressants, because the risk-benefit ratio is poor for antidepressant use in patients with mild depression. Exercise improves mood in subclinical populations as well as in patients. The most consistent effect is seen when regular exercisers undertake aerobic exercise at a level with which they are familiar.53 However, some skepticism remains about the antidepressant effect of exercise, and the National Institute of Mental Health in the United States is currently funding a clinical trial of the antidepressant effect of exercise that is designed to overcome sources of potential bias and threats to internal and external validity that have limited previous research.55

Several lines of research suggest that exercise increases brain serotonin function in the human brain. Post and colleagues56 measured biogenic amine metabolites in cerebrospinal fluid (CSF) of patients with depression before and after they increased their physical activity to simulate mania. Physical activity increased 5-HIAA, but it is not clear whether this was due to increased serotonin turnover or to mixing of CSF from higher regions, which contain higher levels of 5-HIAA, with lumbar CSF (or to a combination of both mechanisms). Nonetheless, this finding stimulated many animal studies on the effects of exercise. For example, Chaouloff and colleagues57 showed that exercise increased tryptophan and 5-HIAA in rat ventricles. More recent studies using intracerebral dialysis have shown that exercise increases extracellular serotonin and 5-HIAA in various brain areas, including the hippocampus and cortex (for example, see58–60). Two different mechanisms may be involved in this effect. As reviewed by Jacobs and Fornal,61 motor activity increases the firing rates of serotonin neurons, and this results in increased release and synthesis of serotonin.62 In addition, there is an increase in the brain of the serotonin precursor tryptophan that persists after exercise.63

The largest body of work in humans looking at the effect of exercise on tryptophan availability to the brain is concerned with the hypothesis that fatigue during exercise is associated with elevated brain tryptophan and serotonin synthesis. A large body of evidence supports the idea that exercise, including exercise to fatigue, is associated with an increase in plasma tryptophan and a decrease in the plasma level of the branched chain amino acids (BCAAs) leucine, isoleucine and valine (see64,65 for reviews). The BCAAs inhibit tryptophan transport into the brain.66 Because of the increase in plasma tryptophan and decrease in BCAA, there is a substantial increase in tryptophan availability to the brain. Tryptophan is an effective mild hypnotic,67 a fact that stimulated the hypothesis that it may be involved in fatigue. A full discussion of this topic is not within the scope of this editorial; however, it is notable that several clinical trials of BCAA investigated whether it was possible to counter fatigue by lowering brain tryptophan, with results that provided little support for the hypothesis. Further, exercise results in an increase in the plasma ratio of tryptophan to the BCAAs before the onset of fatigue.64,65 The conclusion of these studies is that, in humans, a rise in precursor availability should increase serotonin synthesis during and after exercise and that this is not related to fatigue, although it may be related to improved mood. Whether motor activity increases the firing rate of serotonin neurons in humans, as in animals, is not known. However, it is clear that aerobic exercise can improve mood.
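
The arithmetic behind this precursor argument is simple enough to make concrete. Below is a minimal sketch in Python; every plasma value in it is a hypothetical, illustrative number (not data from the cited studies), chosen only to show why a modest rise in tryptophan combined with a fall in the BCAAs increases the ratio that governs tryptophan transport into the brain.

```python
# A minimal sketch with hypothetical plasma concentrations (micromol/L).
# It illustrates the direction of change described above: during exercise,
# plasma tryptophan rises while the branched-chain amino acids (BCAAs) that
# compete with it for brain transport fall, so the Trp:BCAA ratio rises.

def trp_bcaa_ratio(trp, leu, ile, val):
    """Ratio of plasma tryptophan to the sum of the three BCAAs."""
    return trp / (leu + ile + val)

rest = trp_bcaa_ratio(trp=50, leu=120, ile=60, val=220)      # hypothetical resting values
exercise = trp_bcaa_ratio(trp=65, leu=100, ile=50, val=180)  # hypothetical values during exercise

print(f"rest:     Trp/BCAA = {rest:.3f}")      # ~0.125
print(f"exercise: Trp/BCAA = {exercise:.3f}")  # ~0.197, i.e. greater availability
```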

As with exposure to bright light, there has been a large change in the level of vigorous physical exercise experienced since humans were hunter-gatherers or engaged primarily in agriculture.68 Lambert68 argued that the decline in vigorous physical exercise and, in particular, in effort-based rewards may contribute to the high level of depression in today’s society. The effect of exercise on serotonin suggests that the exercise itself, not the rewards that stem from exercise, may be important. If trials of exercise to prevent depression are successful, then prevention of depression can be added to the numerous other benefits of exercise.

The fourth factor that could play a role in raising brain serotonin is diet. According to some evidence, tryptophan, which increases brain serotonin in humans as in experimental animals,69 is an effective antidepressant in mild-to-moderate depression.67,70 Further, in healthy people with high trait irritability, it increases agreeableness, decreases quarrelsomeness and improves mood.34 However, whether tryptophan should be considered primarily as a drug or a dietary component is a matter of some dispute. In the United States, it is classified as a dietary component, but Canada and some European countries classify it as a drug. Treating tryptophan as a drug is reasonable because, first, there is normally no situation in which purified tryptophan is needed for dietary reasons, and second, purified tryptophan and foods containing tryptophan have different effects on brain serotonin. Although purified tryptophan increases brain serotonin, foods containing tryptophan do not.71 This is because tryptophan is transported into the brain by a transport system that is active toward all the large neutral amino acids, and tryptophan is the least abundant amino acid in protein. There is competition between the various amino acids for the transport system, so after the ingestion of a meal containing protein, the rise in the plasma level of the other large neutral amino acids will prevent the rise in plasma tryptophan from increasing brain tryptophan. The idea, common in popular culture, that a high-protein food such as turkey will raise brain tryptophan and serotonin is, unfortunately, false. Another popular myth that is widespread on the Internet is that bananas improve mood because of their serotonin content. Although it is true that bananas contain serotonin, it does not cross the blood-brain barrier.
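
The same ratio logic explains the dietary point. The sketch below, again in Python with purely hypothetical plasma values, contrasts a protein meal (tryptophan and its competitors all rise, but the competitors rise more) with purified tryptophan (only tryptophan rises); brain uptake tracks the ratio of tryptophan to the competing large neutral amino acids (LNAAs).

```python
# A minimal sketch with hypothetical plasma concentrations (micromol/L),
# illustrating why purified tryptophan raises brain tryptophan while a
# protein-containing meal does not.

def trp_lnaa_ratio(trp, lnaa):
    """Ratio of plasma tryptophan to the summed competing LNAAs."""
    return trp / sum(lnaa)

# Hypothetical fasting levels of the competitors:
# leucine, isoleucine, valine, phenylalanine, tyrosine.
fasting_lnaa = [120, 60, 220, 55, 60]

baseline = trp_lnaa_ratio(50, fasting_lnaa)
# A protein meal raises tryptophan a little but raises the competitors more,
# because tryptophan is the least abundant amino acid in protein.
after_meal = trp_lnaa_ratio(60, [180, 90, 330, 80, 90])
# Purified tryptophan raises only tryptophan; competitors stay at fasting levels.
after_purified = trp_lnaa_ratio(150, fasting_lnaa)

print(f"baseline:     {baseline:.3f}")        # ~0.097
print(f"protein meal: {after_meal:.3f}")      # ~0.078 -- the ratio actually falls
print(f"purified Trp: {after_purified:.3f}")  # ~0.291 -- the ratio roughly triples
```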

α-Lactalbumin, a minor constituent of milk, is one protein that contains relatively more tryptophan than most proteins. Acute ingestion of α-lactalbumin by humans can improve mood and cognition in some circumstances, presumably owing to increased serotonin.72,73 Enhancing the tryptophan content of the diet chronically with α-lactalbumin is probably not practical. However, increasing the tryptophan content of the diet relative to that of the other amino acids is something that possibly occurred in the past and could occur again in the future. Kerem and colleagues74 studied the tryptophan content of both wild chickpeas and the domesticated chickpeas that were bred from them in the Near East in Neolithic times. The mean protein content (per mg dry seed) was similar for 73 cultivars and 15 wild varieties. In the cultivated group, however, the tryptophan content was almost twice that of the wild seeds. Interestingly, the greater part of the increase was due to an increase in the free tryptophan content (i.e., not part of the protein). In cultivated chickpeas, almost two-thirds of the tryptophan was in the free form. Kerem and colleagues74 argue that there was probably selection for seeds with a higher tryptophan content. This is plausible, given another example of an early strategy to increase the available tryptophan content of an important food source. Pellagra is a disorder caused by niacin deficiency, usually owing to poverty and a diet relying heavily on corn (maize), which has a low level of niacin and its precursor tryptophan. Cultures in the Americas that relied greatly on corn used alkali during its processing (e.g., boiling the corn in lime when making tortillas). This enhanced the nutritional quality of the corn by increasing the bioavailability of both niacin and tryptophan, a practice that prevented pellagra.75 The Europeans transported corn around the world but did not transport the traditional alkali-processing methods, thereby causing epidemics of pellagra in past centuries. Breeding corn with a higher tryptophan content was shown in the 1980s to prevent pellagra76; presumably, it also raised brain serotonin. In a recent issue of Nature Biotechnology, Morris and Sands77 argue that plant breeders should be focusing more on nutrition than on yield. They ask, "Could consumption of tryptophan-rich foods play a role in reducing the prevalence of depression and aggression in society?" Cross-national studies have reported a positive association between corn consumption and homicide rates78 and a negative association between dietary tryptophan and suicide rates.79 Although the idea behind such studies is interesting, any causal attribution must remain speculative, given the possible confounds. Nonetheless, the possibility that the mental health of a population could be improved by increasing the dietary intake of tryptophan relative to the dietary intake of other amino acids remains an interesting idea that should be explored.

The primary purpose of this editorial is to point out that pharmacologic strategies are not the only ones worthy of study when devising strategies to increase brain serotonin function. The effect of nonpharmacologic interventions on brain serotonin and the implications of increased serotonin for mood and behaviour need to be studied more. The amount of money and effort put into research on drugs that alter serotonin is very much greater than that put into nonpharmacologic methods. The magnitude of the discrepancy is probably neither in tune with the wishes of the public nor optimal for progress in the prevention and treatment of mental disorders.

See the original post:
How to increase serotonin in the human brain without drugs

Posted in Human Longevity | Comments Off on How to increase serotonin in the human brain without drugs

Casino Gambling Web | Best Online Gambling News and Casinos …

Posted: October 13, 2016 at 5:36 am

The Top Online Casino Gambling News Reporting Site Since 2002!

Latest News From the Casino Gambling Industry

Cheers and Jeers Abound for New UK Online Gambling Law (May 19, 2014): The new UK betting law is expected to be finalized by July 1st and go into effect by September 1st. However, many are concerned the law could create another wild-west situation in the UK…

Speculation on Casino Gambling Legalization in Japan Continues (May 13, 2014): LVS owner Sheldon Adelson continues to create gambling news across the world, this time in Japan as he salivates at the possibility of legalization before the 2020 Olympics…

LVS Owner Adelson Pulling the Strings of Politicians in the US (May 8, 2014): Las Vegas Sands is playing the political system, and its owner, Sheldon Adelson, is the puppet master behind the curtain pulling the strings, according to new reports…

New Jersey Bets Big on Sports Gambling, Loses - So Far… (May 5, 2014): Governor Chris Christie may need a win in the Supreme Court to justify his defense for his initiative to legalize sports betting in the state…

Tribal And Private Gaming Owners Square Off In Massachusetts (April 28, 2014): Steve Wynn and the Mohegan Sun are squaring off in a battle for a casino license in Massachusetts, and the two have vastly different views of how regulations are being constructed…

Below is a quick guide to the best gambling sites online. One is for USA players, the other is for players in the rest of the world. Good luck!

As laws change in 2012, the internet poker craze is set to boom once again in North America. Bovada, formerly known as Bodog, is one of the only sites that weathered the storm, and they are now the best place to play online. More players gamble here than anywhere else.

The goal of Casino Gambling Web is to provide each of our visitors with an insider’s view of every aspect of the gambling world. We have over 30 feeds releasing news to more than 30 specific gaming related categories in order to achieve our important goal of keeping you well updated and informed.

The main sections of our site are broken up into 5 broad areas of gambling news. The first area of news we cover is about issues concerning brick and mortar casinos like those found in Atlantic City, Las Vegas, the Gulf Coast Region, and, well, now the rest of the USA. The second area of gambling news we cover concerns itself with the Internet casino community. We also have reporters who cover the international poker community and the world of sports gambling. And finally, we cover news about the law when it affects any part of the gambling community; such legal news could include information on updates to the UIGEA, or issues surrounding gambling petitions to repeal that law, or information and stories related to new poker laws that are constantly being debated in state legislatures.

We go well beyond simply reporting the news. We get involved with the news and sometimes we even become the news. We pride ourselves on providing follow-up coverage to individual news stories. We had reporters in Washington D.C. on the infamous night when the internet gambling ban was passed by a congress, led by former senator Bill Frist, that has since been shown to be corrupt, and we have staff constantly digging to get important details to American citizens. We had reporters at the World Series of Poker in Las Vegas when Jamie Gold won his ring and changed the online gambling world, and we have representatives playing in the tournament each and every year.

It is our pleasure and proud duty to serve as a reliable source of gambling news and quality online casino reviews for all of the international gaming community. Please take a few moments to look around our site and discover why we, and most other insiders of the industry, have considered CGW the #1 Top Casino Gambling News Reporting Organization since 2002.

The United States changed internet gambling when they passed the Unlawful Internet Gambling Enforcement Act (UIGEA), so now when searching for top online casinos you must focus your energies on finding post-UIGEA information as opposed to pre-UIGEA information. Before the law passed you could find reliable info on most gambling portals across the internet. Most of those portals simply advertised casinos and gambling sites that were tested and approved by eCogra, and in general you would be hard pressed to find an online casino that had a bad reputation. However, now that these gambling sites have been forced out of the US, they may be changing how they run their business. That is why it is important to get your information from reliable sources who have been following the industry and keeping up with which companies have remained honorable. So good luck and happy hunting!

The Unlawful Internet Gambling Enforcement Act (UIGEA), in short, states that anything that may be illegal on a state level is now also illegal on a federal level. However, the day after Christmas in 2011, President Barack Obama’s administration delivered what the online gaming industry will view forever as a great big beautifully wrapped present. The government released a statement declaring that the 1961 Federal Wire Act only covers sports betting. What this means for the industry on an international level is still unknown, but what it means in the USA is that states can begin running online poker sites and selling lottery tickets to their citizens within their borders. The EU and WTO will surely have some analysis and we will keep you updated as this situation unfolds. Be sure to check with state laws before you start to gamble online.

The UK was the first high-power territory to legalize and regulate gambling online, with a law that came into force in 2007. They allow all forms of betting but have strict requirements on advertisers. They first attracted offshore companies to come on land, which gave the gambling companies who complied the appearance of legitimacy. However, high taxes forced many who originally came to land back out to sea, and the battle forever rages on; but on the whole, the industry regulations have proven greatly successful and have since served as a model for other gaming-enlightened countries around the world.

Since then, many European countries have regulated the industry, breaking up long-term monopolies, sometimes even breaking up government-backed empires, finally allowing competition - and the industry across the globe (outside of the USA) is thriving with rave reviews, even from those who are most interested in protecting the innocent and vulnerable members of society.

We strive to provide our visitors with the most valuable information about problem gambling and addiction in society. We have an entire section of our site dedicated to news about the subject. When a state or territory implements new technology to safeguard itself from allowing problem gamblers to proliferate, we will report it to you. If there is a new story that reveals some positive or negative information about gambling as it is related to addiction, we will report it to you. And if you think you have a problem with gambling right now, please visit Gamblers Anonymous.

In order to get all the information you need about this industry it is important to visit Wiki’s Online Gambling page. It provides an unbiased view of the current state of the Internet gambling industry. If you are interested in learning about other issues you may also enjoy visiting the National Council on Problem Gambling, a righteous company whose sole purpose is to help protect and support problem gamblers. They have a lot of great resources for anyone interested in learning more.

Read the original post:

Casino Gambling Web | Best Online Gambling News and Casinos …

Posted in Gambling | Comments Off on Casino Gambling Web | Best Online Gambling News and Casinos …

Holidays to the Caribbean 2016 / 2017 | loveholidays.com

Posted: October 6, 2016 at 2:58 pm


Think Caribbean and you think beach holiday. And you certainly won't find a better destination for lounging in the sand, preferably with something rum-based nearby. That isn't nearly all that these islands have to offer though. Rain forests and mountains for starters; distinctive island cultures that only have providing a good time in common; and exciting towns and cities with some fascinating history.

You've got a picture of the perfect Caribbean island in your head already: the palm tree-fringed, white-sand beach, the funky little beach bar under the trees, yachts sailing by on the deep blue sea. The good news is that you've got it just right; the Caribbean more than lives up to the most demanding expectations.

You might want to replace that tin-shack bar in your fantasy with a big, luxurious, all-inclusive resort hotel. And that's easily enough done. The islands of the Caribbean are very used to welcoming guests, and they do it in style. High quality customer service and endless pampering is top of the agenda here.

But if you're worried about the effect all that good living is going to have on the beach body you spent months working to perfect, you can throw in a very healthy dose of activities while you're at it. The islands all have excellent water sports on tap. Diving's a particular favourite because the underwater picture here is as colourful as the one above the waves. There are also inland adventures to be had, from off-roading or zip-wiring through unspoiled jungle to climbing extinct volcanoes and canyoning in mountain streams.

The Caribbean's far from being one-dimensional. There are more than 7,000 islands in the group. Though only 13 of them are inhabited island nations, they are a colourful cocktail of distinctive cultures, unique environments, and long, storied histories.

The Dominican Republic is the most popular island with visitors. It's a perfect mix of beach resort luxury, tropical rainforest paradise, and pretty colonial towns. Trinidad is the capital of carnival, where a party of some sort is never far from breaking out. To Jamaica's beautiful beaches are added a super-laid-back attitude and a rich musical culture. Antigua fits the desert island dream to a tee.

Cuba, just opening up to America again, is the Caribbean's biggest, most populated island, an intriguing cultural stew of cuisines, cultures and rhythms that, along with the rum, will leave you intoxicated.

As holiday destinations the islands of the Caribbean offer something for everyone. They're a brilliant family destination with loads of attractions and days out for kids. For romantic souls there's nothing like a Caribbean sunset to tick the box. You might want to return for your honeymoon or even to get married on the beach. But if a beach towel, a book and a planter's punch is all you need, you'll never find anywhere better to lie back and soak in relaxation.

What a lot of choices this diverse little box of treasures holds. The beaches and resort hotels at the likes of Punta Cana are all-inclusive paradises. Kick off your sandals for a pair of boots and you could be hiking through rain forest or up Pico Duarte, the Caribbean's tallest mountain. Historic rum factories are uncorked around Puerto Plata. Santo Domingo, the island's capital, was the first port of call for Christopher Columbus on his way to the New World and is a beautiful UNESCO-protected historic town.

The Dominican Republic is made for family or his-and-her beach breaks, with big resort hotels offering brilliant value and all-inclusive facilities with perfect sands and crystal-clear waters.

Jungle tumbles down the dramatic mountains in the interior. Head for the hills and get ready to explore an unspoiled new world and release your inner Bear Grylls with rainforest adventure sports.

Get ready to change your desert island preconceptions in beautiful Santo Domingo, where modern high-rises stand side-by-side with the oldest European buildings in the Caribbean. It's lively, laid-back, and enormous fun.

Food is an obsession with the Dominican locals, and if you're a visitor you should be no different. Super-fresh fish, spicy meat stews, straight-from-the-tree fruit juice and some of the best rum and coffee in the world are highlights.

The big beach resorts around Punta Cana, La Romana, Samana and Puerto Plata offer great value all-inclusive access to some of the best beaches in the world.

Santo Domingo is a UNESCO World Heritage Site, with 16th-century churches, plazas and forts standing over a beautiful port. There are good museums to explain the island's place in world history too.

There's more UNESCO protection for the pristine Eastern National Park (Parque Nacional del Este), an internationally important land and sea wildlife reserve full of colourful species from pelicans to dolphins.

With its long-established British links, Jamaica's a top destination for UK sun seekers. They've got good reason to love it. The beaches are classically Caribbean with white sand, palm trees, coral reefs and blue waters. Then there are the forests, mountains, waterfalls and banana plantations: pure beauty. Finally, the people, the music, the food, the culture; they're all as wonderful, welcoming and worth exploring as you've been led to believe.

Lying back on a perfect island beach. Seven Mile Beach in Negril has room to spread out. Montego Bay is busy with beach bars and water sports. You can surf at Boston Bay Beach in Port Antonio, or lose yourself on Winnifred Beach, a favourite with the locals as well as seclusion-seeking visitors.

Climbing the Blue Mountain Peak is just one inland adventure to experience on this stunningly beautiful island. The Blue Hole springs at Ocho Rios, the Dunn's River Falls, the cliffs at Negril - Jamaica is packed with natural wonders to discover.

Dancing the night away is expected in the home of reggae. There's more to Jamaican musical and party culture than Bob Marley though. From African-inspired folk songs or church gospel to booming dancehall beats and street sound systems, everything's got passion and rhythm.

Eating like royalty is every Jamaican's birthright! The cuisine is spicy and international, mixing African, European and Latin American flavours. With fantastic local produce (yam, plantain, fish, goat, fruit) to conjure with, Jamaican food is as rich and diverse as the island's landscapes.

From jumping Montego Bay to fashionable Seven Mile Beach or isolated Treasure Beach, Jamaica's coastline is one of the best for sun and sand in the world. And guess what you'll find at Reggae Beach?

Jamaica has a proud cultural heritage with music just the best known of its exports. Historic houses and capital-city museums celebrate everyone from Noel Coward to Bob Marley. The best way to understand it all is just to dive in and immerse yourself.

The twin islands of Antigua and smaller Barbuda are as beautiful as any in the Caribbean. The reefs around the shore make the islands' diving really rewarding. Smaller and less-developed than some of the islands but with 365 beaches, Antigua has room for everyone on its sands.

Everything that makes the Caribbean great (a good choice of top-quality resorts; party people; beaches and jungles; a beautiful historic capital) can be found in spades in Barbados. Bridgetown has UNESCO World Heritage Status, but the beaches and wild interior don't need any certification to confirm their timeless beauty.

The times are changing in Cuba. But it's the years of time standing relatively still that give Havana's crumbling, colourful facades and classic American motors much of their charm. Elsewhere there are resorts and beaches to match any in the region, and a rum, a cigar and some Afro-Cuban beats are the icing on a colourful cake.

Trinidad (busy and relatively built up) and Tobago (chilled and empty) are a beautiful contrast. Party in Port of Spain or zip wire through Tobago's protected forests before lying back on the pink-tinged sands.

St Lucia is a supremely romantic island, its mountains and waterfalls stealing the hearts of many a visitor. Brilliant beach-front resorts include the famous Sandals brand. A party can always be found in Gros Islet, and peace and quiet is the hallmark of Choc Bay.

See more here:

Holidays to the Caribbean 2016 / 2017 | loveholidays.com

Posted in Caribbean | Comments Off on Holidays to the Caribbean 2016 / 2017 | loveholidays.com

Medicine – Wikipedia, the free encyclopedia

Posted: October 1, 2016 at 1:45 am

Medicine (British English /ˈmɛdsɪn/; American English /ˈmɛdɪsɪn/) is the science and practice of the diagnosis, treatment, and prevention of disease.[1][2] The word medicine is derived from Latin medicus, meaning “a physician”.[3][4] Medicine encompasses a variety of health care practices evolved to maintain and restore health by the prevention and treatment of illness. Contemporary medicine applies biomedical sciences, biomedical research, genetics, and medical technology to diagnose, treat, and prevent injury and disease, typically through pharmaceuticals or surgery, but also through therapies as diverse as psychotherapy, external splints and traction, medical devices, biologics, and ionizing radiation, amongst others.[5]

Medicine has existed for thousands of years, during most of which it was an art (an area of skill and knowledge) frequently having connections to the religious and philosophical beliefs of local culture. For example, a medicine man would apply herbs and say prayers for healing, or an ancient philosopher and physician would apply bloodletting according to the theories of humorism. In recent centuries, since the advent of modern science, most medicine has become a combination of art and science (both basic and applied, under the umbrella of medical science). While stitching technique for sutures is an art learned through practice, the knowledge of what happens at the cellular and molecular level in the tissues being stitched arises through science.

Prescientific forms of medicine are now known as traditional medicine and folk medicine. They remain commonly used with or instead of scientific medicine and are thus called alternative medicine. For example, evidence on the effectiveness of acupuncture is “variable and inconsistent” for any condition,[6] but is generally safe when done by an appropriately trained practitioner.[7] In contrast, treatments outside the bounds of safety and efficacy are termed quackery.

Medical availability and clinical practice varies across the world due to regional differences in culture and technology. Modern scientific medicine is highly developed in the Western world, while in developing countries such as parts of Africa or Asia, the population may rely more heavily on traditional medicine with limited evidence and efficacy and no required formal training for practitioners.[8] Even in the developed world however, evidence-based medicine is not universally used in clinical practice; for example, a 2007 survey of literature reviews found that about 49% of the interventions lacked sufficient evidence to support either benefit or harm.[9]

In modern clinical practice, doctors personally assess patients in order to diagnose, treat, and prevent disease using clinical judgment. The doctor-patient relationship typically begins with an examination of the patient’s medical history and medical record, followed by a medical interview[10] and a physical examination. Basic diagnostic medical devices (e.g. stethoscope, tongue depressor) are typically used. After examination for signs and interviewing for symptoms, the doctor may order medical tests (e.g. blood tests), take a biopsy, or prescribe pharmaceutical drugs or other therapies. Differential diagnosis methods help to rule out conditions based on the information provided. During the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust. The medical encounter is then documented in the medical record, which is a legal document in many jurisdictions.[11] Follow-ups may be shorter but follow the same general procedure, and specialists follow a similar process. The diagnosis and treatment may take only a few minutes or a few weeks depending upon the complexity of the issue.

The components of the medical interview[10] and encounter are:

The physical examination is the examination of the patient for medical signs of disease, which are objective and observable, in contrast to symptoms which are volunteered by the patient and not necessarily objectively observable.[12] The healthcare provider uses the senses of sight, hearing, touch, and sometimes smell (e.g., in infection, uremia, diabetic ketoacidosis). Four actions are the basis of physical examination: inspection, palpation (feel), percussion (tap to determine resonance characteristics), and auscultation (listen), generally in that order although auscultation occurs prior to percussion and palpation for abdominal assessments.[13]

The clinical examination involves the study of:

It is likely to focus on areas of interest highlighted in the medical history and may not include everything listed above.

The treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. Follow-up may be advised. Depending upon the health insurance plan and the managed care system, various forms of “utilization review”, such as prior authorization of tests, may place barriers on accessing expensive services.[14]

The medical decision-making (MDM) process involves analysis and synthesis of all the above data to come up with a list of possible diagnoses (the differential diagnoses), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient’s problem.

On subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, and lab or imaging results or specialist consultations.

Contemporary medicine is in general conducted within health care systems. Legal, credentialing and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. The characteristics of any given health care system have significant impact on the way medical care is provided.

From ancient times, Christian emphasis on practical charity gave rise to the development of systematic nursing and hospitals and the Catholic Church today remains the largest non-government provider of medical services in the world.[15] Advanced industrial countries (with the exception of the United States)[16][17] and many developing countries provide medical services through a system of universal health care that aims to guarantee care for all through a single-payer health care system, or compulsory private or co-operative health insurance. This is intended to ensure that the entire population has access to medical care on the basis of need rather than ability to pay. Delivery may be via private medical practices or by state-owned hospitals and clinics, or by charities, most commonly by a combination of all three.

Most tribal societies provide no guarantee of healthcare for the population as a whole. In such societies, healthcare is available to those that can afford to pay for it or have self-insured it (either directly or as part of an employment contract) or who may be covered by care financed by the government or tribe directly.

Transparency of information is another factor defining a delivery system. Access to information on conditions, treatments, quality, and pricing greatly affects the choice by patients/consumers and, therefore, the incentives of medical professionals. While the US healthcare system has come under fire for lack of openness,[18] new legislation may encourage greater openness. There is a perceived tension between the need for transparency on the one hand and such issues as patient confidentiality and the possible exploitation of information for commercial gain on the other.

Provision of medical care is classified into primary, secondary, and tertiary care categories.

Primary care medical services are provided by physicians, physician assistants, nurse practitioners, or other health professionals who have first contact with a patient seeking medical treatment or care. These occur in physician offices, clinics, nursing homes, schools, home visits, and other places close to patients. About 90% of medical visits can be treated by the primary care provider. These include treatment of acute and chronic illnesses, preventive care and health education for all ages and both sexes.

Secondary care medical services are provided by medical specialists in their offices or clinics or at local community hospitals for a patient referred by a primary care provider who first diagnosed or treated the patient. Referrals are made for those patients who require the expertise or procedures performed by specialists. These include both ambulatory care and inpatient services, emergency rooms, intensive care medicine, surgery services, physical therapy, labor and delivery, endoscopy units, diagnostic laboratory and medical imaging services, hospice centers, etc. Some primary care providers may also take care of hospitalized patients and deliver babies in a secondary care setting.

Tertiary care medical services are provided by specialist hospitals or regional centers equipped with diagnostic and treatment facilities not generally available at local hospitals. These include trauma centers, burn treatment centers, advanced neonatology unit services, organ transplants, high-risk pregnancy, radiation oncology, etc.

Modern medical care also depends on information, which in many health care settings is still delivered on paper records, but increasingly by electronic means.

In low-income countries, modern healthcare is often too expensive for the average person. International healthcare policy researchers have advocated that “user fees” be removed in these areas to ensure access, although even after removal, significant costs and barriers remain.[19]

Working together as an interdisciplinary team, many highly trained health professionals besides medical practitioners are involved in the delivery of modern health care. Examples include: nurses, emergency medical technicians and paramedics, laboratory scientists, pharmacists, podiatrists, physiotherapists, respiratory therapists, speech therapists, occupational therapists, radiographers, dietitians, bioengineers, surgeons, surgeon’s assistants, and surgical technologists.

The scope and sciences underpinning human medicine overlap many other fields. Dentistry, while considered by some a separate discipline from medicine, is a medical field.

A patient admitted to the hospital is usually under the care of a specific team based on their main presenting problem, e.g., the Cardiology team, who then may interact with other specialties, e.g., surgical, radiology, to help diagnose or treat the main problem or any subsequent complications/developments.

Physicians have many specializations and subspecializations into certain branches of medicine, which are listed below. There are variations from country to country regarding which specialties certain subspecialties are in.

The main branches of medicine are:

In the broadest meaning of “medicine”, there are many different specialties. In the UK, most specialities have their own body or college, each of which has its own entrance examination. These are collectively known as the Royal Colleges, although not all currently use the term “Royal”. The development of a speciality is often driven by new technology (such as the development of effective anaesthetics) or ways of working (such as emergency departments); the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination.

Within medical circles, specialities usually fit into one of two broad categories: “Medicine” and “Surgery.” “Medicine” refers to the practice of non-operative medicine, and most of its subspecialties require preliminary training in Internal Medicine. In the UK, this was traditionally evidenced by passing the examination for the Membership of the Royal College of Physicians (MRCP) or the equivalent college in Scotland or Ireland. “Surgery” refers to the practice of operative medicine, and most subspecialties in this area require preliminary training in General Surgery, which in the UK leads to membership of the Royal College of Surgeons of England (MRCS). At present, some specialties of medicine do not fit easily into either of these categories, such as radiology, pathology, or anesthesia. Most of these have branched from one or other of the two camps above; for example anaesthesia developed first as a faculty of the Royal College of Surgeons (for which MRCS/FRCS would have been required) before becoming the Royal College of Anaesthetists and membership of the college is attained by sitting for the examination of the Fellowship of the Royal College of Anesthetists (FRCA).

Surgery is an ancient medical specialty that uses operative manual and instrumental techniques on a patient to investigate and/or treat a pathological condition such as disease or injury, to help improve bodily function or appearance or to repair unwanted ruptured areas (for example, a perforated ear drum). Surgeons must also manage pre-operative, post-operative, and potential surgical candidates on the hospital wards. Surgery has many sub-specialties, including general surgery, ophthalmic surgery, cardiovascular surgery, colorectal surgery, neurosurgery, oral and maxillofacial surgery, oncologic surgery, orthopedic surgery, otolaryngology, plastic surgery, podiatric surgery, transplant surgery, trauma surgery, urology, vascular surgery, and pediatric surgery. In some centers, anesthesiology is part of the division of surgery (for historical and logistical reasons), although it is not a surgical discipline. Other medical specialties may employ surgical procedures, such as ophthalmology and dermatology, but are not considered surgical sub-specialties per se.

Surgical training in the U.S. requires a minimum of five years of residency after medical school. Sub-specialties of surgery often require seven or more years. In addition, fellowships can last an additional one to three years. Because post-residency fellowships can be competitive, many trainees devote two additional years to research. Thus in some cases surgical training will not finish until more than a decade after medical school. Furthermore, surgical training can be very difficult and time-consuming.

Internal medicine is the medical specialty dealing with the prevention, diagnosis, and treatment of adult diseases. According to some sources, an emphasis on internal structures is implied.[20] In North America, specialists in internal medicine are commonly called “internists.” Elsewhere, especially in Commonwealth nations, such specialists are often called physicians.[21] These terms, internist or physician (in the narrow sense, common outside North America), generally exclude practitioners of gynecology and obstetrics, pathology, psychiatry, and especially surgery and its subspecialities.

Because their patients are often seriously ill or require complex investigations, internists do much of their work in hospitals. Formerly, many internists were not subspecialized; such general physicians would see any complex nonsurgical problem; this style of practice has become much less common. In modern urban practice, most internists are subspecialists: that is, they generally limit their medical practice to problems of one organ system or to one particular area of medical knowledge. For example, gastroenterologists and nephrologists specialize respectively in diseases of the gut and the kidneys.[22]

In the Commonwealth of Nations and some other countries, specialist pediatricians and geriatricians are also described as specialist physicians (or internists) who have subspecialized by age of patient rather than by organ system. Elsewhere, especially in North America, general pediatrics is often a form of Primary care.

There are many subspecialities (or subdisciplines) of internal medicine:

Training in internal medicine (as opposed to surgical training) varies considerably across the world: see the articles on Medical education and Physician for more details. In North America, it requires at least three years of residency training after medical school, which can then be followed by a one- to three-year fellowship in the subspecialties listed above. In general, resident work hours in medicine are less than those in surgery, averaging about 60 hours per week in the USA. This difference does not apply in the UK where all doctors are now required by law to work less than 48 hours per week on average.

The following are some major medical specialties that do not directly fit into any of the above-mentioned groups.

Some interdisciplinary sub-specialties of medicine include:

Medical education and training varies around the world. It typically involves entry level education at a university medical school, followed by a period of supervised practice or internship, and/or residency. This can be followed by postgraduate vocational training. A variety of teaching methods have been employed in medical education, still itself a focus of active research. In Canada and the United States of America, a Doctor of Medicine degree, often abbreviated M.D., or a Doctor of Osteopathic Medicine degree, often abbreviated as D.O. and unique to the United States, must be completed in and delivered from a recognized university.

Since knowledge, techniques, and medical technology continue to evolve at a rapid rate, many regulatory authorities require continuing medical education. Medical practitioners upgrade their knowledge in various ways, including medical journals, seminars, conferences, and online programs.

In most countries, it is a legal requirement for a medical doctor to be licensed or registered. In general, this entails a medical degree from a university and accreditation by a medical board or an equivalent national organization, which may ask the applicant to pass exams. This restricts the considerable legal authority of the medical profession to physicians that are trained and qualified by national standards. It is also intended as an assurance to patients and as a safeguard against charlatans that practice inadequate medicine for personal gain. While the laws generally require medical doctors to be trained in “evidence based”, Western, or Hippocratic Medicine, they are not intended to discourage different paradigms of health.

In the European Union, the profession of doctor of medicine is regulated. A profession is said to be regulated when access and exercise is subject to the possession of a specific professional qualification. The regulated professions database contains a list of regulated professions for doctor of medicine in the EU member states, EEA countries and Switzerland. This list is covered by the Directive 2005/36/EC.

Doctors who are negligent or intentionally harmful in their care of patients can face charges of medical malpractice and be subject to civil, criminal, or professional sanctions.

Medical ethics is a system of moral principles that apply values and judgments to the practice of medicine. As a scholarly discipline, medical ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, theology, and sociology. Six of the values that commonly apply to medical ethics discussions are:

Values such as these do not give answers as to how to handle a particular situation, but provide a useful framework for understanding conflicts. When moral values are in conflict, the result may be an ethical dilemma or crisis. Sometimes, no good solution to a dilemma in medical ethics exists, and occasionally, the values of the medical community (i.e., the hospital and its staff) conflict with the values of the individual patient, family, or larger non-medical community. Conflicts can also arise between health care providers, or among family members. For example, some argue that the principles of autonomy and beneficence clash when patients refuse blood transfusions, considering them life-saving; and truth-telling was not emphasized to a large extent before the HIV era.

Prehistoric medicine incorporated plants (herbalism), animal parts, and minerals. In many cases these materials were used ritually as magical substances by priests, shamans, or medicine men. Well-known spiritual systems include animism (the notion of inanimate objects having spirits), spiritualism (an appeal to gods or communion with ancestor spirits); shamanism (the vesting of an individual with mystic powers); and divination (magically obtaining the truth). The field of medical anthropology examines the ways in which culture and society are organized around or impacted by issues of health, health care and related issues.

Early records on medicine have been discovered from ancient Egyptian medicine, Babylonian Medicine, Ayurvedic medicine (in the Indian subcontinent), classical Chinese medicine (predecessor to the modern traditional Chinese Medicine), and ancient Greek medicine and Roman medicine.

In Egypt, Imhotep (3rd millennium BC) is the first physician in history known by name. The oldest Egyptian medical text is the Kahun Gynaecological Papyrus from around 2000 BCE, which describes gynaecological diseases. The Edwin Smith Papyrus dating back to 1600 BCE is an early work on surgery, while the Ebers Papyrus dating back to 1500 BCE is akin to a textbook on medicine.[24]

In China, archaeological evidence of medicine in Chinese dates back to the Bronze Age Shang Dynasty, based on seeds for herbalism and tools presumed to have been used for surgery.[25] The Huangdi Neijing, the progenitor of Chinese medicine, is a medical text written beginning in the 2nd century BCE and compiled in the 3rd century.[26]

In India, the surgeon Sushruta described numerous surgical operations, including the earliest forms of plastic surgery.[27][28][29] The earliest records of dedicated hospitals come from Mihintale in Sri Lanka, where evidence of dedicated medicinal treatment facilities for patients is found.[30][31]

In Greece, the Greek physician Hippocrates, the “father of western medicine”,[32][33] laid the foundation for a rational approach to medicine. Hippocrates introduced the Hippocratic Oath for physicians, which is still relevant and in use today, and was the first to categorize illnesses as acute, chronic, endemic and epidemic, and to use terms such as “exacerbation, relapse, resolution, crisis, paroxysm, peak, and convalescence”.[34][35] The Greek physician Galen was also one of the greatest surgeons of the ancient world and performed many audacious operations, including brain and eye surgeries. After the fall of the Western Roman Empire and the onset of the Early Middle Ages, the Greek tradition of medicine went into decline in Western Europe, although it continued uninterrupted in the Eastern Roman (Byzantine) Empire.

Most of our knowledge of ancient Hebrew medicine during the 1st millennium BC comes from the Torah, i.e. the Five Books of Moses, which contain various health-related laws and rituals. The Hebrew contribution to the development of modern medicine started in the Byzantine Era, with the physician Asaph the Jew.[36]

After 750 CE, the Muslim world had the works of Hippocrates, Galen and Sushruta translated into Arabic, and Islamic physicians engaged in some significant medical research. Notable Islamic medical pioneers include the Persian polymath, Avicenna, who, along with Imhotep and Hippocrates, has also been called the “father of medicine”.[37] He wrote The Canon of Medicine, considered one of the most famous books in the history of medicine.[38] Others include Abulcasis,[39] Avenzoar,[40] Ibn al-Nafis,[41] and Averroes.[42] Rhazes[43] was one of the first to question the Greek theory of humorism, which nevertheless remained influential in both medieval Western and medieval Islamic medicine.[44] Al-Risalah al-Dhahabiah by Ali al-Ridha, the eighth Imam of Shia Muslims, is revered as the most precious Islamic literature in the Science of Medicine.[45] The Islamic Bimaristan hospitals were an early example of public hospitals.[46][47]

In Europe, Charlemagne decreed that a hospital should be attached to each cathedral and monastery, and the historian Geoffrey Blainey likened the activities of the Catholic Church in health care during the Middle Ages to an early version of a welfare state: “It conducted hospitals for the old and orphanages for the young; hospices for the sick of all ages; places for the lepers; and hostels or inns where pilgrims could buy a cheap bed and meal”. It supplied food to the population during famine and distributed food to the poor. The church funded this welfare system by collecting taxes on a large scale and by holding large farmlands and estates. The Benedictine order was noted for setting up hospitals and infirmaries in its monasteries, growing medical herbs and becoming the chief medical caregivers of their districts, as at the great Abbey of Cluny. The Church also established a network of cathedral schools and universities where medicine was studied. The Schola Medica Salernitana in Salerno, drawing on the learning of Greek and Arab physicians, grew to be the finest medical school in medieval Europe.[48]

However, the fourteenth- and fifteenth-century Black Death devastated both the Middle East and Europe, and it has even been argued that Western Europe was generally more effective in recovering from the pandemic than the Middle East.[49] In the early modern period, important early figures in medicine and anatomy emerged in Europe, including Gabriele Falloppio and William Harvey.

The major shift in medical thinking was the gradual rejection, especially during the Black Death in the 14th and 15th centuries, of what may be called the ‘traditional authority’ approach to science and medicine. This was the notion that because some prominent person in the past had said something must be so, then that was the way it was, and anything one observed to the contrary was an anomaly (a shift paralleled in European society in general; see Copernicus’s rejection of Ptolemy’s theories on astronomy). Physicians like Vesalius improved upon or disproved some of the theories of the past. The main tomes used both by medical students and expert physicians were Materia Medica and Pharmacopoeia.

Andreas Vesalius was the author of De humani corporis fabrica, an important book on human anatomy.[50] Bacteria and microorganisms were first observed with a microscope by Antonie van Leeuwenhoek in 1676, initiating the scientific field of microbiology.[51] Independently of Ibn al-Nafis, Michael Servetus rediscovered the pulmonary circulation, but the discovery did not reach the public because it was first written down in the “Manuscript of Paris”[52] in 1546 and later published in the theological work for which he paid with his life in 1553. The pulmonary circulation was later described by Renaldus Columbus and Andrea Cesalpino. Herman Boerhaave is sometimes referred to as a “father of physiology” owing to his exemplary teaching in Leiden and his textbook Institutiones medicae (1708). Pierre Fauchard has been called “the father of modern dentistry”.[53]

Veterinary medicine was, for the first time, truly separated from human medicine in 1761, when the French veterinarian Claude Bourgelat founded the world’s first veterinary school in Lyon, France. Before this, medical doctors treated both humans and other animals.

Modern scientific biomedical research (where results are testable and reproducible) began to replace early Western traditions based on herbalism, the Greek “four humours” and other such pre-modern notions. The modern era began with Edward Jenner’s discovery of the smallpox vaccine at the end of the 18th century (inspired by the method of inoculation earlier practiced in Asia), Robert Koch’s discoveries around 1880 of the transmission of disease by bacteria, and then the discovery of antibiotics in the early 20th century.

The post-18th century modernity period brought more groundbreaking researchers from Europe. From Germany and Austria, doctors Rudolf Virchow, Wilhelm Conrad Röntgen, Karl Landsteiner and Otto Loewi made notable contributions. In the United Kingdom, Alexander Fleming, Joseph Lister, Francis Crick and Florence Nightingale are considered important. Spanish doctor Santiago Ramón y Cajal is considered the father of modern neuroscience.

From New Zealand and Australia came Maurice Wilkins, Howard Florey, and Frank Macfarlane Burnet.

Elsewhere, significant work was done by William Williams Keen, William Coley and James D. Watson in the United States; Salvador Luria in Italy; Alexandre Yersin in Switzerland; Kitasato Shibasaburō in Japan; and Jean-Martin Charcot, Claude Bernard and Paul Broca in France, among others. The Russian Nikolai Korotkov also did significant work, as did Sir William Osler and Harvey Cushing.

As science and technology developed, medicine became more reliant upon medications. Throughout history, and in Europe right up until the late 18th century, not only animal and plant products but also human body parts and fluids were used as medicine.[54] Pharmacology developed in part from herbalism, and some drugs are still derived from plants (atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc.).[55] Vaccines were discovered by Edward Jenner and Louis Pasteur.

The first antibiotic was arsphenamine (Salvarsan) discovered by Paul Ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. The first major class of antibiotics was the sulfa drugs, derived by German chemists originally from azo dyes.

Pharmacology has become increasingly sophisticated; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side-effects. Genomics and knowledge of human genetics are having some influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology and genetics is influencing medical technology, practice and decision-making.

Evidence-based medicine is a contemporary movement to establish the most effective algorithms of practice (ways of doing things) through the use of systematic reviews and meta-analysis. The movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols, with the results then disseminated to healthcare providers. The Cochrane Collaboration leads this movement. A 2001 review of 160 Cochrane systematic reviews revealed that, according to two readers, 21.3% of the reviews concluded that the evidence was insufficient, 20% concluded there was evidence of no effect, and 22.5% concluded there was evidence of a positive effect.[56]

Traditional medicine (also known as indigenous or folk medicine) comprises knowledge systems that developed over generations within various societies before the era of modern medicine. The World Health Organization (WHO) defines traditional medicine as “the sum total of the knowledge, skills, and practices based on the theories, beliefs, and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness.”[57]

In some Asian and African countries, up to 80% of the population relies on traditional medicine for their primary health care needs. When adopted outside of its traditional culture, traditional medicine is often called alternative medicine.[57] Practices known as traditional medicines include Ayurveda, Siddha medicine, Unani, ancient Iranian medicine, Irani, Islamic medicine, traditional Chinese medicine, traditional Korean medicine, acupuncture, Muti, Ifá, and traditional African medicine.

The WHO notes however that “inappropriate use of traditional medicines or practices can have negative or dangerous effects” and that “further research is needed to ascertain the efficacy and safety” of several of the practices and medicinal plants used by traditional medicine systems.[57] The line between alternative medicine and quackery is a contentious subject.

Traditional medicine may include formalized aspects of folk medicine, that is to say longstanding remedies passed on and practised by lay people. Folk medicine consists of the healing practices and ideas of body physiology and health preservation known to some in a culture, transmitted informally as general knowledge, and practiced or applied by anyone in the culture having prior experience.[58] Folk medicine may also be referred to as traditional medicine, alternative medicine, indigenous medicine, or natural medicine. These terms are often considered interchangeable, even though some authors may prefer one or another because of the particular overtones they wish to highlight. In fact, of these terms, perhaps only indigenous medicine and traditional medicine have the same meaning as folk medicine, while the others should be understood in a modern or modernized context.[59]

See original here:

Medicine – Wikipedia, the free encyclopedia

Posted in Alternative Medicine | Comments Off on Medicine – Wikipedia, the free encyclopedia

PRIVATE ISLAND NEWS – Private islands for sale and for …

Posted: September 29, 2016 at 11:52 am

Great news for our German readers: you could win a trip to Sir Richard Branson’s exclusive private island retreat in order to take part in the final round of the Extreme Tech Challenge.

A new start-up festival located in the German city of Munich is offering participants a unique opportunity to win a … Read More: Caribbean: Win A Trip to Sir Richard Branson’s Exclusive Private Island Retreat

A lucky Australian man was announced as the new owner of a profitable private island resort on Tuesday evening after securing the winning ticket in the world’s most talked-about prize draw.

A lucky Australian man enjoyed the surprise of his life on Wednesday morning after finding out that he’d won a Pacific private island … Read More: Micronesia: Australian Man Wins Private Island Resort in $49 Dollar Raffle

The private island lifestyle certainly seems to be going down well with Antigua’s hawksbill turtles. The endangered species is thriving thanks to a research program funded by private island owners.

A luxury private island in the Caribbean is making a large impact on the marine life conservation community, all through generous donations from the … Read More: Caribbean: The Private Island Where Endangered Hawksbill Turtles Thrive

After sending Wales to dizzy heights at EURO 2016, Gareth Bale splashed the cash on a two-week trip to the Ibizan private island Tagomago, the favourite hideaway of Real Madrid teammate Cristiano Ronaldo.

After leading his country to their most successful European campaign ever, no-one would begrudge footballing ace Gareth Bale a few … Read More: Spain: Football Ace Gareth Bale Enjoys Private Island Vacation on Tagomago

Jumby Bay (Antigua) has become the latest private island to adopt a more sustainable attitude to its culinary output, turning to local farmers and fishermen for ingredients and inspiration.

The eco-tourism trend is on the up even within the luxury sector, where hoteliers are taking increasingly innovative approaches to integrate sustainability into their … Read More: Caribbean: Jumby Bay Turns to Farm-to-Table Philosophy as Eco-Tourism Trend Grows

Quite simply, it’s the prize package of the century. For just AUD 49 you could become the proud owner of a private island resort in the South Pacific. What are you waiting for?!

An Australian family has announced plans to raffle off their personal paradise island in Micronesia for just AUD 49 per ticket … Read More: Micronesia: Australian Family Raffle-Off Their Pacific Private Island for Just $49

The Ammonoosuc Conservation Trust has reached an agreement to transfer two private islands in New Hampshire into public hands. A 311-acre farm will also be protected.

The Ammonoosuc Conservation Trust has announced that it has reached an agreement to permanently conserve two river islands in New Hampshire. The announcement follows years of … Read More: USA: Another Two Private Islands Transferred into Public Hands

Proposals are in place to protect Florida’s Dot-Dash-Dit Islands, an island group home to Tampa Bay’s only coastal colony of wood storks. A public comment session is planned for later this month.

Florida’s Dot-Dash-Dit Islands, a group of three mangrove islands in Manatee County, Tampa Bay, have been earmarked to be … Read More: USA: Three State-Owned Florida Islands Earmarked for Further Nature Protection Status

We are the world’s leading news page about islands for rent and sale. Private Island News (PIN) highlights the latest news about the private islands business, connecting a large online community of private island enthusiasts. Private island fans can read the latest news and stories about the private islands market, and see current ads for private islands to buy or rent. We analyze the island market, look at price trends, and feature commentary from top island experts. We report on business, breaking news, and all other aspects of island life. We explain environmental problems and climate change and bring the latest gossip about celebrities and private islands.

Guiding you in the right direction, we give breaking reports about new property sales and auctions to help you find your dream island retreat. We distribute offers for tropical and Caribbean islands for sale and enable our readers to buy or rent a private island from a celebrity. Newlyweds can also rent a private island for their honeymoon. For all prospective customers, Private Island News makes referrals to Vladi Private Islands, a broker for private island rentals and sales.

Private Island News also informs potential buyers about other matters relevant to life on private islands, such as the environment, climate change and island development. We discuss issues of global warming, extreme weather, politics and green technology, and focus on how to live in harmony with the natural environment. PIN also brings you the hottest news about luxury lifestyles, new literature, and the rich and famous who buy or rent islands. Furthermore, we discuss governments and their strategies and reasons for buying islands.

Read the original:

PRIVATE ISLAND NEWS – Private islands for sale and for …

Posted in Private Islands | Comments Off on PRIVATE ISLAND NEWS – Private islands for sale and for …