
Caribbean – Wikipedia

Posted: October 20, 2016 at 11:38 pm

Caribbean
Area: 2,754,000 km² (1,063,000 sq mi)
Land area: 239,681 km² (92,541 sq mi)
Population (2016): 43,489,000[1]
Density: 151.5/km² (392/sq mi)
Ethnic groups: Afro-Caribbean, European, Indo-Caribbean, Latino or Hispanic (Spanish and Portuguese), Chinese Caribbean, Jewish Caribbean, Arab, Indonesian/Javanese,[2] Amerindian
Demonyms: Caribbean, West Indian
Languages: Spanish, English, French, Dutch, French Creole, English Creole, Caribbean Hindustani, among others
Government: 13 sovereign states; 17 dependent territories
Largest cities (see List of metropolitan areas in the West Indies): Santo Domingo, Havana, Port-au-Prince, Santiago de los Caballeros, Kingston, Ocho Rios, Santiago de Cuba, San Juan, Holguín, Cap-Haïtien, Fort-de-France, Nassau, Port of Spain, Georgetown, Paramaribo, San Fernando, Chaguanas
Internet TLD: Multiple
Calling code: Multiple
Time zone: UTC−5 to UTC−4

The Caribbean (Spanish: Caribe; Dutch: Caraïben; Caribbean Hindustani: Kairibiyana; French: Caraïbes, or more commonly Antilles) is a region that consists of the Caribbean Sea, its islands (some surrounded by the Caribbean Sea and some bordering both the Caribbean Sea and the North Atlantic Ocean), and the surrounding coasts. The region is southeast of the Gulf of Mexico and the North American mainland, east of Central America, and north of South America.

Situated largely on the Caribbean Plate, the region comprises more than 700 islands, islets, reefs, and cays. (See the list.) These islands generally form island arcs that delineate the eastern and northern edges of the Caribbean Sea.[3] The Caribbean islands, consisting of the Greater Antilles on the north and the Lesser Antilles on the south and east (including the Leeward Antilles), are part of the somewhat larger West Indies grouping, which also includes the Lucayan Archipelago (comprising The Bahamas and Turks and Caicos Islands) north of the Greater Antilles and Caribbean Sea. In a wider sense, the mainland countries of Belize, Colombia, Venezuela, Guyana, Suriname, and French Guiana are also included.

Geopolitically, the Caribbean islands are usually regarded as a subregion of North America[4][5][6][7][8] and are organized into 30 territories including sovereign states, overseas departments, and dependencies. From December 15, 1954, to October 10, 2010, there was a country known as the Netherlands Antilles, composed of five states, all of which were Dutch dependencies.[9] From January 3, 1958, to May 31, 1962, there was also a short-lived country called the Federation of the West Indies, composed of ten English-speaking Caribbean territories, all of which were then British dependencies. The West Indies cricket team continues to represent many of those nations.

The region takes its name from that of the Caribs, an ethnic group present in the Lesser Antilles and parts of adjacent South America at the time of the Spanish conquest.[10]

The two most prevalent pronunciations of “Caribbean” are KARR-ə-BEE-ən, with the primary accent on the third syllable, and kə-RIB-ee-ən, with the accent on the second. The former pronunciation is the older of the two, although the stressed-second-syllable variant has been established for over 75 years.[11] It has been suggested that speakers of British English prefer KARR-ə-BEE-ən while North American speakers more typically use kə-RIB-ee-ən,[12] although not all sources agree.[13] Usage is split within Caribbean English itself.[14]

The word “Caribbean” has multiple uses. Its principal ones are geographical and political. The Caribbean can also be expanded to include territories with strong cultural and historical connections to slavery, European colonisation, and the plantation system.

The geography and climate in the Caribbean region vary: some islands in the region have relatively flat terrain of non-volcanic origin. These islands include Aruba (possessing only minor volcanic features), Barbados, Bonaire, the Cayman Islands, Saint Croix, the Bahamas, and Antigua. Others possess rugged, towering mountain ranges, like the islands of Cuba, Hispaniola, Puerto Rico, Jamaica, Dominica, Montserrat, Saba, Saint Kitts, Saint Lucia, Saint Thomas, Saint John, Tortola, Grenada, Saint Vincent, Guadeloupe, Martinique, and Trinidad & Tobago.

Definitions of the terms Greater Antilles and Lesser Antilles often vary. The Virgin Islands as part of the Puerto Rican bank are sometimes included with the Greater Antilles. The term Lesser Antilles is often used to define an island arc that includes Grenada but excludes Trinidad and Tobago and the Leeward Antilles.

The waters of the Caribbean Sea host large migratory schools of fish, sea turtles, and extensive coral reef formations. The Puerto Rico Trench, located on the fringe of the Atlantic Ocean and Caribbean Sea just to the north of the island of Puerto Rico, is the deepest point in the Atlantic Ocean.[16]

The region sits in the line of several major shipping routes, with the Panama Canal connecting the western Caribbean Sea with the Pacific Ocean.

The climate of the area is tropical to subtropical in Cuba, The Bahamas and Puerto Rico. Rainfall varies with elevation, island size, and water currents (cool upwellings keep the ABC islands arid). Warm, moist trade winds blow consistently from the east, creating rainforest/semidesert divisions on mountainous islands. Occasional northwesterlies affect the northern islands in the winter. The region enjoys year-round sunshine, divided into ‘dry’ and ‘wet’ seasons, with the last six months of the year being wetter than the first half.

The hurricane season runs from June to November; hurricanes occur most frequently in August and September and are more common in the northern islands of the Caribbean. Hurricanes that sometimes batter the region usually strike north of Grenada and west of Barbados. The principal hurricane belt arcs to the northwest of the island of Barbados in the Eastern Caribbean.

Water temperatures vary from 31 °C (88 °F) to 22 °C (72 °F) throughout the year. Air temperatures are warm, in the 20s and 30s °C (70s, 80s, and 90s °F) during the year, varying from winter to summer by only about 2–5 degrees on the southern islands, while a difference of 10–20 degrees can occur in the northern islands of the Caribbean. The northern islands, like the Bahamas, Cuba, Puerto Rico, and the Dominican Republic, may be influenced by continental air masses during winter months, such as cold fronts.

Representative latitudes across the region: Aruba at 12°N, Puerto Rico at 18°N, and Cuba at 22°N.

Lucayan Archipelago[a]

Greater Antilles

Lesser Antilles

All islands at some point were, and a few still are, colonies of European nations; a few are overseas or dependent territories:

The British West Indies were united by the United Kingdom into a West Indies Federation between 1958 and 1962. The independent countries formerly part of the B.W.I. still have a joint cricket team that competes in Test matches, One Day Internationals and Twenty20 Internationals. The West Indian cricket team includes the South American nation of Guyana, the only former British colony on the mainland of that continent.

In addition, these countries share the University of the West Indies as a regional entity. The university consists of three main campuses in Jamaica, Barbados, and Trinidad and Tobago, a smaller campus in the Bahamas, and Resident Tutors in other contributing territories.

Islands in and near the Caribbean

Maritime boundaries between the Caribbean (island) nations

The Caribbean islands are remarkable for the diversity of their animals, fungi and plants, and have been classified as one of Conservation International’s biodiversity hotspots because of their exceptionally diverse terrestrial and marine ecosystems, ranging from montane cloud forests to cactus scrublands. The region also contains about 8% (by surface area) of the world’s coral reefs[22] along with extensive seagrass meadows,[23] both of which are frequently found in the shallow marine waters bordering island and continental coasts off the region.

For the fungi, there is a modern checklist based on nearly 90,000 records derived from specimens in reference collections, published accounts and field observations.[24] That checklist includes more than 11,250 species of fungi recorded from the region. As its authors note, the work is far from exhaustive, and the true number of fungal species already known from the Caribbean is likely higher. The true number of fungal species occurring in the Caribbean, including species not yet recorded, is likely far higher still, given the generally accepted estimate that only about 7% of all fungi worldwide have been discovered.[25] Though the amount of available information is still small, a first effort has been made to estimate the number of fungal species endemic to some Caribbean islands: 2,200 species have been tentatively identified as possible endemics of Cuba;[26] 789 species for Puerto Rico;[27] 699 for the Dominican Republic;[28] and 407 for Trinidad and Tobago.[29]

Many of the ecosystems of the Caribbean islands have been devastated by deforestation, pollution, and human encroachment. The arrival of the first humans is correlated with the extinction of giant owls and dwarf ground sloths.[30] The hotspot contains dozens of highly threatened animals (ranging from birds to mammals and reptiles), fungi and plants. Examples of threatened animals include the Puerto Rican amazon, two species of solenodon (giant shrews) on Cuba and Hispaniola, and the Cuban crocodile.

The region’s coral reefs, which contain about 70 species of hard corals and between 500 and 700 species of reef-associated fishes,[31] have undergone rapid decline in ecosystem integrity in recent years, and are considered particularly vulnerable to global warming and ocean acidification.[32] According to a UNEP report, the Caribbean coral reefs might become extinct within the next 20 years due to population growth along the coastlines, overfishing, pollution of coastal areas, and global warming.[33]

Some Caribbean islands have terrain that Europeans found suitable for cultivation. Tobacco was an important early crop during the colonial era, but was eventually overtaken by sugarcane production as the region’s staple crop. Sugar was produced from sugarcane for export to Europe. Cuba and Barbados were historically the largest producers of sugar. The tropical plantation system thus came to dominate Caribbean settlement. Other islands were found to have terrain unsuited for agriculture, for example Dominica, which remains heavily forested. The islands in the southern Lesser Antilles, Aruba, Bonaire and Curaçao, are extremely arid, making them unsuitable for agriculture. However, they have salt pans that were exploited by the Dutch. Sea water was pumped into shallow ponds, producing coarse salt when the water evaporated.[34]

The natural environmental diversity of the Caribbean islands has led to recent growth in eco-tourism. This type of tourism is growing on islands lacking sandy beaches and dense human populations.[35]

The Martinique amazon, Amazona martinicana, is an extinct species of parrot in the family Psittacidae.

At the time of European contact, the dominant ethnic groups in the Caribbean included the Taíno of the Greater Antilles and northern Lesser Antilles, the Island Caribs of the southern Lesser Antilles, and smaller distinct groups such as the Guanajatabey of western Cuba and the Ciguayo of western Hispaniola. The population of the Caribbean is estimated to have been around 750,000 immediately before European contact, although lower and higher figures are given. After contact, social disruption and epidemic diseases such as smallpox and measles (to which they had no natural immunity)[36] led to a decline in the Amerindian population.[37] From 1500 to 1800 the population rose as enslaved people arrived from West African groups[38] such as the Kongo, Igbo, Akan, Fon and Yoruba, as well as military prisoners from Ireland, who were deported during the Cromwellian reign in England.[citation needed] Immigrants from Britain, Italy, France, Spain, the Netherlands, Portugal and Denmark also arrived, although the mortality rate was high for both groups.[39]

The population is estimated to have reached 2.2 million by 1800.[40] Immigrants from India, China, Indonesia, and other countries arrived in the mid-19th century as indentured servants.[41] After the ending of the Atlantic slave trade, the population increased naturally.[42] The total regional population was estimated at 37.5 million by 2000.[43]

The majority populations of the French, Anglophone and Dutch Caribbean are mainly of African descent; there are minorities of mixed-race people and Europeans of Dutch, English, French, Italian and Portuguese ancestry. Asians, especially those of Chinese and Indian descent, form a significant minority in the region and also contribute to multiracial communities. Most of their ancestors arrived in the 19th century as indentured laborers.

The Spanish-speaking Caribbean has primarily mixed-race, African, or European majorities. Puerto Rico has a European majority with a mixture of European-African-Native American (tri-racial) people, and a large mulatto (European-West African) and West African minority. One third of the population of Cuba (the largest Caribbean island) is of African descent, with a sizable mulatto (mixed African–European) population, and a European majority. The Dominican Republic has the largest mixed-race population, primarily descended from Europeans, West Africans, and Amerindians.

Larger islands such as Jamaica have a very large African majority, in addition to significant mixed-race, Chinese, European, Indian, Lebanese, Latin American, and Syrian populations. This is a result of years of importation of slaves and indentured labourers, and of migration. Most multi-racial Jamaicans refer to themselves as either mixed race or Brown. The situation is similar for the Caricom states of Belize, Guyana and Trinidad and Tobago. Trinidad and Tobago has a multi-racial cosmopolitan society due to the arrival of Africans, East Indians, Chinese, Arabs, Native Amerindians, Jews, Hispanic/Portuguese, and Europeans. This multi-racial mix has created sub-ethnicities that often straddle the boundaries of major ethnicities and include Chindian, Mulatto and Dougla.

Spanish, English, Portuguese, French, Dutch, Haitian Creole, Caribbean Hindustani, Tamil, and Papiamento are the predominant official languages of various countries in the region, though a handful of unique creole languages or dialects can also be found from one country to another. Other languages such as Danish, Italian, Irish, German, Swedish, Arabic, Chinese, Indonesian, Javanese, Yoruba, Yiddish/Hebrew, Amerindian languages, other African languages, other European languages, other Indian languages, and other Indonesian languages can also be found.

Christianity is the predominant religion in the Caribbean (84.7%).[44] Other religious groups in the region are Hinduism, Islam, Buddhism, Jainism, Sikhism, Zoroastrianism, the Baháʼí Faith, Taoism/Chinese folk religion/Confucianism, Kebatinan, Judaism, Rastafari, and Afro-American religions such as Yoruba, Orisha, Santería, and Vodou.

Caribbean societies are very different from other Western societies in terms of size, culture, and degree of mobility of their citizens.[45] The current economic and political problems the states face individually are common to all Caribbean states. Regional development has contributed to attempts to subdue current problems and avoid projected problems. From a political and economic perspective, regionalism serves to make Caribbean states active participants in current international affairs through collective coalitions. In 1973, the first political regionalism in the Caribbean Basin was created by advances of the English-speaking Caribbean nations through the institution known as the Caribbean Common Market and Community (CARICOM)[46] which is located in Guyana.

Certain scholars have argued both for and against generalizing the political structures of the Caribbean. On the one hand, the Caribbean states are politically diverse, ranging from communist systems such as Cuba’s to the more capitalist Westminster-style parliamentary systems of the Commonwealth Caribbean. Other scholars argue that these differences are superficial, and that they tend to undermine commonalities in the various Caribbean states. Contemporary Caribbean systems seem to reflect a “blending of traditional and modern patterns, yielding hybrid systems that exhibit significant structural variations and divergent constitutional traditions yet ultimately appear to function in similar ways.”[47] The political systems of the Caribbean states share similar practices.

The influence of regionalism in the Caribbean is often marginalized. Some scholars believe that regionalism cannot exist in the Caribbean because each small state is unique. On the other hand, scholars also suggest that there are commonalities amongst the Caribbean nations that suggest regionalism exists. “Proximity as well as historical ties among the Caribbean nations has led to cooperation as well as a desire for collective action.”[48] These attempts at regionalization reflect the nations’ desires to compete in the international economic system.[48]

Furthermore, a lack of interest from other major states promoted regionalism in the region. In recent years the Caribbean has suffered from a lack of U.S. interest. “With the end of the Cold War, U.S. security and economic interests have been focused on other areas. As a result there has been a significant reduction in U.S. aid and investment to the Caribbean.”[49] The lack of international support for these small, relatively poor states, helped regionalism prosper.

Following the Cold War, another issue of importance in the Caribbean has been the reduced economic growth of some Caribbean states, stemming from a dispute in which the United States challenged the European Union’s preferential trade treatment of the region.

The United States under President Bill Clinton launched a challenge in the World Trade Organization against the EU over Europe’s preferential program, known as the Lomé Convention, which allowed banana exports from the former colonies of the Group of African, Caribbean and Pacific states (ACP) to enter Europe cheaply.[50] The World Trade Organization sided with the United States, and the elements of the convention beneficial to African, Caribbean and Pacific states were partially dismantled and replaced by the Cotonou Agreement.[51]

During the US/EU dispute, the United States imposed large tariffs on European Union goods (up to 100%) to pressure Europe to change the agreement with the Caribbean nations in favour of the Cotonou Agreement.[52]

Farmers in the Caribbean have complained of falling profits and rising costs as the Lomé Convention weakens. Some farmers have faced increased pressure to turn towards the cultivation of illegal drugs, which have a higher profit margin and fill the sizable demand for illegal drugs in North America and Europe.[53][54]

Caribbean nations have also started to more closely cooperate in the Caribbean Financial Action Task Force and other instruments to add oversight of the offshore industry. One of the most important associations that deal with regionalism amongst the nations of the Caribbean Basin has been the Association of Caribbean States (ACS). Proposed by CARICOM in 1992, the ACS soon won the support of the other countries of the region. It was founded in July 1994. The ACS maintains regionalism within the Caribbean on issues unique to the Caribbean Basin. Through coalition building, like the ACS and CARICOM, regionalism has become an undeniable part of the politics and economics of the Caribbean. The successes of region-building initiatives are still debated by scholars, yet regionalism remains prevalent throughout the Caribbean.

The President of Venezuela, Hugo Chávez, launched an economic group called the Bolivarian Alliance for the Americas (ALBA), which several eastern Caribbean islands joined. In 2012, Haiti, with 9 million people, became the largest CARICOM nation to seek to join the union.[55]

Here are some of the bodies that several islands share in collaboration:

Coordinates: 14°31′32″N 75°49′06″W / 14.52556°N 75.81833°W / 14.52556; -75.81833


Robotics – Wikipedia

Posted: at 11:36 pm

Power source

At present, lead–acid batteries are the most commonly used power source. Many different types of batteries can be used as a power source for robots, ranging from lead–acid batteries, which are safe and have relatively long shelf lives but are rather heavy, to silver–cadmium batteries, which are much smaller in volume but currently much more expensive. Designing a battery-powered robot needs to take into account factors such as safety, cycle lifetime and weight. Generators, often some type of internal combustion engine, can also be used. However, such designs are often mechanically complex, need fuel, require heat dissipation and are relatively heavy. A tether connecting the robot to a power supply would remove the power supply from the robot entirely. This has the advantage of saving weight and space by moving all power generation and storage components elsewhere. However, this design does come with the drawback of constantly having a cable connected to the robot, which can be difficult to manage.[20] Potential power sources could be:
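As a rough illustration of the sizing trade-offs mentioned above, runtime can be estimated back-of-envelope from battery capacity and average load; all figures below are invented for illustration, not vendor data:

```python
# Back-of-envelope battery sizing for a mobile robot.
# All numbers are illustrative assumptions, not real specifications.

def runtime_hours(capacity_wh: float, avg_load_w: float,
                  usable_fraction: float = 0.8) -> float:
    """Estimated runtime, derating capacity for depth-of-discharge limits."""
    return capacity_wh * usable_fraction / avg_load_w

# A hypothetical 12 V, 7 Ah lead-acid pack driving a 25 W average load:
capacity = 12 * 7  # 84 Wh
print(round(runtime_hours(capacity, 25), 2))  # 2.69 (hours)
```

The derating factor stands in for the cycle-lifetime concern: draining a lead–acid pack fully on every cycle shortens its life, so designers typically plan around a fraction of nameplate capacity.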

Actuators are the “muscles” of a robot, the parts which convert stored energy into movement. By far the most popular actuators are electric motors that rotate a wheel or gear, and linear actuators that control industrial robots in factories. There are some recent advances in alternative types of actuators, powered by electricity, chemicals, or compressed air.

The vast majority of robots use electric motors, often brushed and brushless DC motors in portable robots or AC motors in industrial robots and CNC machines. These motors are often preferred in systems with lighter loads, and where the predominant form of motion is rotational.

Various types of linear actuators move in and out instead of spinning, and often have quicker direction changes, particularly when very large forces are needed, as with industrial robotics. They are typically powered by compressed air (pneumatic actuators) or oil (hydraulic actuators).

A spring can be designed as part of the motor actuator, to allow improved force control. It has been used in various robots, particularly walking humanoid robots.[21]

Pneumatic artificial muscles, also known as air muscles, are special tubes that expand (typically up to 40%) when air is forced inside them. They are used in some robot applications.[22][23][24]

Muscle wire, also known as shape memory alloy, Nitinol or Flexinol wire, is a material which contracts (by under 5%) when electricity is applied. It has been used in some small robot applications.[25][26]

EAPs or EPAMs are a relatively new class of plastic materials that can contract substantially (up to 380% activation strain) in response to electricity, and have been used in the facial muscles and arms of humanoid robots,[27] and to enable new robots to float,[28] fly, swim or walk.[29]

Recent alternatives to DC motors are piezo motors or ultrasonic motors. These work on a fundamentally different principle, whereby tiny piezoceramic elements, vibrating many thousands of times per second, cause linear or rotary motion. There are different mechanisms of operation; one type uses the vibration of the piezo elements to step the motor in a circle or a straight line.[30] Another type uses the piezo elements to cause a nut to vibrate or to drive a screw. The advantages of these motors are nanometer resolution, speed, and available force for their size.[31] These motors are already available commercially, and being used on some robots.[32][33]

Elastic nanotubes are a promising artificial muscle technology in early-stage experimental development. The absence of defects in carbon nanotubes enables these filaments to deform elastically by several percent, with energy storage levels of perhaps 10 J/cm³ for metal nanotubes. Human biceps could be replaced with an 8 mm diameter wire of this material. Such compact “muscle” might allow future robots to outrun and outjump humans.[34]

Sensors allow robots to receive information about a certain measurement of the environment, or internal components. This is essential for robots to perform their tasks, and act upon any changes in the environment to calculate the appropriate response. They are used for various forms of measurements, to give the robots warnings about safety or malfunctions, and to provide real time information of the task it is performing.

Current robotic and prosthetic hands receive far less tactile information than the human hand. Recent research has developed a tactile sensor array that mimics the mechanical properties and touch receptors of human fingertips.[35][36] The sensor array is constructed as a rigid core surrounded by conductive fluid contained by an elastomeric skin. Electrodes are mounted on the surface of the rigid core and are connected to an impedance-measuring device within the core. When the artificial skin touches an object the fluid path around the electrodes is deformed, producing impedance changes that map the forces received from the object. The researchers expect that an important function of such artificial fingertips will be adjusting robotic grip on held objects.
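The mapping described above, from impedance changes at the electrodes to forces on the skin, can be caricatured with a deliberately simplified sketch. The linear sensitivity model and all numbers here are invented for illustration; the real sensor's response is far more complex:

```python
# Hypothetical sketch: converting per-electrode impedance changes into a
# coarse contact-force map. The linear model below is an invented
# illustration, not the published sensor's calibration.

def force_map(baseline_ohms, measured_ohms, sensitivity_n_per_ohm=0.05):
    """Estimated force at each electrode, proportional to the drop in
    impedance as the conductive fluid is squeezed toward it."""
    return [max(0.0, (b - m) * sensitivity_n_per_ohm)
            for b, m in zip(baseline_ohms, measured_ohms)]

baseline = [1000.0, 1000.0, 1000.0]
touched  = [1000.0,  940.0,  980.0]   # fluid deformed near electrodes 2 and 3
print(force_map(baseline, touched))   # [0.0, 3.0, 1.0]
```

A grip controller could then compare such per-electrode estimates against a target force and loosen or tighten the fingers accordingly, which is the "adjusting robotic grip" role the researchers anticipate.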

Scientists from several European countries and Israel developed a prosthetic hand in 2009, called SmartHand, which functions like a real one, allowing patients to write with it, type on a keyboard, play the piano and perform other fine movements. The prosthesis has sensors which enable the patient to sense real feeling in its fingertips.[37]

Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences and views from cameras.

In most practical computer vision applications, the computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common.

Computer vision systems rely on image sensors which detect electromagnetic radiation which is typically in the form of either visible light or infra-red light. The sensors are designed using solid-state physics. The process by which light propagates and reflects off surfaces is explained using optics. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Robots can also be equipped with multiple vision sensors to be better able to compute the sense of depth in the environment. Like human eyes, robots’ “eyes” must also be able to focus on a particular area of interest, and also adjust to variations in light intensities.
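As a minimal illustration of how multiple vision sensors yield a sense of depth: with a calibrated stereo pair, the pinhole-camera relation Z = f·B/d recovers distance from the disparity between matched pixels. The rig parameters below are hypothetical:

```python
# Minimal stereo-depth sketch. Assumes a calibrated, rectified camera pair:
#   Z = focal_length * baseline / disparity

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance to a point from the pixel disparity between two cameras."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 6 cm baseline, 21 px disparity:
print(depth_from_disparity(700, 0.06, 21))  # 2.0 (meters)
```

The hard part in practice is the matching step that produces the disparity, which is where the learning-based methods mentioned above increasingly appear.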

There is a subfield within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems, at different levels of complexity. Also, some of the learning-based methods developed within computer vision have their background in biology.

Other common forms of sensing in robotics use lidar, radar and sonar.[citation needed]

Robots need to manipulate objects: to pick up, modify, destroy, or otherwise have an effect on them. Thus the “hands” of a robot are often referred to as end effectors,[38] while the “arm” is referred to as a manipulator.[39] Most robot arms have replaceable effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator which cannot be replaced, while a few have one very general-purpose manipulator, for example a humanoid hand.[40] Learning to manipulate a robot often requires close feedback between the human and the robot, although there are several methods for remote manipulation of robots.[41]

One of the most common effectors is the gripper. In its simplest manifestation it consists of just two fingers which can open and close to pick up and let go of a range of small objects. Fingers can for example be made of a chain with a metal wire run through it.[42] Hands that resemble and work more like a human hand include the Shadow Hand and the Robonaut hand.[43] Hands that are of a mid-level complexity include the Delft hand.[44][45] Mechanical grippers can come in various types, including friction and encompassing jaws. Friction jaws use all the force of the gripper to hold the object in place using friction. Encompassing jaws cradle the object in place, using less friction.

Vacuum grippers are very simple astrictive[46] devices, but can hold very large loads provided the prehension surface is smooth enough to ensure suction.

Pick-and-place robots for electronic components, and for large objects like car windscreens, often use very simple vacuum grippers.

Some advanced robots are beginning to use fully humanoid hands, like the Shadow Hand, MANUS,[47] and the Schunk hand.[48] These are highly dexterous manipulators, with as many as 20 degrees of freedom and hundreds of tactile sensors.[49]

For simplicity most mobile robots have four wheels or a number of continuous tracks. Some researchers have tried to create more complex wheeled robots with only one or two wheels. These can have certain advantages such as greater efficiency and reduced parts, as well as allowing a robot to navigate in confined places that a four-wheeled robot would not be able to.

Balancing robots generally use a gyroscope to detect how much a robot is falling and then drive the wheels proportionally in the same direction, to counterbalance the fall at hundreds of times per second, based on the dynamics of an inverted pendulum.[50] Many different balancing robots have been designed.[51] While the Segway is not commonly thought of as a robot, it can be thought of as a component of a robot; when used in that way, Segway refers to the product as an RMP (Robotic Mobility Platform). An example of this use is NASA’s Robonaut, which has been mounted on a Segway.[52]
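The balancing loop described above can be sketched as a linearized inverted pendulum stabilized by a proportional-derivative controller running at a fixed rate. The gains and geometry here are illustrative, not those of any particular robot:

```python
# Toy balancing loop: linearized inverted pendulum (tilt angle theta, rad)
# stabilized by a PD controller driving the wheels. All constants are
# illustrative assumptions.

g, l = 9.81, 0.5          # gravity (m/s^2), effective pendulum length (m)
kp, kd = 40.0, 10.0       # proportional and derivative gains
dt = 0.001                # 1 kHz loop, i.e. a thousand corrections per second

theta, omega = 0.1, 0.0   # start tilted 0.1 rad, at rest
for _ in range(3000):     # simulate 3 seconds
    u = kp * theta + kd * omega      # corrective wheel acceleration (gyro feedback)
    alpha = (g / l) * theta - u      # linearized falling dynamics minus correction
    omega += alpha * dt
    theta += omega * dt

print(abs(theta) < 0.01)  # True: the tilt has been driven toward zero
```

With kp greater than g/l the corrective term dominates the fall, and the derivative gain damps the oscillation, which is the essence of why driving the wheels "proportionally" to the measured tilt keeps the robot upright.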

A one-wheeled balancing robot is an extension of a two-wheeled balancing robot so that it can move in any 2D direction using a round ball as its only wheel. Several one-wheeled balancing robots have been designed recently, such as Carnegie Mellon University’s “Ballbot” that is the approximate height and width of a person, and Tohoku Gakuin University’s “BallIP”.[53] Because of the long, thin shape and ability to maneuver in tight spaces, they have the potential to function better than other robots in environments with people.[54]

Several attempts have been made in robots that are completely inside a spherical ball, either by spinning a weight inside the ball,[55][56] or by rotating the outer shells of the sphere.[57][58] These have also been referred to as an orb bot[59] or a ball bot.[60][61]

Using six wheels instead of four wheels can give better traction or grip in outdoor terrain such as on rocky dirt or grass.

Tank tracks provide even more traction than a six-wheeled robot. Tracked wheels behave as if they were made of hundreds of wheels and are therefore very common for outdoor and military robots, where the robot must drive on very rough terrain. However, they are difficult to use indoors, such as on carpets and smooth floors. Examples include NASA’s Urban Robot “Urbie”.[62]

Walking is a difficult and dynamic problem to solve. Several robots have been made which can walk reliably on two legs; however, none have yet been made which are as robust as a human. There has been much study on human-inspired walking, such as at the AMBER lab, which was established in 2008 by the Mechanical Engineering Department at Texas A&M University.[63] Many other robots have been built that walk on more than two legs, as these robots are significantly easier to construct.[64][65] Walking robots can be used on uneven terrain, where they would provide better mobility and energy efficiency than other locomotion methods. Hybrids have also been proposed in movies such as I, Robot, where they walk on two legs and switch to four (arms plus legs) when sprinting. Typically, robots on two legs can walk well on flat floors and can occasionally walk up stairs. None can walk over rocky, uneven terrain. Some of the methods which have been tried are:

The Zero Moment Point (ZMP) algorithm is used by robots such as Honda’s ASIMO. The robot’s onboard computer tries to keep the total inertial forces (the combination of Earth’s gravity and the acceleration and deceleration of walking) exactly opposed by the floor reaction force (the force of the floor pushing back on the robot’s foot). In this way, the two forces cancel out, leaving no moment (force causing the robot to rotate and fall over).[66] However, this is not exactly how a human walks, and the difference is obvious to human observers, some of whom have pointed out that ASIMO walks as if it needs the lavatory.[67][68][69] ASIMO’s walking algorithm is not static, and some dynamic balancing is used (see below). However, it still requires a smooth surface to walk on.
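The balance condition can be sketched with the standard "cart-table" point-mass simplification often used to introduce ZMP control. This is an illustrative model only, not Honda's actual controller:

```python
# Sketch of the Zero Moment Point for a point mass at constant height
# (the "cart-table" model): the ZMP is the point on the floor where the
# combined moment of gravity and the inertial force vanishes.
G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(com_x, com_acc_x, com_height):
    """ZMP along x: accelerating the centre of mass forward
    shifts the ZMP backward by (height/g) * acceleration."""
    return com_x - (com_height / G) * com_acc_x

# Standing still: the ZMP lies directly under the centre of mass.
print(zmp_x(0.0, 0.0, 0.8))   # -> 0.0
# Accelerating forward at 1 m/s^2 with a 0.8 m high centre of mass:
print(zmp_x(0.0, 1.0, 0.8))   # -> about -0.082 m
```

The robot remains balanced only while this point stays inside the support polygon of the foot, which is what the controller described above enforces.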

Several robots, built in the 1980s by Marc Raibert at the MIT Leg Laboratory, successfully demonstrated very dynamic walking. Initially, a robot with only one leg, and a very small foot, could stay upright simply by hopping. The movement is the same as that of a person on a pogo stick. As the robot falls to one side, it would jump slightly in that direction, in order to catch itself.[70] Soon, the algorithm was generalised to two and four legs. A bipedal robot was demonstrated running and even performing somersaults.[71] A quadruped was also demonstrated which could trot, run, pace, and bound.[72] For a full list of these robots, see the MIT Leg Lab Robots page.[73]

A more advanced way for a robot to walk is by using a dynamic balancing algorithm, which is potentially more robust than the Zero Moment Point technique, as it constantly monitors the robot’s motion and places the feet so as to maintain stability.[74] This technique was recently demonstrated by Anybots’ Dexter Robot,[75] which is so stable that it can even jump.[76] Another example is the TU Delft Flame.

Perhaps the most promising approach utilizes passive dynamics where the momentum of swinging limbs is used for greater efficiency. It has been shown that totally unpowered humanoid mechanisms can walk down a gentle slope, using only gravity to propel themselves. Using this technique, a robot need only supply a small amount of motor power to walk along a flat surface or a little more to walk up a hill. This technique promises to make walking robots at least ten times more efficient than ZMP walkers, like ASIMO.[77][78]
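Such efficiency claims are usually made in terms of the dimensionless cost of transport, COT = P / (m·g·v). The numbers below are illustrative approximations for a ZMP-style walker, not measured values:

```python
# Dimensionless cost of transport: power consumed divided by
# (mass * gravity * speed). Lower is more efficient; humans and
# passive-dynamics-based walkers are reported around 0.2.
G = 9.81  # m/s^2

def cost_of_transport(power_w, mass_kg, speed_ms):
    return power_w / (mass_kg * G * speed_ms)

# Hypothetical figures for a ZMP-style humanoid (~54 kg, ~1.2 m/s,
# drawing ~2 kW while walking):
print(round(cost_of_transport(2000, 54, 1.2), 1))  # -> 3.1 (illustrative)
# A ratio of ~3 vs ~0.2 is the kind of gap behind the
# "ten times more efficient" claim above.
```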

A modern passenger airliner is essentially a flying robot, with two humans to manage it. The autopilot can control the plane for each stage of the journey, including takeoff, normal flight, and even landing.[79] Other flying robots are uninhabited, and are known as unmanned aerial vehicles (UAVs). They can be smaller and lighter without a human pilot on board, and fly into dangerous territory for military surveillance missions. Some can even fire on targets under command. UAVs are also being developed which can fire on targets automatically, without the need for a command from a human. Other flying robots include cruise missiles, the Entomopter, and the Epson micro helicopter robot. Robots such as the Air Penguin, Air Ray, and Air Jelly have lighter-than-air bodies, propelled by paddles, and guided by sonar.

Several snake robots have been successfully developed. Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings.[80] The Japanese ACM-R5 snake robot[81] can even navigate both on land and in water.[82]

A small number of skating robots have been developed, one of which is a multi-mode walking and skating device. It has four legs, with unpowered wheels, which can either step or roll.[83] Another robot, Plen, can use a miniature skateboard or roller-skates, and skate across a desktop.[84]

Several different approaches have been used to develop robots that can climb vertical surfaces. One approach mimics the movements of a human climber on a wall with protrusions: adjusting the center of mass and moving each limb in turn to gain leverage. An example of this is Capuchin,[85] built by Dr. Ruixiang Zhang at Stanford University, California. Another approach uses the specialized toe pads of wall-climbing geckos, which can run on smooth surfaces such as vertical glass. Examples of this approach include Wallbot[86] and Stickybot.[87] China’s Technology Daily reported on November 15, 2008 that Dr. Li Hiu Yeung and his research group at New Concept Aircraft (Zhuhai) Co., Ltd. had successfully developed a bionic gecko robot named “Speedy Freelander”. According to Dr. Li, the gecko robot could rapidly climb up and down a variety of building walls, navigate through ground and wall fissures, and walk upside-down on the ceiling. It was also able to adapt to the surfaces of smooth glass, rough, sticky or dusty walls, as well as various types of metallic materials. It could also identify and circumvent obstacles automatically. Its flexibility and speed were comparable to a natural gecko’s. A third approach is to mimic the motion of a snake climbing a pole.[citation needed]

It is calculated that when swimming some fish can achieve a propulsive efficiency greater than 90%.[88] Furthermore, they can accelerate and maneuver far better than any man-made boat or submarine, and produce less noise and water disturbance. Therefore, many researchers studying underwater robots would like to copy this type of locomotion.[89] Notable examples are the Essex University Computer Science Robotic Fish G9,[90] and the Robot Tuna built by the Institute of Field Robotics, to analyze and mathematically model thunniform motion.[91] The Aqua Penguin,[92] designed and built by Festo of Germany, copies the streamlined shape and propulsion by front “flippers” of penguins. Festo have also built the Aqua Ray and Aqua Jelly, which emulate the locomotion of manta rays and jellyfish, respectively.

In 2014, iSplash-II was developed by R. J. Clapham at Essex University. It was the first robotic fish capable of outperforming real carangiform fish in terms of average maximum velocity (measured in body lengths per second) and endurance, the duration for which top speed is maintained. This build attained swimming speeds of 11.6 BL/s (i.e. 3.7 m/s).[93] The first build, iSplash-I (2014), was the first robotic platform to apply a full-body-length carangiform swimming motion, which was found to increase swimming speed by 27% over the traditional approach of a posterior-confined wave form.[94]

Sailboat robots have also been developed in order to make measurements at the surface of the ocean. A typical sailboat robot is Vaimos,[95] built by IFREMER and ENSTA-Bretagne. Since sailboat robots are propelled by the wind, battery energy is used only for the computer, for communication, and for the actuators (to tune the rudder and the sail). If the robot is equipped with solar panels, it could theoretically navigate forever. The two main competitions of sailboat robots are WRSC, which takes place every year in Europe, and Sailbot.

Though a significant percentage of robots in commission today are either human controlled or operate in a static environment, there is increasing interest in robots that can operate autonomously in a dynamic environment. These robots require some combination of navigation hardware and software in order to traverse their environment. In particular, unforeseen events (e.g. people and other obstacles that are not stationary) can cause problems or collisions. Some highly advanced robots, such as ASIMO and Mein robot, have particularly good robot navigation hardware and software. Also, self-controlled cars such as Ernst Dickmanns’ driverless car, and the entries in the DARPA Grand Challenge, are capable of sensing the environment well and subsequently making navigational decisions based on this information. Most of these robots employ a GPS navigation device with waypoints, along with radar, sometimes combined with other sensory data such as lidar, video cameras, and inertial guidance systems for better navigation between waypoints.

The state of the art in sensory intelligence for robots will have to progress through several orders of magnitude if we want the robots working in our homes to go beyond vacuum-cleaning the floors. If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, and so any interface will need to be extremely intuitive. Science fiction authors also typically assume that robots will eventually be capable of communicating with humans through speech, gestures, and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is unnatural for the robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO, or Data of Star Trek: The Next Generation.

Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech.[96] The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, etc. It becomes even harder when the speaker has a different accent.[97] Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first “voice input system” which recognized “ten digits spoken by a single user with 100% accuracy” in 1952.[98] Currently, the best systems can recognize continuous, natural speech, up to 160 words per minute, with an accuracy of 95%.[99]

Other hurdles exist when allowing the robot to use voice for interacting with humans. For social reasons, synthetic voice proves suboptimal as a communication medium,[100] making it necessary to develop the emotional component of robotic voice through various techniques.[101][102]

One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. In both of these cases, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognizing gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate “down the road, then turn right”. It is likely that gestures will make up a part of the interaction between humans and robots.[103] A great many systems have been developed to recognize human hand gestures.[104]

Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon may be able to do the same for humans and robots. Robotic faces have been constructed by Hanson Robotics using their elastic polymer called Frubber, allowing a large number of facial expressions due to the elasticity of the rubber facial coating and embedded subsurface motors (servos).[105] The coating and servos are built on a metal skull. A robot should know how to approach a human, judging by their facial expression and body language. Whether the person is happy, frightened, or crazy-looking affects the type of interaction expected of the robot. Likewise, robots like Kismet and the more recent addition Nexi[106] can produce a range of facial expressions, allowing them to have meaningful social exchanges with humans.[107]

Artificial emotions can also be generated, composed of a sequence of facial expressions and/or gestures. As can be seen from the movie Final Fantasy: The Spirits Within, the programming of these artificial emotions is complex and requires a large amount of human observation. To simplify this programming in the movie, presets were created together with a special software program. This decreased the amount of time needed to make the film. These presets could possibly be transferred for use in real-life robots.

Many of the robots of science fiction have a personality, something which may or may not be desirable in the commercial robots of the future.[108] Nevertheless, researchers are trying to create robots which appear to have a personality:[109][110] i.e. they use sounds, facial expressions, and body language to try to convey an internal state, which may be joy, sadness, or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions.[111]

The Socially Intelligent Machines Lab of the Georgia Institute of Technology researches new concepts of guided teaching interaction with robots. The aim of the project is for a social robot to learn task goals from human demonstrations without prior knowledge of high-level concepts. These new concepts are grounded in low-level continuous sensor data through unsupervised learning, and task goals are subsequently learned using a Bayesian approach. These concepts can be used to transfer knowledge to future tasks, resulting in faster learning of those tasks. The results are demonstrated by the robot Curi, which can scoop pasta from a pot onto a plate and serve the sauce on top.[112]


Colonization of Mars – Wikipedia

Posted: at 11:32 pm

Mars is the focus of much scientific study about possible human colonization. Its surface conditions and the presence of water on Mars make it arguably the most hospitable of the planets in the Solar System, other than Earth. Mars requires less energy per unit mass (delta-v) to reach from Earth than any planet except Venus.

One of Elon Musk’s stated goals through his company SpaceX is to make such colonization possible by providing transportation, and to “help humanity establish a permanent, self-sustaining colony on [Mars] within the next 50 to 100 years”.[1]

Earth is similar to its “sister planet” Venus in bulk composition, size and surface gravity, but Mars’s similarities to Earth are more compelling when considering colonization. These include:

Conditions on the surface of Mars are closer to the conditions on Earth in terms of temperature and atmospheric pressure than on any other planet or moon, except for the cloud tops of Venus.[21] However, the surface is not hospitable to humans or most known life forms due to greatly reduced air pressure, and an atmosphere with only 0.1% oxygen.

In 2012, it was reported that some lichen and cyanobacteria survived and showed remarkable adaptation capacity for photosynthesis after 34 days in simulated Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR).[22][23][24] Some scientists think that cyanobacteria could play a role in the development of self-sustainable manned outposts on Mars.[25] They propose that cyanobacteria could be used directly for various applications, including the production of food, fuel and oxygen, but also indirectly: products from their culture could support the growth of other organisms, opening the way to a wide range of life-support biological processes based on Martian resources.[25]

Humans have explored parts of Earth that match some conditions on Mars. Based on NASA rover data, temperatures on Mars (at low latitudes) are similar to those in Antarctica.[26] The atmospheric pressure at the highest altitudes reached by manned balloon ascents (35 km (114,000 ft) in 1961,[27] 38 km in 2012) is similar to that on the surface of Mars.[28]

Human survival on Mars would require complex life-support measures and living in artificial environments.

There is much discussion regarding the possibility of terraforming Mars to allow a wide variety of life forms, including humans, to survive unaided on Mars’s surface, including the technologies needed to do so.[29]

Mars has no global magnetosphere as Earth does. Combined with a thin atmosphere, this permits a significant amount of ionizing radiation to reach the Martian surface. The Mars Odyssey spacecraft carries an instrument, the Mars Radiation Environment Experiment (MARIE), to measure the radiation. MARIE found that radiation levels in orbit above Mars are 2.5 times higher than at the International Space Station. The average daily dose was about 220 μGy (22 mrad), equivalent to 0.08 Gy per year.[30] A three-year exposure to such levels would be close to the safety limits currently adopted by NASA.[citation needed] Levels at the Martian surface would be somewhat lower and might vary significantly at different locations depending on altitude and local magnetic fields. Building living quarters underground (possibly in lava tubes that are already present) would significantly lower the colonists’ exposure to radiation. Occasional solar proton events (SPEs) produce much higher doses.
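As a rough consistency check on the MARIE figures (assuming the daily dose is about 220 micrograys, i.e. 22 mrad), the daily and yearly numbers do line up:

```python
# Integrating the MARIE daily dose over a year and over a
# three-year mission. 1 microgray = 1e-6 gray.
daily_dose_uGy = 220  # approximate orbital dose per day

yearly_dose_Gy = daily_dose_uGy * 1e-6 * 365
print(round(yearly_dose_Gy, 3))      # -> 0.08 Gy/year

three_year_dose_Gy = 3 * yearly_dose_Gy
print(round(three_year_dose_Gy, 2))  # -> 0.24 Gy over a three-year mission
```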

Much remains to be learned about space radiation. In 2003, NASA’s Lyndon B. Johnson Space Center opened a facility, the NASA Space Radiation Laboratory, at Brookhaven National Laboratory, that employs particle accelerators to simulate space radiation. The facility studies its effects on living organisms, as well as experimenting with shielding techniques.[31] Initially, there was some evidence that this kind of low level, chronic radiation is not quite as dangerous as once thought; and that radiation hormesis occurs.[32] However, results from a 2006 study indicated that protons from cosmic radiation may cause twice as much serious damage to DNA as previously estimated, exposing astronauts to greater risk of cancer and other diseases.[33] As a result of the higher radiation in the Martian environment, the summary report of the Review of U.S. Human Space Flight Plans Committee released in 2009 reported that “Mars is not an easy place to visit with existing technology and without a substantial investment of resources.”[33] NASA is exploring a variety of alternative techniques and technologies such as deflector shields of plasma to protect astronauts and spacecraft from radiation.[33]

Mars requires less energy per unit mass (delta-v) to reach from Earth than any planet except Venus. Using a Hohmann transfer orbit, a trip to Mars requires approximately nine months in space.[34] Modified transfer trajectories that cut the travel time down to six or seven months in space are possible with incrementally higher amounts of energy and fuel compared to a Hohmann transfer orbit, and are in standard use for robotic Mars missions. Shortening the travel time below about six months requires higher delta-v and an exponentially increasing amount of fuel, and is not feasible with chemical rockets, but might be feasible with advanced spacecraft propulsion technologies, some of which have already been tested, such as the Variable Specific Impulse Magnetoplasma Rocket[35] and nuclear rockets. In the former case, a trip time of forty days could be attainable,[36] and in the latter, a trip time down to about two weeks.[37] In 2016, NASA scientists said they could further reduce travel time to Mars down to “as little as 72 hours” with the use of a “photonic propulsion” system instead of the fuel-based rocket propulsion system.[38]
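The nine-month figure follows from Kepler's third law applied to the transfer ellipse. A minimal sketch using mean orbital radii (circular, coplanar orbits assumed):

```python
import math

MU_SUN = 1.327e20   # Sun's gravitational parameter, m^3/s^2
R_EARTH = 1.496e11  # mean orbital radius of Earth, m
R_MARS = 2.279e11   # mean orbital radius of Mars, m

def hohmann_transfer_days(r1, r2):
    """One-way flight time on a Hohmann transfer: half the period
    of an ellipse with semi-major axis (r1 + r2) / 2."""
    a = (r1 + r2) / 2
    return math.pi * math.sqrt(a**3 / MU_SUN) / 86400

print(round(hohmann_transfer_days(R_EARTH, R_MARS)))  # -> 259 days, i.e. roughly 8.5 months
```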

During the journey the astronauts are subject to radiation, which requires a means to protect them. Cosmic radiation and solar wind cause DNA damage, which increases the risk of cancer significantly. The effect of long term travel in interplanetary space is unknown, but scientists estimate an added risk of between 1% and 19%, most likely 3.4%, for men to die of cancer because of the radiation during the journey to Mars and back to Earth. For women the probability is higher due to their larger glandular tissues.[39]

Mars has a gravity 0.38 times that of Earth, and the density of its atmosphere is about 0.6% of that on Earth.[40] The relatively strong gravity and the presence of aerodynamic effects make it difficult to land heavy, crewed spacecraft with thrusters only, as was done with the Apollo Moon landings, yet the atmosphere is too thin for aerodynamic effects to be of much help in aerobraking and landing a large vehicle. Landing piloted missions on Mars will require braking and landing systems different from anything used to land crewed spacecraft on the Moon or robotic missions on Mars.[41]

If one assumes that carbon nanotube construction material will be available with a strength of 130 GPa, then a space elevator could be built to land people and material on Mars.[42] A space elevator on Phobos has also been proposed.[43]

Colonization of Mars will require a wide variety of equipment, both equipment to directly provide services to humans and production equipment used to produce food, propellant, water, energy and breathable oxygen, in order to support human colonization efforts. Required equipment will include:[37]

According to Elon Musk, “even at a million people [working on Mars] you’re assuming an incredible amount of productivity per person, because you would need to recreate the entire industrial base on Mars… You would need to mine and refine all of these different materials, in a much more difficult environment than Earth”.[46]

Communications with Earth are relatively straightforward during the half-sol when Earth is above the Martian horizon. NASA and ESA included communications relay equipment in several of the Mars orbiters, so Mars already has communications satellites. While these will eventually wear out, additional orbiters with communication relay capability are likely to be launched before any colonization expeditions are mounted.

The one-way communication delay due to the speed of light ranges from about 3 minutes at closest approach (approximated by perihelion of Mars minus aphelion of Earth) to 22 minutes at the largest possible superior conjunction (approximated by aphelion of Mars plus aphelion of Earth). Real-time communication, such as telephone conversations or Internet Relay Chat, between Earth and Mars would be highly impractical due to the long time lags involved. NASA has found that direct communication can be blocked for about two weeks every synodic period, around the time of superior conjunction when the Sun is directly between Mars and Earth,[47] although the actual duration of the communications blackout varies from mission to mission depending on various factors, such as the amount of link margin designed into the communications system and the minimum data rate that is acceptable from a mission standpoint. In reality most missions at Mars have had communications blackout periods of the order of a month.[48]
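The 3- and 22-minute bounds can be reproduced from the distances named above, using standard approximate values for the orbital extremes:

```python
C = 299_792_458   # speed of light, m/s
AU = 1.496e11     # astronomical unit, m

# Approximate orbital extremes: Mars perihelion ~1.381 AU,
# Mars aphelion ~1.666 AU, Earth aphelion ~1.017 AU.
closest = (1.381 - 1.017) * AU   # perihelion of Mars minus aphelion of Earth
farthest = (1.666 + 1.017) * AU  # aphelion of Mars plus aphelion of Earth

def one_way_delay_min(distance_m):
    return distance_m / C / 60

print(round(one_way_delay_min(closest)))   # -> 3 minutes
print(round(one_way_delay_min(farthest)))  # -> 22 minutes
```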

A satellite at the L4 or L5 Earth–Sun Lagrangian point could serve as a relay during this period to solve the problem; even a constellation of communications satellites would be a minor expense in the context of a full colonization program. However, the size and power of the equipment needed for these distances make the L4 and L5 locations unrealistic for relay stations, and the inherent stability of these regions, although beneficial in terms of station-keeping, also attracts dust and asteroids, which could pose a risk.[49] Despite that concern, the STEREO probes passed through the L4 and L5 regions without damage in late 2009.

Recent work by the University of Strathclyde’s Advanced Space Concepts Laboratory, in collaboration with the European Space Agency, has suggested an alternative relay architecture based on highly non-Keplerian orbits. These are a special kind of orbit produced when continuous low-thrust propulsion, such as that produced from an ion engine or solar sail, modifies the natural trajectory of a spacecraft. Such an orbit would enable continuous communications during solar conjunction by allowing a relay spacecraft to “hover” above Mars, out of the orbital plane of the two planets.[50] Such a relay avoids the problems of satellites stationed at either L4 or L5 by being significantly closer to the surface of Mars while still maintaining continuous communication between the two planets.

The path to a human colony could be prepared by robotic systems such as the Mars Exploration Rovers Spirit, Opportunity and Curiosity. These systems could help locate resources, such as ground water or ice, that would help a colony grow and thrive. The lifetimes of these systems would be measured in years and even decades, and as recent developments in commercial spaceflight have shown, it may be that these systems will involve private as well as government ownership. These robotic systems also have a reduced cost compared with early crewed operations, and have less political risk.

Such systems might lay the groundwork for early crewed landings and bases, by producing various consumables including fuel, oxidizers, water, and construction materials. Establishing power, communications, shelter, heating, and manufacturing basics can begin with robotic systems, if only as a prelude to crewed operations.

Mars Surveyor 2001 Lander MIP (Mars ISPP Precursor) was to demonstrate manufacture of oxygen from the atmosphere of Mars,[51] and test solar cell technologies and methods of mitigating the effect of Martian dust on the power systems.[52][needs update]

Before any people are transported to Mars on the notional 2030s Interplanetary Transport System envisioned by SpaceX, a number of robotic cargo missions would be undertaken first in order to transport the requisite equipment, habitats and supplies.[53] Equipment that would be necessary would include “machines to produce fertilizer, methane and oxygen from Mars’ atmospheric nitrogen and carbon dioxide and the planet’s subsurface water ice” as well as construction materials to build transparent domes for initial agricultural areas.[54]

In 1948, Wernher von Braun described in his book The Mars Project how a fleet of 10 spaceships could be built using 1,000 three-stage rockets. These could bring a population of 70 people to Mars.

All of the early human mission concepts to Mars as conceived by national governmental space programs, such as those being tentatively planned by NASA, FKA and ESA, would not be direct precursors to colonization. They are intended solely as exploration missions, just as the Apollo missions to the Moon were not planned to be sites of a permanent base.

Colonization requires the establishment of permanent bases that have potential for self-expansion. Famous proposals for building such bases are the Mars Direct and Semi-Direct plans, advocated by Robert Zubrin.[37]

Other proposals that envision the creation of a settlement have come from Jim McLane and Bas Lansdorp (the man behind Mars One, which envisions no planned return flight for the humans embarking on the journey),[55] as well as from Elon Musk whose SpaceX company, as of 2015[update], is funding development work on a space transportation system called the Interplanetary Transport System.[56][57]

As with early colonies in the New World, economics would be a crucial aspect to a colony’s success. The reduced gravity well of Mars and its position in the Solar System may facilitate Mars–Earth trade and may provide an economic rationale for continued settlement of the planet. Given its size and resources, this might eventually be a place to grow food and produce equipment to mine the asteroid belt.

A major economic problem is the enormous up-front investment required to establish the colony and perhaps also terraform the planet.

Some early Mars colonies might specialize in developing local resources for Martian consumption, such as water and/or ice. Local resources can also be used in infrastructure construction.[58] One source of Martian ore currently known to be available is metallic iron in the form of nickel–iron meteorites. Iron in this form is more easily extracted than from the iron oxides that cover the planet.

Another main inter-Martian trade good during early colonization could be manure.[59] Assuming that life doesn’t exist on Mars, the soil is going to be very poor for growing plants, so manure and other fertilizers will be valued highly in any Martian civilization until the planet changes enough chemically to support growing vegetation on its own.

Solar power is a candidate for power for a Martian colony. Solar insolation (the amount of solar radiation that reaches Mars) is about 42% of that on Earth, since Mars is about 52% farther from the Sun and insolation falls off with the square of the distance. However, the thin atmosphere would allow almost all of that energy to reach the surface, as compared to Earth, where the atmosphere absorbs roughly a quarter of the solar radiation. Sunlight on the surface of Mars would be much like a moderately cloudy day on Earth.[60]
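The 42% figure is essentially an inverse-square calculation; a quick check using the mean distance ratio (the exact published value varies slightly with how orbital eccentricity is averaged):

```python
# Mars orbits roughly 52% farther from the Sun than Earth, so its
# solar flux scales as 1 / distance_ratio^2.
mars_distance_ratio = 1.52
relative_insolation = 1 / mars_distance_ratio**2
print(round(relative_insolation * 100))  # -> 43, close to the article's ~42%

# Applied to the solar constant at Earth (~1361 W/m^2):
flux_at_mars = 1361 * relative_insolation
print(round(flux_at_mars))  # -> 589 W/m^2 above Mars's atmosphere
```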

Roughly speaking, colonization of Mars becomes possible when the necessary methods (such as space access via cheaper launch systems) become inexpensive enough to be covered by the cumulative funds that have been gathered for the purpose.

Although there are no immediate prospects for the large amounts of money required for any space colonization to be available given traditional launch costs,[61][full citation needed] there is some prospect of a radical reduction to launch costs in the 2010s, which would consequently lessen the cost of any efforts in that direction. With a published price of US$56.5 million per launch of up to 13,150 kg (28,990 lb) payload[62] to low Earth orbit, SpaceX Falcon 9 rockets are already the “cheapest in the industry”.[63] Advancements currently being developed as part of the SpaceX reusable launch system development program to enable reusable Falcon 9s “could drop the price by an order of magnitude, sparking more space-based enterprise, which in turn would drop the cost of access to space still further through economies of scale.”[63] SpaceX’s reusable plans include Falcon Heavy and future methane-based launch vehicles including the Interplanetary Transport System. If SpaceX is successful in developing the reusable technology, it would be expected to “have a major impact on the cost of access to space”, and change the increasingly competitive market in space launch services.[64]
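The quoted price and payload imply a cost per kilogram to low Earth orbit; a back-of-envelope division (not a published per-kilogram price):

```python
# Implied Falcon 9 cost per kg to LEO from the figures quoted above.
price_per_launch = 56.5e6  # USD
payload_to_leo = 13_150    # kg

cost_per_kg = price_per_launch / payload_to_leo
print(round(cost_per_kg))       # -> 4297 USD/kg

# The speculated order-of-magnitude drop from reusability would
# bring this to roughly:
print(round(cost_per_kg / 10))  # -> 430 USD/kg
```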

Alternative funding approaches might include the creation of inducement prizes. For example, the 2004 President’s Commission on Implementation of United States Space Exploration Policy suggested that an inducement prize contest should be established, perhaps by government, for the achievement of space colonization. One example provided was offering a prize to the first organization to place humans on the Moon and sustain them for a fixed period before they return to Earth.[65]

Mars Odyssey found what appear to be natural caves near the volcano Arsia Mons. It has been speculated that settlers could benefit from the shelter that these or similar structures could provide from radiation and micrometeoroids. Geothermal energy is also suspected in the equatorial regions.[66]

Several lava tube skylights on Mars have been located on the flanks of Arsia Mons. Earth-based examples indicate that some should have lengthy passages offering complete protection from radiation, and should be relatively easy to seal using on-site materials, especially in small subsections.[67]

Robotic spacecraft to Mars are required to be sterilized, to have at most 300,000 spores on the exterior of the craft, and to be more thoroughly sterilized if they contact “special regions” containing water;[68][69] otherwise there is a risk of contaminating not only the life-detection experiments but possibly the planet itself.

It is impossible to sterilize human missions to this level, as humans are host to typically a hundred trillion microorganisms of thousands of species of the human microbiome, and these cannot be removed while preserving the life of the human. Containment seems the only option, but it is a major challenge in the event of a hard landing.[70] There have been several planetary workshops on this issue, but with no final guidelines for a way forward yet.[71] Human explorers would also be vulnerable to back contamination to Earth if they become carriers of microorganisms.[72]

Mars colonization is advocated by several non-governmental groups for a range of reasons and with varied proposals. One of the oldest groups is the Mars Society, who promote a NASA program to accomplish human exploration of Mars and have set up Mars analog research stations in Canada and the United States. Mars to Stay advocates recycling emergency return vehicles into permanent settlements as soon as initial explorers determine permanent habitation is possible. Mars One, which went public in June 2012, aims to establish a fully operational permanent human colony on Mars by 2027, with funding coming from a reality TV show and other commercial exploitation, although this approach has been widely criticized as unrealistic and infeasible.[73][74][75]

Elon Musk founded SpaceX with the long-term goal of developing the technologies that will enable a self-sustaining human colony on Mars.[76][77] In 2015 he stated, “I think we’ve got a decent shot of sending a person to Mars in 11 or 12 years”.[78] Richard Branson, in his lifetime, is “determined to be a part of starting a population on Mars. I think it is absolutely realistic. It will happen… I think over the next 20 years, we will take literally hundreds of thousands of people to space and that will give us the financial resources to do even bigger things”.[79]

In June 2013, Buzz Aldrin, American engineer and former astronaut, and the second person to walk on the Moon, wrote an opinion, published in The New York Times, supporting a manned mission to Mars and viewing the Moon “not as a destination but more a point of departure, one that places humankind on a trajectory to homestead Mars and become a two-planet species.”[80] In August 2015, Aldrin, in association with the Florida Institute of Technology, presented a “master plan”, for NASA consideration, for astronauts, with a “tour of duty of ten years”, to colonize Mars before the year 2040.[81]

A few instances in fiction provide detailed descriptions of Mars colonization.

Colonization of Mars – Wikipedia

New Atheism – Wikipedia

Posted: October 19, 2016 at 4:10 am

New Atheism is the journalistic term used to describe the positions promoted by atheists of the twenty-first century. This modern-day atheism and secularism is advanced by critics of religion and religious belief,[1] a group of modern atheist thinkers and writers who advocate the view that superstition, religion and irrationalism should not simply be tolerated but should be countered, criticized, and exposed by rational argument wherever their influence arises in government, education and politics.[2]

New Atheism lends itself to and often overlaps with secular humanism and antitheism, particularly in its criticism of what many New Atheists regard as the indoctrination of children and the perpetuation of ideologies founded on belief in the supernatural.

The 2004 publication of The End of Faith: Religion, Terror, and the Future of Reason by Sam Harris, a bestseller in the United States, was joined over the next couple of years by a series of popular best-sellers by atheist authors.[3] Harris was motivated by the events of September 11, 2001, which he laid directly at the feet of Islam, while also directly criticizing Christianity and Judaism.[4] Two years later Harris followed up with Letter to a Christian Nation, which was also a severe criticism of Christianity.[5] Also in 2006, following his television documentary The Root of All Evil?, Richard Dawkins published The God Delusion, which was on the New York Times best-seller list for 51 weeks.[6]

In a 2010 column entitled “Why I Don’t Believe in the New Atheism”, Tom Flynn contends that what has been called “New Atheism” is neither a movement nor new, and that what was new was the publication of atheist material by big-name publishers, read by millions, and appearing on bestseller lists.[7]

On September 30, 2007 four prominent atheists (Richard Dawkins, Christopher Hitchens, Sam Harris, and Daniel Dennett) met at Hitchens’ residence for a private two-hour unmoderated discussion. The event was videotaped and titled “The Four Horsemen”.[9] During “The God Debate” in 2010, featuring Christopher Hitchens vs Dinesh D’Souza, the men were collectively referred to as the “Four Horsemen of the Non-Apocalypse”,[10] an allusion to the biblical Four Horsemen from the Book of Revelation.[11]

Sam Harris is the author of the bestselling non-fiction books The End of Faith, Letter to a Christian Nation, The Moral Landscape, and Waking Up: A Guide to Spirituality Without Religion, as well as two shorter works, initially published as e-books, Free Will[12] and Lying.[13] Harris is a co-founder of the Reason Project.

Richard Dawkins is the author of The God Delusion,[14] which was preceded by a Channel 4 television documentary titled The Root of all Evil?. He is also the founder of the Richard Dawkins Foundation for Reason and Science.

Christopher Hitchens was the author of God Is Not Great[15] and was named among the “Top 100 Public Intellectuals” by Foreign Policy and Prospect magazine. In addition, Hitchens served on the advisory board of the Secular Coalition for America. In 2010 Hitchens published his memoir Hitch-22 (a nickname provided by close personal friend Salman Rushdie, whom Hitchens always supported during and following The Satanic Verses controversy).[16] Shortly after its publication, Hitchens was diagnosed with esophageal cancer, which led to his death in December 2011.[17] Before his death, Hitchens published a collection of essays and articles in his book Arguably;[18] a short edition Mortality[19] was published posthumously in 2012. These publications and numerous public appearances provided Hitchens with a platform to remain an astute atheist during his illness, even speaking specifically on the culture of deathbed conversions and condemning attempts to convert the terminally ill, which he opposed as “bad taste”.[20][21]

Daniel Dennett, author of Darwin’s Dangerous Idea,[22] Breaking the Spell,[23] and many others, has also been a vocal supporter of The Clergy Project,[24] an organization that provides support for clergy in the US who no longer believe in God and cannot fully participate in their communities any longer.[25]

The “Four Horsemen” video, convened by Dawkins’ Foundation, can be viewed free online at the Foundation’s website.

After the death of Hitchens, Ayaan Hirsi Ali (who attended the 2012 Global Atheist Convention, which Hitchens was scheduled to attend) was referred to as the “plus one horse-woman”, since she was originally invited to the 2007 meeting of the “Horsemen” atheists but had to cancel at the last minute.[26] Hirsi Ali was born in Mogadishu, Somalia, and fled to the Netherlands in 1992 to escape an arranged marriage.[27] She became involved in Dutch politics, rejected faith, and became vocal in opposing Islamic ideology, especially concerning women, as exemplified by her books Infidel and The Caged Virgin.[28] Hirsi Ali was later involved in the production of the film Submission, for which her friend Theo van Gogh was murdered with a death threat to Hirsi Ali pinned to his chest.[29] This resulted in Hirsi Ali going into hiding and later immigrating to the United States, where she now resides and remains a prolific critic of Islam[30] and of the treatment of women in Islamic doctrine and society,[31] and a proponent of free speech and the freedom to offend.[32][33]

While “The Four Horsemen” are arguably the foremost proponents of atheism, there are a number of other notable current atheists, including: Lawrence M. Krauss (author of A Universe from Nothing),[34] James Randi (paranormal debunker and former illusionist),[35] Jerry Coyne (Why Evolution is True[36] and its complementary blog,[37] which specifically includes polemics against topical religious issues), Greta Christina (Why Are You Atheists So Angry?),[38] Victor J. Stenger (The New Atheism),[39] Michael Shermer (Why People Believe Weird Things),[40] David Silverman (President of the American Atheists and author of Fighting God: An Atheist Manifesto for a Religious World), Ibn Warraq (Why I Am Not a Muslim),[41] Matt Dillahunty (host of the Austin-based webcast and cable-access television show The Atheist Experience),[42] Bill Maher (writer and star of the 2008 documentary Religulous),[43] Steven Pinker (noted cognitive scientist, linguist, psychologist and author),[44] Julia Galef (co-host of the podcast Rationally Speaking), A.C. Grayling (philosopher, considered to be the “Fifth Horseman of New Atheism”), and Michel Onfray (Atheist Manifesto: The Case Against Christianity, Judaism, and Islam).

Many contemporary atheists write from a scientific perspective. Unlike previous writers, many of whom thought that science was indifferent, or even incapable of dealing with the “God” concept, Dawkins argues to the contrary, claiming the “God Hypothesis” is a valid scientific hypothesis,[45] having effects in the physical universe, and like any other hypothesis can be tested and falsified. Other contemporary atheists such as Victor Stenger propose that the personal Abrahamic God is a scientific hypothesis that can be tested by standard methods of science. Both Dawkins and Stenger conclude that the hypothesis fails any such tests,[46] and argue that naturalism is sufficient to explain everything we observe in the universe, from the most distant galaxies to the origin of life, species, and the inner workings of the brain and consciousness. Nowhere, they argue, is it necessary to introduce God or the supernatural to understand reality. Atheists have been associated with the argument from divine hiddenness and the idea that “absence of evidence is evidence of absence” when evidence can be expected.[citation needed]

Non-believers assert that many religious or supernatural claims (such as the virgin birth of Jesus and the afterlife) are scientific claims in nature. They argue, as do deists and Progressive Christians, for instance, that the issue of Jesus’ supposed parentage is not a question of “values” or “morals”, but a question of scientific inquiry.[47] Rational thinkers believe science is capable of investigating at least some, if not all, supernatural claims.[48] Institutions such as the Mayo Clinic and Duke University are attempting to find empirical support for the healing power of intercessory prayer.[49] According to Stenger, these experiments have found no evidence that intercessory prayer works.[50]

Stenger also argues in his book, God: The Failed Hypothesis, that a God having omniscient, omnibenevolent and omnipotent attributes, which he termed a 3O God, cannot logically exist.[51] A similar series of logical disproofs of the existence of a God with various attributes can be found in Michael Martin and Ricki Monnier’s The Impossibility of God,[52] or Theodore M. Drange’s article, “Incompatible-Properties Arguments”.[53]

Richard Dawkins has been particularly critical of the conciliatory view that science and religion are not in conflict, noting, for example, that the Abrahamic religions constantly deal in scientific matters. In a 1998 article published in Free Inquiry magazine,[47] and later in his 2006 book The God Delusion, Dawkins expresses disagreement with the view advocated by Stephen Jay Gould that science and religion are two non-overlapping magisteria (NOMA) each existing in a “domain where one form of teaching holds the appropriate tools for meaningful discourse and resolution”. In Gould’s proposal, science and religion should be confined to distinct non-overlapping domains: science would be limited to the empirical realm, including theories developed to describe observations, while religion would deal with questions of ultimate meaning and moral value. Dawkins contends that NOMA does not describe empirical facts about the intersection of science and religion, “it is completely unrealistic to claim, as Gould and many others do, that religion keeps itself away from science’s turf, restricting itself to morals and values. A universe with a supernatural presence would be a fundamentally and qualitatively different kind of universe from one without. The difference is, inescapably, a scientific difference. Religions make existence claims, and this means scientific claims.” Matt Ridley notes that religion does more than talk about ultimate meanings and morals, and science is not proscribed from doing the same. After all, morals involve human behavior, an observable phenomenon, and science is the study of observable phenomena. Ridley notes that there is substantial scientific evidence on evolutionary origins of ethics and morality.[54]

Sam Harris popularized the view that science, and thereby currently unknown objective facts, may instruct human morality in a globally comparable way. Harris’ book The Moral Landscape[55] and accompanying TED Talk, How Science Can Determine Moral Values,[56] propose that human well-being, and conversely suffering, may be thought of as a landscape with peaks and valleys representing the numerous ways to achieve extremes in human experience, and that there are objective states of well-being.

New atheism is politically engaged in a variety of ways. These include campaigns to reduce the influence of religion in the public sphere, attempts to promote cultural change (centering, in the United States, on the mainstream acceptance of atheism), and efforts to promote the idea of an “atheist identity”. Internal strategic divisions over these issues have also been notable, as are questions about the diversity of the movement in terms of its gender and racial balance.[57]

Edward Feser’s book The Last Superstition presents arguments based on the philosophy of Aristotle and Thomas Aquinas against New Atheism.[58] According to Feser it necessarily follows from Aristotelian–Thomistic metaphysics that God exists, that the human soul is immortal, and that the highest end of human life (and therefore the basis of morality) is to know God. Feser argues that science never disproved Aristotle’s metaphysics, but rather that modern philosophers decided to reject it on the basis of wishful thinking. In the latter chapters Feser proposes that scientism and materialism are based on premises that are inconsistent and self-contradictory and that these conceptions lead to absurd consequences.

Cardinal William Levada believes that New Atheism has misrepresented the doctrines of the church.[59] Cardinal Walter Kasper described New Atheism as “aggressive”, and he believed it to be the primary source of discrimination against Christians.[60] In a Salon interview, the journalist Chris Hedges argued that New Atheism propaganda is just as extreme as that of Christian right propaganda.[61]

The theologians Jeffrey Robbins and Christopher Rodkey take issue with what they regard as “the evangelical nature of the new atheism, which assumes that it has a Good News to share, at all cost, for the ultimate future of humanity by the conversion of as many people as possible.” They believe they have found similarities between new atheism and evangelical Christianity and conclude that the all-consuming nature of both “encourages endless conflict without progress” between both extremities.[62] Sociologist William Stahl said “What is striking about the current debate is the frequency with which the New Atheists are portrayed as mirror images of religious fundamentalists.”[63]

The atheist philosopher of science Michael Ruse has claimed that Richard Dawkins would fail “introductory” courses on the study of “philosophy or religion” (such as courses on the philosophy of religion), courses which are offered at many colleges and universities around the world.[64][65] Ruse also claims that the movement of New Atheism, which he perceives to be a “bloody disaster”, makes him ashamed, as a professional philosopher of science, to be among those who hold to an atheist position, particularly as New Atheism does science a “grave disservice” and does a “disservice to scholarship” at a more general level.[64][65]

Glenn Greenwald,[66][67] Toronto-based journalist and Mideast commentator Murtaza Hussain,[66][67] Salon columnist Nathan Lean,[67] scholars Wade Jacoby and Hakan Yavuz,[68] and historian of religion William Emilsen[69] have accused the New Atheist movement of Islamophobia. Jacoby and Yavuz assert that “a group of ‘new atheists’ such as Richard Dawkins, Sam Harris, and Christopher Hitchens” have “invoked Samuel Huntington’s ‘clash of civilizations’ theory to explain the current political contestation” and that this forms part of a trend toward “Islamophobia […] in the study of Muslim societies”.[68] Emilsen argues that “the ‘new’ in the new atheists’ writings is not their aggressiveness, nor their extraordinary popularity, nor even their scientific approach to religion, rather it is their attack not only on militant Islamism but also on Islam itself under the cloak of its general critique of religion”.[69] Hussain has alleged that leading figures in the New Atheist movement “have stepped in to give a veneer of scientific respectability to today’s politically useful bigotry”.[66][70]


Human Genome Project – Wikipedia

Posted: at 4:07 am

The Human Genome Project (HGP) is an international scientific research project with the goal of determining the sequence of chemical base pairs which make up human DNA, and of identifying and mapping all of the genes of the human genome from both a physical and a functional standpoint.[1] It remains the world’s largest collaborative biological project.[2] The idea was picked up by the US government in 1984, when planning started; the project was formally launched in 1990 and finally declared complete in 2003. Funding came from the US government through the National Institutes of Health (NIH) as well as numerous other groups from around the world. A parallel project was conducted outside of government by the Celera Corporation, or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centers in the United States, the United Kingdom, Japan, France, Germany, Canada, and China.[3]

The Human Genome Project originally aimed to map the nucleotides contained in a human haploid reference genome (more than three billion). The “genome” of any given individual is unique; mapping the “human genome” involves sequencing multiple variations of each gene.[4] In May 2016, scientists considered extending the HGP to include creating a synthetic human genome.[5] In June 2016, scientists formally announced HGP-Write, a plan to synthesize the human genome.[6][7]

The Human Genome Project was a 13-year-long, publicly funded project initiated in 1990 with the objective of determining the DNA sequence of the entire euchromatic human genome within 15 years.[8] In May 1985, Robert Sinsheimer organized a workshop to discuss sequencing the human genome,[9] but for a number of reasons the NIH was uninterested in pursuing the proposal. The following March, the Santa Fe Workshop was organized by Charles DeLisi and David Smith of the Department of Energy’s Office of Health and Environmental Research (OHER).[10] At the same time Renato Dulbecco proposed whole genome sequencing in an essay in Science.[11] James Watson followed two months later with a workshop held at the Cold Spring Harbor Laboratory.

The fact that the Santa Fe workshop was motivated and supported by a Federal Agency opened a path, albeit a difficult and tortuous one,[12] for converting the idea into public policy. In a memo to the Assistant Secretary for Energy Research (Alvin Trivelpiece), Charles DeLisi, who was then Director of OHER, outlined a broad plan for the project.[13] This started a long and complex chain of events which led to approved reprogramming of funds that enabled OHER to launch the Project in 1986, and to recommend the first line item for the HGP, which was in President Reagan’s 1988 budget submission,[12] and was ultimately approved by Congress. Of particular importance in Congressional approval was the advocacy of Senator Peter Domenici, whom DeLisi had befriended.[14] Domenici chaired the Senate Committee on Energy and Natural Resources, as well as the Budget Committee, both of which were key in the DOE budget process. Congress added a comparable amount to the NIH budget, thereby beginning official funding by both agencies.

Alvin Trivelpiece sought and obtained the approval of DeLisi’s proposal by Deputy Secretary William Flynn Martin. This chart[15] was used in the spring of 1986 by Trivelpiece, then Director of the Office of Energy Research in the Department of Energy, to brief Martin and Under Secretary Joseph Salgado regarding his intention to reprogram $4 million to initiate the project with the approval of Secretary Herrington. This reprogramming was followed by a line item budget of $16 million in the Reagan Administration’s 1987 budget submission to Congress.[16] It subsequently passed both Houses. The Project was planned for 15 years.[17]

Candidate technologies were already being considered for the proposed undertaking at least as early as 1985.[18]

In 1990, the two major funding agencies, DOE and NIH, developed a memorandum of understanding in order to coordinate plans and set the clock for the initiation of the Project to 1990.[19] At that time, David Galas was Director of the renamed Office of Biological and Environmental Research in the U.S. Department of Energy’s Office of Science and James Watson headed the NIH Genome Program. In 1993, Aristides Patrinos succeeded Galas and Francis Collins succeeded James Watson, assuming the role of overall Project Head as Director of the U.S. National Institutes of Health (NIH) National Center for Human Genome Research (which would later become the National Human Genome Research Institute). A working draft of the genome was announced in 2000 and the papers describing it were published in February 2001. A more complete draft was published in 2003, and genome “finishing” work continued for more than a decade.

The $3-billion project was formally founded in 1990 by the US Department of Energy and the National Institutes of Health, and was expected to take 15 years.[20] In addition to the United States, the international consortium comprised geneticists in the United Kingdom, France, Australia, China and myriad other spontaneous relationships.[21]

Due to widespread international cooperation and advances in the field of genomics (especially in sequence analysis), as well as major advances in computing technology, a ‘rough draft’ of the genome was finished in 2000 (announced jointly by U.S. President Bill Clinton and the British Prime Minister Tony Blair on June 26, 2000).[22] This first available rough draft assembly of the genome was completed by the Genome Bioinformatics Group at the University of California, Santa Cruz, primarily led by then graduate student Jim Kent. Ongoing sequencing led to the announcement of the essentially complete genome on April 14, 2003, two years earlier than planned.[23][24] In May 2006, another milestone was passed on the way to completion of the project, when the sequence of the last chromosome was published in Nature.[25]

The project did not aim to sequence all the DNA found in human cells. It sequenced only “euchromatic” regions of the genome, which make up about 90% of the genome. The other regions, called “heterochromatic”, are found in centromeres and telomeres, and were not sequenced under the project.[26]

The Human Genome Project was declared complete in April 2003. An initial rough draft of the human genome was available in June 2000, and by February 2001 a working draft had been completed and published, followed by the final sequencing mapping of the human genome on April 14, 2003. Although this was reported to cover 99% of the euchromatic human genome with 99.99% accuracy, a major quality assessment of the human genome sequence, published on May 27, 2004, indicated that over 92% of sampling exceeded 99.99% accuracy, which was within the intended goal.[27] Further analyses and papers on the HGP continue to be published.[28]

The sequencing of the human genome holds benefits for many fields, from molecular medicine to human evolution. The Human Genome Project, through its sequencing of the DNA, can help advance many areas, including: genotyping of specific viruses to direct appropriate treatment; identification of mutations linked to different forms of cancer; the design of medications and more accurate prediction of their effects; advancement in forensic applied sciences; biofuels and other energy applications; agriculture, animal husbandry and bioprocessing; risk assessment; and bioarcheology, anthropology and evolution. Another proposed benefit is the commercial development of genomics research related to DNA-based products, a multibillion-dollar industry.

The sequence of the DNA is stored in databases available to anyone on the Internet. The U.S. National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations, such as the UCSC Genome Browser at the University of California, Santa Cruz,[29] and Ensembl[30] present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data, because the data itself is difficult to interpret without such programs. Generally speaking, advances in genome sequencing technology have followed Moore’s Law, a concept from computer science which states that integrated circuits can increase in complexity at an exponential rate.[31] This means that the speeds at which whole genomes can be sequenced can increase at a similar rate, as was seen during the development of the above-mentioned Human Genome Project.
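The Moore’s-Law analogy can be made concrete with a toy doubling model (the two-year doubling period and unit base capacity are assumptions for illustration only, not measured values):

```python
# Toy model of exponential growth in sequencing capacity: if capacity
# doubles every `doubling_period_years`, it grows as 2**(t / period).
def capacity_after(years, base_capacity=1.0, doubling_period_years=2.0):
    """Capacity relative to `base_capacity` after `years` of growth."""
    return base_capacity * 2 ** (years / doubling_period_years)

print(capacity_after(10))   # 32.0 — five doublings in ten years
```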

The process of identifying the boundaries between genes and other features in a raw DNA sequence is called genome annotation and is in the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. Beginning in 2008, a new technology known as RNA-seq was introduced that allowed scientists to directly sequence the messenger RNA in cells. This replaced previous methods of annotation, which relied on inherent properties of the DNA sequence, with direct measurement, which was much more accurate. Today, annotation of the human genome and other genomes relies primarily on deep sequencing of the transcripts in every human tissue using RNA-seq. These experiments have revealed that over 90% of genes contain at least one and usually several alternative splice variants, in which the exons are combined in different ways to produce 2 or more gene products from the same locus.[citation needed]

The genome published by the HGP does not represent the sequence of every individual’s genome. It is the combined mosaic of a small number of anonymous donors, all of European origin. The HGP genome is a scaffold for future work in identifying differences among individuals. Subsequent projects sequenced the genomes of multiple distinct ethnic groups, though as of today there is still only one “reference genome.”[citation needed]

Key findings were reported in the draft (2001) and complete (2004) genome sequences.

The Human Genome Project was started in 1990 with the goal of sequencing and identifying all three billion chemical units in the human genetic instruction set, finding the genetic roots of disease and then developing treatments. It is considered a Mega Project because the human genome has approximately 3.3 billion base-pairs. With the sequence in hand, the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes.[19][36]

It was far too expensive at that time to think of sequencing patients’ whole genomes. So the National Institutes of Health embraced the idea for a “shortcut”, which was to look just at sites on the genome where many people have a variant DNA unit. The theory behind the shortcut was that, since the major diseases are common, so too would be the genetic variants that caused them. Natural selection keeps the human genome free of variants that damage health before children are grown, the theory held, but fails against variants that strike later in life, allowing them to become quite common. (In 2002 the National Institutes of Health started a $138 million project called the HapMap to catalog the common variants in European, East Asian and African genomes.)[37]

The genome was broken into smaller pieces, approximately 150,000 base pairs in length.[36] These pieces were then ligated into a type of vector known as “bacterial artificial chromosomes”, or BACs, which are derived from bacterial chromosomes that have been genetically engineered. The vectors containing the genes can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small “shotgun” project and then assembled. The assembled 150,000-base-pair chunks were then pieced together to recreate the chromosomes. This is known as the “hierarchical shotgun” approach, because the genome is first broken into relatively large chunks, which are then mapped to chromosomes before being selected for sequencing.[38][39]
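The hierarchical shotgun strategy can be sketched at toy scale (the BAC and read sizes here are scaled down drastically, and assembly is trivial only because positional information is retained; real assembly must reconstruct order from overlaps between reads):

```python
import random

# Toy sketch of hierarchical ("clone-by-clone") shotgun sequencing.
random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(1_000))

# Step 1: break the genome into large, mapped "BAC" chunks
# (real BAC inserts are roughly 150,000 base pairs; 150 here).
bac_size = 150
bacs = [genome[i:i + bac_size] for i in range(0, len(genome), bac_size)]

# Step 2: shotgun each BAC independently into short reads.
read_len = 25
def shotgun(chunk):
    return [chunk[i:i + read_len] for i in range(0, len(chunk), read_len)]

# Step 3: assemble reads within each BAC, then place BACs in map order.
assembled = "".join("".join(shotgun(b)) for b in bacs)
print(assembled == genome)   # True — lossless at this toy scale
```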

Funding came from the US government through the National Institutes of Health in the United States, and a UK charity organization, the Wellcome Trust, as well as numerous other groups from around the world. The funding supported a number of large sequencing centers including those at Whitehead Institute, the Sanger Centre, Washington University in St. Louis, and Baylor College of Medicine.[20][40]

The United Nations Educational, Scientific and Cultural Organization (UNESCO) served as an important channel for the involvement of developing countries in the Human Genome Project.[41]

In 1998, a similar, privately funded quest was launched by the American researcher Craig Venter, and his firm Celera Genomics. Venter was a scientist at the NIH during the early 1990s when the project was initiated. The $300,000,000 Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project. The Celera approach was able to proceed at a much more rapid rate, and at a lower cost than the public project because it relied upon data made available by the publicly funded project.[42]

Celera used a technique called whole genome shotgun sequencing, employing pairwise end sequencing,[43] which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three billion base pair human genome.

Celera initially announced that it would seek patent protection on “only 200–300” genes, but later amended this to seeking “intellectual property protection” on “fully-characterized important structures” amounting to 100–300 targets. The firm eventually filed preliminary (“place-holder”) patent applications on 6,500 whole or partial genes. Celera also promised to publish their findings in accordance with the terms of the 1996 “Bermuda Statement”, by releasing new data annually (the HGP released its new data daily), although, unlike the publicly funded project, they would not permit free redistribution or scientific use of the data. The publicly funded competitors were compelled to release the first draft of the human genome before Celera for this reason. On July 7, 2000, the UCSC Genome Bioinformatics Group released a first working draft on the web. The scientific community downloaded about 500 GB of information from the UCSC genome server in the first 24 hours of free and unrestricted access.[44]

In March 2000, President Clinton announced that the genome sequence could not be patented, and should be made freely available to all researchers. The statement sent Celera’s stock plummeting and dragged down the biotechnology-heavy Nasdaq. The biotechnology sector lost about $50 billion in market capitalization in two days.

Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project’s scientific paper)[45] and Science (which published Celera’s paper[46]) described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions with 150,000 gaps and the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts were announced in 2003 and 2005, filling in approximately 92% of the sequence.

In the IHGSC international public-sector Human Genome Project (HGP), researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of many collected samples were processed as DNA resources. Thus the donor identities were protected so neither donors nor scientists could know whose DNA was sequenced. DNA clones from many different libraries were used in the overall project, with most of those libraries being created by Pieter J. de Jong’s lab. Much of the sequence (>70%) of the reference genome produced by the public HGP came from a single anonymous male donor from Buffalo, New York (code name RP11).[47][48]

HGP scientists used white blood cells from the blood of two male and two female donors (randomly selected from 20 of each), with each donor yielding a separate DNA library. One of these libraries (RP11) was used considerably more than others, due to quality considerations. One minor technical issue is that male samples contain just over half as much DNA from the sex chromosomes (one X chromosome and one Y chromosome) compared to female samples (which contain two X chromosomes). The other 22 chromosomes (the autosomes) are the same for both sexes.

Although the main sequencing phase of the HGP has been completed, studies of DNA variation continue in the International HapMap Project, whose goal is to identify patterns of single-nucleotide polymorphism (SNP) groups (called haplotypes, or haps). The DNA samples for the HapMap came from a total of 270 individuals: Yoruba people in Ibadan, Nigeria; Japanese people in Tokyo; Han Chinese in Beijing; and the French Centre d'Étude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe.
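What "identifying haplotypes" amounts to can be shown with a small sketch (hypothetical alleles, not real HapMap genotypes): tally which combinations of SNP alleles actually travel together along chromosomes, since only a handful of the combinatorially possible patterns occur in a population.

```python
from collections import Counter

# Hypothetical phased SNP data: the allele present at four SNP sites,
# read along each of six chromosomes. A haplotype is simply the
# combination of alleles carried on one chromosome.
chromosomes = ["ACGT", "ACGT", "ATGT", "ACCT", "ATGT", "ACGT"]

haplotype_counts = Counter(chromosomes)
# Of the 2**4 = 16 combinations conceivable at four biallelic sites, only
# a few actually occur; these recurring "haps" and their population
# frequencies are what a HapMap-style catalogue records.
frequencies = {h: n / len(chromosomes) for h, n in haplotype_counts.items()}
```

The practical payoff is that genotyping a few "tag" SNPs per haplotype block is enough to infer the rest, which made genome-wide association studies affordable.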

In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science) that his DNA was one of 21 samples in the pool, five of which were selected for use.[49][50]

In 2007, a team led by Jonathan Rothberg published James Watson’s entire genome, unveiling the six-billion-nucleotide genome of a single individual for the first time.[51]

The work on interpretation and analysis of genome data is still in its initial stages. It is anticipated that detailed knowledge of the human genome will provide new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics, started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, hemostasis disorders, cystic fibrosis, liver diseases and many others. Also, research into the etiologies of cancers, Alzheimer’s disease and other conditions of clinical interest is considered likely to benefit from genome information and may in the long term lead to significant advances in their management.[37][52]

There are also many tangible benefits for biologists. For example, a researcher investigating a certain form of cancer may have narrowed down his/her search to a particular gene. By visiting the human genome database on the World Wide Web, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its function(s), its evolutionary relationships to other human genes, or to genes in mice or yeast or fruit flies, possible detrimental mutations, interactions with other genes, body tissues in which this gene is activated, and diseases associated with this gene or other datatypes. Further, deeper understanding of the disease processes at the level of molecular biology may suggest new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that may not have been possible without it.[53]

The analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the similarities and differences between humans and our closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data in this project.[37][54]

The project inspired and paved the way for genomic work in other fields, such as agriculture. For example, by studying the genetic composition of Triticum aestivum, the world's most commonly used bread wheat, great insight has been gained into the ways that domestication has impacted the evolution of the plant.[55] Which loci are most susceptible to manipulation, and how does this play out in evolutionary terms? Genetic sequencing has allowed these questions to be addressed for the first time, as specific loci can be compared in wild and domesticated strains of the plant. This will allow for advances in genetic modification in the future, which could yield healthier, more disease-resistant wheat crops.

At the onset of the Human Genome Project, several ethical, legal, and social concerns were raised with regard to how increased knowledge of the human genome could be used to discriminate against people. One of the main concerns was the fear that employers and health insurance companies would refuse to hire or insure people because of a health concern indicated by someone’s genes.[56] In 1996 the United States passed the Health Insurance Portability and Accountability Act (HIPAA), which protects against the unauthorized and non-consensual release of individually identifiable health information to any entity not actively engaged in the provision of healthcare services to a patient.[57]

Along with identifying all of the approximately 20,000–25,000 genes in the human genome, the Human Genome Project also sought to address the ethical, legal, and social issues that were created by the onset of the project. To that end, the Ethical, Legal, and Social Implications (ELSI) program was founded in 1990. Five percent of the annual budget was allocated to address the ELSI arising from the project.[20][58] This budget started at approximately $1.57 million in 1990 and increased to approximately $18 million in 2014.[59]

Whilst the project may offer significant benefits to medicine and scientific research, some authors have emphasised the need to address the potential social consequences of mapping the human genome. “Molecularising disease and their possible cure will have a profound impact on what patients expect from medical help and the new generation of doctors’ perception of illness.”[60]

Read more from the original source:
Human Genome Project – Wikipedia

Posted in Genome

CFR-Trilateral pedophile Jeffrey Epstein's corporate …

Posted: October 15, 2016 at 5:29 am

Jeffrey Epstein is currently infamous for his conviction for soliciting a fourteen-year-old girl for prostitution and for allegedly orchestrating underage sex slave orgies at his private Virgin Islands mansion, where he purportedly pimped out underage girls to elite political figures such as Prince Andrew, Alan Dershowitz, and probably Bill Clinton as well (he also traveled to Thailand in 2001 with Prince Andrew, probably to indulge in the country's rampant child sex trade).

But before these sex scandals were the highlight of Epstein's celebrity, he was better known not just for his financial prowess, but also for his extensive funding of biotechnological and evolutionary science. With his bankster riches, he founded the Jeffrey Epstein VI Foundation, which established Harvard University's Program for Evolutionary Dynamics.

Epstein, a former CFR and Trilateral Commission Member, also sat on the board of Harvard's Mind, Brain, and Behavior Committee. He has furthermore been actively involved in . . . the Theoretical Biology Initiative at the Institute for Advanced Study at Princeton, the Quantum Gravity Program at the University of Pennsylvania, and the Santa Fe Institute, which is a transdisciplinary research community that expands the boundaries of scientific understanding . . . to discover, comprehend, and communicate the common fundamental principles in complex physical, computational, biological, and social systems.

The scope of Epstein's various science projects spans research into genetics, neuroscience, robotics, computer science, and artificial intelligence (AI). Altogether, the convergence of these science subfields comprises an interdisciplinary science known as transhumanism: the artificial perfection of human evolution through humankind's merger with technology. In fact, Epstein partners with Humanity+, a major transhumanism interest group.

Transhumanists believe that technologically upgrading humankind into a singularity will bring about a utopia in which poor health, the ravages of old age and even death itself will all be things of the past. In fact, eminent transhumanist Ray Kurzweil, chief of engineering at Google, believes that he will become godlike as a result of the singularity.

But the truth is that transhumanism is merely a more high-tech revision of eugenics conceptualized by eugenicist and UNESCO Director-General Julian Huxley. And when corporate philanthropists like pedophile Epstein, as well as Bill Gates, Mark Zuckerberg, Peter Thiel, and Google executives such as Eric Schmidt and Larry Page, are the major bankrollers behind these transhumanism projects, the whole enterprise seems ominously reminiscent of the corporate-philanthropic funding of American and Nazi eugenics.

In America, Charles Davenport's eugenics research at Cold Spring Harbor was bankrolled by elite financiers, such as the Harriman family, as well as robber barons and their nonprofit foundations such as the Rockefeller Foundation and the Carnegie Institute of Washington. Davenport collaborated with Nazi eugenicists who were likewise funded by the Rockefeller Foundation. In the end, these Rockefeller-funded eugenics programs contributed to the forced sterilization of over 60,000 Americans and the macabre human experimentation and genocide of the Nazi concentration camps. (This sinister collusion is thoroughly documented in War Against the Weak by award-winning investigative journalist Edwin Black).

If history has shown us that these are the sordid bioethics that result from corporate-funded biosocial science, shouldn't we be wary of the transhumanism projects of neo-robber barons like Epstein, Gates, Zuckerberg, Thiel, and the Google gang?

It should be noted that Epstein once sat on the board of Rockefeller University. At the same time, the Rockefeller Foundation, which has continued to finance Cold Spring Harbor programs as recently as 2010, also funds the Santa Fe Institute and the New York Academy of Sciences, both of which Epstein has been actively involved in.

The Rockefeller Foundation also funds the Malthusian-eugenic Population Council, which transhumanist Bill Gates likewise finances in carrying on the population reduction activism of his father, William H. Gates Sr.

And in 2013, the Rockefeller Foundation funded a transhumanistic white paper titled Dreaming the Future of Health for the Next 100 Years, which explores [r]e-engineering of humans into separate and unequal forms through genetic engineering or mixed human-robots.

So, considering that transhumanism, the outgrowth of eugenics, is being steered not only by twenty-first-century robber barons, but by corporatist monopoly men who are connected to the very transhumanist Rockefeller Foundation which funded Nazi eugenics, I suspect that transhumanist technology will not upgrade the common person. Rather, it will only be disseminated to the public in such a way, as Stanford University Professor Paul Saffo predicts, that converts social class hierarchies into bio(techno)logical hierarchies by artificially evolving the One Percent into a species separate from the unfit working poor, which will be downgraded as a slave class.

In his 1932 eugenic-engineering dystopia, Brave New World, Aldous Huxley (Julian's brother) depicts how biotechnology, drugs, and psychological conditioning would in the future be used to establish a Scientific Caste System ruled by a global scientific dictatorship. But Huxley was not warning us with his novel. As historian Joanne Woiak demonstrates in her journal article entitled "Designing a Brave New World: Eugenics, Politics, and Fiction," Aldous's "brave new world can . . . be understood as a serious design for social reform" (105). In a 1932 essay, titled "Science and Civilization," Huxley promoted his eugenic caste system: "in a scientific civilization society must be organized on a caste basis. The rulers and their advisory experts will be a kind of Brahmins controlling, in virtue of a special and mysterious knowledge, vast hordes of the intellectual equivalents of Sudras and Untouchables" (153-154).

With the aforementioned digital robber barons driving the burgeoning age of transhumanist neo-eugenics, I fear that Huxley's Scientific Caste System may become a reality. And with Epstein behind the wheel, the new GMO Sudras will likely consist of not only unskilled labor slaves, but also child sex slaves who, like the preadolescents in Brave New World, will be brainwashed with Elementary Sex Education, which will inculcate them with a smash-monogamy sexuality that will serve the elite World Controllers.


Huxley, Aldous. "Science and Civilization." Aldous Huxley: Complete Essays. Eds. Robert S. Baker and James Sexton. Vol. III. Chicago: Ivan R. Dee, 2000. 148-155. Print. 4 vols.

John Klyczek has an MA in English and is a college English instructor, concentrating on the history of global eugenics and Aldous Huxley's dystopian novel, Brave New World.

See original here:

CFR-Trilateral pedophile Jeffrey Epstein's corporate …

Posted in Neo-eugenics

Artificial intelligence positioned to be a game-changer – CBS …

Posted: October 13, 2016 at 5:27 am

The following script is from "Artificial Intelligence," which aired on Oct. 9, 2016. Charlie Rose is the correspondent. Nichole Marks, producer.

The search to improve and eventually perfect artificial intelligence is driving the research labs of some of the most advanced and best-known American corporations. They are investing billions of dollars and many of their best scientific minds in pursuit of that goal. All that money and manpower has begun to pay off.

In the past few years, artificial intelligence — or A.I. — has taken a big leap — making important strides in areas like medicine and military technology. What was once in the realm of science fiction has become day-to-day reality. You'll find A.I. routinely in your smart phone, in your car, in your household appliances and it is on the verge of changing everything.


It was, for decades, primitive technology. But it now has abilities we never expected. It can learn through experience — much the way humans do — and it won't be long before machines, like their human creators, begin thinking for themselves, creatively. Independently with judgment — sometimes better judgment than humans have.

The technology is so promising that IBM has staked its 105-year-old reputation on its version of artificial intelligence called Watson — one of the most sophisticated computing systems ever built.

John Kelly is the head of research at IBM and the godfather of Watson. He took us inside Watson's brain.

Charlie Rose: Oh, here we are.

John Kelly: Here we are.

Charlie Rose: You can feel the heat already.

John Kelly: You can feel the heat — the 85,000 watts. You can hear the blowers cooling it, but this is the hardware that the brains of Watson sat in.

Five years ago, IBM built this system made up of 90 servers and 15 terabytes of memory, enough capacity to process all the books in the American Library of Congress. That was necessary because Watson is an avid reader — able to consume the equivalent of a million books per second. Today, Watson's hardware is much smaller, but it is just as smart.


Charlie Rose: Tell me about Watson's intelligence.

John Kelly: So it has no inherent intelligence as it starts. It's essentially a child. But as it's given data and given outcomes, it learns, which is dramatically different than all computing systems in the past, which really learned nothing. And as it interacts with humans, it gets even smarter. And it never forgets.

[Announcer: This is Jeopardy!]

That helped Watson land a spot on one of the most challenging editions of the game show Jeopardy! in 2011.

[Announcer: An IBM computer system able to understand and analyze natural language Watson]

It took five years to teach Watson human language so it would be ready to compete against two of the shows best champions.


Because Watson's A.I. is only as intelligent as the data it ingests, Kelly's team trained it on all of Wikipedia and thousands of newspapers and books. It worked by using machine-learning algorithms to find patterns in that massive amount of data and form its own observations. When asked a question, Watson considered all the information and came up with an educated guess.
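That answer-with-a-confidence loop can be caricatured in a few lines (a hypothetical toy with a made-up two-entry corpus, nothing like IBM's real DeepQA pipeline): score each candidate answer by how much question evidence supports it, then report the best candidate together with a normalized confidence value.

```python
# Toy evidence-scoring sketch (hypothetical corpus and scoring rule):
# each candidate answer is scored by how many question keywords appear
# in its supporting text; the top score becomes a rough confidence.

corpus = {
    "Baghdad": "capital of Iraq on the Tigris river",
    "Cairo": "capital of Egypt on the Nile river",
}

def answer(question_keywords, candidates):
    """Return (best candidate, confidence in [0, 1])."""
    scores = {cand: sum(kw in text for kw in question_keywords)
              for cand, text in candidates.items()}
    total = sum(scores.values()) or 1  # avoid division by zero
    best = max(scores, key=scores.get)
    return best, scores[best] / total

guess, confidence = answer({"capital", "Iraq", "river"}, corpus)
```

The point of the confidence number is visible in the Jeopardy! exchange that follows: Watson answered even when it was only "32 percent sure," because a low-confidence educated guess can still be the best available move.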

[Alex Trebek: Watson, what are you gonna wager?]

IBM gambled its reputation on Watson that night. It wasn't a sure bet.

[Watson: I will take a guess: What is Baghdad?]

[Alex Trebek: Even though you were only 32 percent sure of your response, you are correct.]

The wager paid off. For the first time, a computer system proved it could actually master human language and win a game show, but that wasn't IBM's endgame.

Charlie Rose: Man, that's a big day, isn't it?

John Kelly: That's a big day

Charlie Rose: The day that you realize that, If we can do this

John Kelly: That's right.

Charlie Rose: –the future is ours.

John Kelly: That's right.

Charlie Rose: This is almost like you're watching something grow up. I mean, you've seen

John Kelly: It is.

Charlie Rose: –the birth, you've seen it pass the test. You're watching adolescence.

John Kelly: That's a great analogy. Actually, on that Jeopardy! game five years ago, I– when we put that computer system on television, we let go of it. And I often feel as though I was putting my child on a school bus and I would no longer have control over it.

Charlie Rose: 'Cause it was reacting to something that it did not know what would it be?

John Kelly: It had no idea what questions it was going to get. It was totally self-contained. I couldn't touch it any longer. And it's learned ever since. So fast-forward from that game show, five years later, we're in cancer now.

Charlie Rose: You're in cancer? You've gone

John Kelly: We're– yeah. To cancer

Charlie Rose: –from game show to cancer in five years?

John Kelly: –in five years. In five years.

Five years ago, Watson had just learned how to read and answer questions.

Now, it's gone through medical school. IBM has enlisted 20 top cancer institutes to tutor Watson in genomics and oncology. One of the places Watson is currently doing its residency is at the University of North Carolina at Chapel Hill. Dr. Ned Sharpless runs the cancer center here.

Charlie Rose: What did you know about artificial intelligence and Watson before IBM suggested it might make a contribution in medical care?

Ned Sharpless: I– not much, actually. I had watched it play Jeopardy!

Charlie Rose: Yes.

Ned Sharpless: So I knew about that. And I was very skeptical. I was, like, oh, this is what we need, the Jeopardy-playing computer. That's gonna solve everything.

Charlie Rose: So what fed your skepticism?

Ned Sharpless: Cancer's a tough business. There's a lot of false prophets and false promises. So I'm skeptical of, sort of, almost any new idea in cancer. I just didn't really understand what it would do.

What Watson's A.I. technology could do is essentially what Dr. Sharpless and his team of experts do every week at this molecular tumor board meeting.

They come up with possible treatment options for cancer patients who already failed standard therapies. They try to do that by sorting through all of the latest medical journals and trial data, but it is nearly impossible to keep up.

Charlie Rose: To be on top of everything thats out there, all the trials that have taken place around the world, it seems like an incredible task

Ned Sharpless: Well, yeah, it's r–

Charlie Rose: –for any one university, only one facility to do.

Ned Sharpless: Yeah, it's essentially undoable. And understand we have, sort of, 8,000 new research papers published every day. You know, no one has time to read 8,000 papers a day. So we found that we were deciding on therapy based on information that was always, in some cases, 12, 24 months out-of-date.

However, it's a task that's elementary for Watson.

Ned Sharpless: They taught Watson to read medical literature essentially in about a week.

Charlie Rose: Yeah.

Ned Sharpless: It was not very hard and then Watson read 25 million papers in about another week. And then, it also scanned the web for clinical trials open at other centers. And all of the sudden, we had this complete list that was, sort of, everything one needed to know.

Charlie Rose: Did this blow your mind?

Ned Sharpless: Oh, totally blew my mind.

Watson was proving itself to be a quick study. But, Dr. Sharpless needed further validation. He wanted to see if Watson could find the same genetic mutations that his team identified when they make treatment recommendations for cancer patients.

Ned Sharpless: We did an analysis of 1,000 patients, where the humans meeting in the Molecular Tumor Board– doing the best that they could do, had made recommendations. So not at all a hypothetical exercise. These are real-world patients where we really conveyed information that could guide care. In 99 percent of those cases, Watson found the same thing the humans recommended. That was encouraging.

Charlie Rose: Did it encourage your confidence in Watson?

Ned Sharpless: Yeah, it was– it was nice to see that– well, it was also– it encouraged my confidence in the humans, you know. Yeah. You know–

Charlie Rose: Yeah.

Ned Sharpless: But, the probably more exciting part about it is in 30 percent of patients Watson found something new. And so that's 300-plus people where Watson identified a treatment that a well-meaning, hard-working group of physicians hadn't found.

Charlie Rose: Because?

Ned Sharpless: The trial had opened two weeks earlier, a paper had come out in some journal no one had seen — you know, a new therapy had become approved

Charlie Rose: 30 percent though?

Ned Sharpless: We were very– that part was disconcerting. Because I thought it was gonna be 5 perc–

Charlie Rose: Disconcerting that the Watson found

Ned Sharpless: Yeah.

Charlie Rose: –30 percent?

Ned Sharpless: Yeah. These were real, you know, things that, by our own definition, we would've considered actionable had we known about it at the time of the diagnosis.

Some cases — like the case of Pam Sharpe — got a second look to see if something had been missed.

Charlie Rose: When did they tell you about the Watson trial?

Pam Sharpe: He called me in January. He said that they had sent off my sequencing to be studied by– at IBM by Watson. I said, like the

Charlie Rose: Your genomic sequencing?

Pam Sharpe: Right. I said, "Like the computer on Jeopardy!?" And he said, "Yeah–"

Charlie Rose: Yes. And what'd you think of that?

Pam Sharpe: Oh I thought, "Wow, that's pretty cool."

Pam has metastatic bladder cancer and for eight years has tried and failed several therapies. At 66 years old, she was running out of options.

Charlie Rose: And at this time for you, Watson was the best thing out there 'cause you'd tried everything else?

Pam Sharpe: I've been on standard chemo. I've been on a clinical trial. And the prescription chemo I'm on isn't working either.

One of the ways doctors can tell whether a drug is working is to analyze scans of cancer tumors. Watson had to learn to do that too, so IBM's John Kelly and his team taught the system how to see.

It can help diagnose diseases and catch things the doctors might miss.

John Kelly: And what Watson has done here, it has looked over tens of thousands of images, and it knows what normal looks like. And it knows what normal isn't. And it has identified where in this image are there anomalies that could be significant problems.
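The "knows what normal looks like" idea is, at its simplest, outlier detection. Here is a minimal sketch (toy numbers and a plain z-score rule; Watson's actual imaging models are assumed to be far more sophisticated): summarize many normal measurements, then flag any reading that falls far outside that distribution.

```python
import statistics

# Toy outlier detection: "training" is just summarizing what normal
# measurements look like; anything far from that summary is flagged.
# All readings below are made up for illustration.

normal_readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.05, 9.95]
mean = statistics.mean(normal_readings)
stdev = statistics.stdev(normal_readings)

def is_anomaly(value, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from normal."""
    return abs(value - mean) / stdev > threshold
```

A reading near the normal cluster passes quietly, while one several deviations away is flagged for a human to inspect — the same division of labor Kelly describes, where the system surfaces candidate anomalies and the physician judges their significance.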

[Billy Kim: You know, you had CT scan yesterday. There does appear to be progression of the cancer.]

Pam Sharpe's doctor, Billy Kim, arms himself with Watson's input to figure out her next steps.

[Billy Kim: I can show you the interface for Watson.]

Watson flagged a genetic mutation in Pam's tumor that her doctors initially overlooked. It enabled them to put a new treatment option on the table.

Charlie Rose: What would you say Watson has done for you?

Pam Sharpe: It may have extended my life. And I don't know how much time I've got. So by using this Watson, it's maybe saved me some time that I won't– wouldn't have had otherwise.

But, Pam sadly ran out of time. She died of an infection a few months after we met her, never getting the opportunity to see what a Watson-adjusted treatment could have done for her. Dr. Sharpless has now used Watson on more than 2,000 patients and is convinced doctors couldn't do the job alone. He has started using Watson as part of UNC's standard of care so it can help patients earlier than it reached Pam.

Charlie Rose: So what do you call Watson? A physician's assistant, a physician's tool, a physician's diagnostic mastermind?

See the article here:

Artificial intelligence positioned to be a game-changer – CBS …

Posted in Artificial Intelligence

Genetic Engineering – The Canadian Encyclopedia

Posted: October 4, 2016 at 1:21 pm

Interspecies gene transfer occurs naturally; interspecies hybrids produced by sexual means can lead to new species with genetic components of both pre-existing species. Interspecies hybridization played an important role in the development of domesticated plants.

Interspecies hybrids can also be produced artificially between sexually incompatible species. Cells of both plants and animals can be caused to fuse, producing viable hybrid cell-lines. Cultured hybrid plant cells can regenerate whole plants, so cell fusion allows crosses of sexually incompatible species. Most animal cells cannot regenerate whole individuals; however, the fusion of antibody-forming cells (which are difficult to culture) and "transformed" (cancer-like) cells gives rise to immortal cell-lines, each producing one particular antibody, so-called monoclonal antibodies. These cell-lines can be used for the commercial production of diagnostic and antidisease antibody preparations. (Fusions involving human cells play a major role in investigations of human heredity and GENETIC DISEASE.)

In nature, the transfer of genes between sexually incompatible species also occurs; for example, genes can be carried between species during viral infection. In its most limited sense, genetic engineering exploits the possibility of such transfers between remotely related species. There are two principal methods. First, genes from one organism can be implanted within another, so that the implanted genes function in the host organism. Alternatively, the new host organism (often a micro-organism) produces quantities of the DNA segment that contains a foreign gene, which can then be analysed and modified in the test tube before return to the species from which the gene originated. Dr Michael SMITH of the University of British Columbia was the corecipient of the 1993 NOBEL PRIZE in Chemistry for his invention of one of the most direct means to modify gene structure in the test tube, a technique known as in vitro mutagenesis.
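The effect of in vitro (site-directed) mutagenesis, stripped of all the chemistry, is a precise edit to a cloned gene. A hypothetical sketch with a made-up toy gene (the real technique uses mutagenic primers and enzymatic DNA synthesis, none of which is modeled here):

```python
# Cartoon of site-directed mutagenesis: swap one chosen codon in a
# cloned gene for a new one, leaving the rest of the sequence untouched.

def mutate_codon(gene, codon_index, new_codon):
    """Replace the codon at 0-based codon_index with new_codon."""
    assert len(new_codon) == 3 and len(gene) % 3 == 0
    i = codon_index * 3
    return gene[:i] + new_codon + gene[i + 3:]

wild_type = "ATGGCTGAATAA"                  # Met-Ala-Glu-Stop (toy gene)
mutant = mutate_codon(wild_type, 2, "GAT")  # Glu -> Asp at the third codon
```

The power of the approach is exactly this precision: a single chosen codon changes, so the altered protein differs from the wild type by one amino acid, and the modified gene can then be returned to the organism it came from.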

The continuing development of modern genetic engineering depends upon a number of major technical advances: cloning, gene cloning and DNA sequencing.

Cloning is the production of a group of genetically identical cells or individuals from a single starting cell; all members of a clone are effectively genetically identical. Most single-celled organisms, many plants and a few multicellular animals form clones as a means of reproduction – "asexual" reproduction. In humans, identical twins are clones, developing after the separation of the earliest cells formed from a single fertilized egg.

Cloning is not strictly genetic engineering, since the genome normally remains unaltered, but it is a practical means to propagate engineered organisms.

In combination with test-tube fertilization and embryo transplants, Alta Genetics of Calgary is a world leader in the use of artificial twinning as a tool in the genetic engineering of cattle. Manipulating plant hormones in plant cell cultures can yield clones consisting of millions of plantlets, which may be packageable to form artificial seed.

Cloning of genetically engineered animals is generally difficult. Clones of frogs have been produced by transplanting identical nuclei from a single embryo, each to a different nucleus-free egg. This technique is not applicable to mammals. However, clones of cells derived from very young mammalian embryos (embryonic stem cells) can be used to reconstitute whole animals and are widely used for genetic engineering of mice. There is no reported instance of cloning of humans by any artificial means. Nonetheless, frequent calls for regulation of human cloning and genetic engineering occur, which stem from the same considerations that lead most commentators to reject eugenics.

Gene cloning is fundamental to genetic engineering. A segment of DNA from any donor organism is joined in the test tube to a second DNA molecule, known as a vector, to form a "recombinant" DNA molecule.
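The joining step can be sketched in silico: a vector is opened at a restriction-enzyme recognition site and the donor segment is spliced in. The sketch below uses the real EcoRI recognition sequence (GAATTC), but the vector and donor sequences are hypothetical, and a real plasmid is circular rather than the linear string used here for simplicity:

```python
# Minimal sketch of forming a "recombinant" DNA molecule in silico:
# open the vector at a restriction site, then splice in a donor fragment.
# Vector and donor sequences are hypothetical examples.

ECORI_SITE = "GAATTC"  # EcoRI recognition sequence

def ligate_insert(vector: str, insert: str, site: str = ECORI_SITE) -> str:
    """Open the vector at its first restriction site and splice in the insert."""
    cut = vector.find(site)
    if cut == -1:
        raise ValueError("vector has no restriction site")
    cut += 1  # EcoRI cleaves after the first G of GAATTC
    return vector[:cut] + insert + vector[cut:]

vector = "TTGACGAATTCCCGA"   # hypothetical plasmid, linearised for clarity
donor = "AAATTTGGG"          # hypothetical donor gene segment
print(ligate_insert(vector, donor))
```

In the laboratory the cut ends are rejoined by a ligase enzyme; here the string concatenation stands in for that step.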

The design of appropriate vectors is an important practical area. Entry of DNA into each kind of cell is best mediated by different vectors. For BACTERIA, vectors are based on DNA molecules that move between cells in nature – bacterial VIRUSES and plasmids. Mammalian vectors usually derive from mammalian viruses. In higher plants, the favoured system is the infectious agent of crown-gall tumours.

Gene cloning in microbes has reached commercial application, notably with the marketing of human INSULIN produced by bacteria. Many similar products are now available, including growth hormones, blood-clotting factors and antiviral interferons. Gene cloning has revolutionized the understanding of genes, cells and diseases, particularly CANCER. It has raised the diagnosis of hereditary disease to high science, has contributed precise diagnostic tools for infectious disease and is fundamental to the use of DNA testing in forensic science.

The ability to clone genes led directly to the discovery of the means to analyse the precise chemical structure of DNA; that is, DNA sequencing. A worldwide co-operative project, the Human Genome Project, is now underway, with the object of cloning and sequencing the totality of human DNA, which contains perhaps 100,000 or more genes. To date, at least 80% of the DNA has been cloned and localized roughly within the human chromosome set. It is predicted that the sequencing will be effectively completed in less than 20 years. However, it is clear that the biological meaning of the DNA structure will take decades, if not centuries, to decipher.
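One small part of "deciphering" a sequenced stretch of DNA is mechanical: reading a coding sequence three bases at a time and translating each codon into an amino acid. A minimal sketch, using a hypothetical gene and only the handful of standard-genetic-code entries it needs, could be:

```python
# Sketch of one step in reading sequenced DNA: translating a coding
# sequence into protein, codon by codon, until a stop codon.
# Only the codons needed for this hypothetical example are listed.

CODON_TABLE = {
    "ATG": "M", "GCT": "A", "AAA": "K", "GGT": "G",
    "TAA": "*",  # stop codon
}

def translate(dna: str) -> str:
    """Translate a DNA coding sequence until a stop codon is reached."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE[dna[i:i + 3]]
        if amino == "*":
            break
        protein.append(amino)
    return "".join(protein)

print(translate("ATGGCTAAAGGTTAA"))  # hypothetical gene
```

The hard part, as the article notes, is everything this sketch leaves out: finding the genes, their regulation, and what the resulting proteins do.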

To avoid potential hazards deriving from genetic engineering, gene cloning even in bacteria is publicly regulated in Canada and the US by the scientific granting agencies and in some other countries by law. Biological containment, the deliberate hereditary debilitation of host cells and vectors, is required. In using mammals and higher plants, especially strict regulations apply, requiring physical isolation.

A great deal of work remains, both in the development of techniques and in the acquisition of fundamental knowledge needed to apply the techniques appropriately. Nonetheless, genetic engineering promises a world of tailor-made CROP plants and farm animals; cures for hereditary disease by gene replacement therapy; an analytical understanding of cancer and its treatment; and a world in which much of our present-day harsh chemical technology is replaced by milder, organism-dependent, fermentation processing.

In Canada, genetic engineering research is taking place in the laboratories of universities, industries, and federal and provincial research organizations. In the industrial sector, medical applications are being developed, for example at Ayerst Laboratories, Montréal, AVENTIS PASTEUR LTD., Toronto, and the INSTITUT ARMAND-FRAPPIER, Laval-des-Rapides, Québec.

Inco is researching applications for MINING and METALLURGY, and LABATT'S BREWERIES is applying recombinant DNA techniques to brewing technologies. A large number of Canadian companies engage in the research and development of genetically engineered products, particularly in the area of PHARMACEUTICALS and medical diagnostics. As many as half of the federally operated NATIONAL RESEARCH COUNCIL Research Institutes have significant involvement with genetic engineering, including the Biotechnology Research Institute (Montréal) and the Plant Biotechnology Institute (Saskatoon), whose mandates are largely in this area. The Veterinary Infectious Disease Organization, based at the University of Saskatchewan, is using genetic engineering technology for production of new vaccines for livestock diseases.


Read more from the original source:
Genetic Engineering – The Canadian Encyclopedia

Posted in Genetic Engineering | Comments Off on Genetic Engineering – The Canadian Encyclopedia

Entheogen – New World Encyclopedia

Posted: October 3, 2016 at 1:02 am

This entry covers entheogens as psychoactive substances used in religious or shamanic contexts. For general information about these substances and their use outside religious contexts, see psychedelics, dissociatives and deliriants.

An entheogen, in the strictest sense, is a psychoactive substance used in a religious or shamanic context. Historically, entheogens are derived primarily from plant sources and have been used in a variety of traditional religious practices. With the advent of organic chemistry, there now exist many synthetic substances with similar properties.

More broadly, the term entheogen is used to refer to such substances when used for their religious or spiritual effects, whether or not in a formal religious or traditional structure. This terminology is often chosen in contrast with recreational use of the same substances. These spiritual effects have been demonstrated in peer-reviewed studies (see below) though research remains problematic due to ongoing drug prohibition.

Entheogens have been used in a ritualized context for thousands of years. Examples of entheogens from ancient sources include: Greek: kykeon; African: Iboga; Vedic: Soma, Amrit. Chemicals used today as entheogens, whether in pure form or as plant-derived substances, include mescaline, DMT, LSD, psilocin, ibogaine, and salvinorin A.

The word "entheogen" is a neologism derived from two words of ancient Greek, ἔνθεος (entheos) and γενέσθαι (genesthai). The adjective entheos translates to English as "full of the god, inspired, possessed," and is the root of the English word "enthusiasm." The Greeks used it as a term of praise for poets and other artists. Genesthai means "to come into being." Thus, an entheogen is a substance that causes one to become inspired or to experience feelings of inspiration, often in a religious or "spiritual" manner.

The word entheogen was coined in 1979 by a group of ethnobotanists and scholars of mythology (Carl A. P. Ruck, Jeremy Bigwood, Danny Staples, Richard Evans Schultes, Jonathan Ott and R. Gordon Wasson). The literal meaning of the word is "that which causes God to be within an individual." The translation "creating the divine within" is sometimes given, but it should be noted that entheogen implies neither that something is created (as opposed to just perceiving something that is already there) nor that the experience is within the user (as opposed to having independent existence).

It was coined as a replacement for the terms "hallucinogen" (popularized by Aldous Huxley's experiences with mescaline, published as The Doors of Perception in 1954) and "psychedelic" (a Greek neologism for "mind manifest," coined by psychiatrist Humphry Osmond, who was quite surprised when the well-known author, Aldous Huxley, volunteered to be a subject in experiments Osmond was running on mescaline). Ruck et al. argued that the term "hallucinogen" was inappropriate due to its etymological relationship to words relating to delirium and insanity. The term "psychedelic" was also seen as problematic, due to the similarity in sound to words pertaining to psychosis and also due to the fact that it had become irreversibly associated with various connotations of 1960s pop culture. In modern usage "entheogen" may be used synonymously with these terms, or it may be chosen to contrast with recreational use of the same substances.

The meanings of the term “entheogen” were formally defined by Ruck et al.:

In a strict sense, only those vision-producing drugs that can be shown to have figured in shamanic or religious rites would be designated entheogens, but in a looser sense, the term could also be applied to other drugs, both natural and artificial, that induce alterations of consciousness similar to those documented for ritual ingestion of traditional entheogens.

Since 1979, when the term was proposed, its use has become widespread in certain circles. In particular, the word fills a vacuum for those users of entheogens who feel that the term “hallucinogen,” which remains common in medical, chemical and anthropological literature, denigrates their experience and the world view in which it is integrated. Use of the strict sense of the word has, therefore, arisen amongst religious entheogen users, and also amongst others who wish to practice spiritual or religious tolerance.

The use of the word “entheogen” in its broad sense as a synonym for “hallucinogenic drug” has attracted criticism on three grounds:

Ideological objections to the broad use of the term often relate to the widespread existence of taboos surrounding psychoactive drugs, with both religious and secular justifications. The perception that the broad sense of the term “entheogen” is used as a euphemism by hallucinogenic drug-users bothers both critics and proponents of the secular use of hallucinogenic drugs. Critics frequently see the use of the term as an attempt to obscure what they perceive as illegitimate motivations and contexts of secular drug use. Some proponents also object to the term, arguing that the trend within their own subcultures and in the scientific literature towards the use of term “entheogen” as a synonym for “hallucinogen” devalues the positive uses of drugs in contexts that are secular but nevertheless, in their view, legitimate.

Beyond the use of the term itself, the validity of drug-induced, facilitated, or enhanced religious experience has been questioned. The claim that such experiences are less valid than religious experience without the use of any sacramental catalyst faces the problem that the descriptions of religious experiences by those using entheogens are indistinguishable from many reports of religious experiences which are presumed, in modern times, to have been experienced without their use. Such a claim, however, depends entirely on the assumption that the reports of well-known mystics were not influenced by ingesting visionary plants, a derivation which Dan Merkur calls into question.

In an attempt to empirically answer the question about whether neurochemical augmentation through entheogens may enable religio-mystical experience, the Marsh Chapel Experiment was conducted by physician and theology doctoral candidate, Walter Pahnke, under the supervision of Timothy Leary and the Harvard Psilocybin Project. In the double-blind experiment, volunteer graduate school divinity students from the Boston area almost all claimed to have had profound religious experiences subsequent to the ingestion of pure psilocybin. In 2006, a more rigorously controlled version of this experiment was conducted at Johns Hopkins University, yielding very similar results.[1] To date there is little peer-reviewed research on this subject, due to ongoing drug prohibition and the difficulty of getting approval from institutional review boards. However, there is little doubt that entheogens can enable powerful experiences that are subjectively judged as important in a religious or spiritual context. Rather, it is the precise characterization and quantification of these experiences, and of religious experience in general, that is not yet developed.

Naturally occurring entheogens such as psilocybin and N,N-dimethyltryptamine (DMT, found in the preparation ayahuasca) were discovered and used by older cultures, as part of their spiritual and religious life, as plants and agents which were respected or, in some cases, revered. By contrast, artificial and modern entheogens, such as MDMA, never had a tradition of religious use.

Entheogens have been used in various ways, including as part of established religious traditions, secularly for personal spiritual development, as tools (or “plant teachers”) to augment the mind,[2][3] secularly as recreational drugs, and for medical and therapeutic use.

The use of entheogens in human cultures is nearly ubiquitous throughout recorded history.

The best-known entheogen-using culture of Africa is the Bwitists, who used a preparation of the root bark of Iboga (Tabernanthe iboga).[4] A famous entheogen of ancient Egypt is the blue lotus (Nymphaea caerulea). There is evidence for the use of entheogenic mushrooms in Côte d'Ivoire (Samorini 1995). Numerous other plants used in shamanic ritual in Africa, such as Silene capensis sacred to the Xhosa, are yet to be investigated by Western science.

Entheogens have played a pivotal role in the spiritual practices of American cultures for millennia. The first American entheogen to be subject to scientific analysis was the peyote cactus (Lophophora williamsii). One of the founders of modern ethnobotany, the late Richard Evans Schultes of Harvard University, documented the ritual use of peyote cactus among the Kiowa who live in what became Oklahoma. Used traditionally by many cultures of what is now Mexico, its use spread throughout North America, replacing the toxic entheogen Sophora secundiflora (mescal bean). Other well-known entheogens used by Mexican cultures include psilocybin mushrooms (known to indigenous Mexicans under the Náhuatl name teonanácatl), the seeds of several morning glories (Náhuatl: tlitliltzin and ololiuhqui) and Salvia divinorum (Mazateco: Ska Pastora; Náhuatl: pipiltzintzintli).

Indigenous peoples of South America employ a wide variety of entheogens. Better-known examples include ayahuasca (Banisteriopsis caapi plus admixtures) among indigenous peoples (such as the Urarina) of Peruvian Amazonia. Other well-known entheogens include: borrachero (Brugmansia spp); San Pedro (Trichocereus spp); and various tryptamine-bearing snuffs, for example Epená (Virola spp), Vilca and Yopo (Anadenanthera spp). The familiar tobacco plant, when used uncured in large doses in shamanic contexts, also serves as an entheogen in South America. Additionally, Nicotiana rustica, a tobacco with a higher nicotine content (and therefore requiring smaller doses), was commonly used.

Over and above the indigenous use of entheogens in the Americas, one should also note their important role in contemporary religious movements, such as the Rastafari movement and the Church of the Universe.

The indigenous peoples of Siberia (from whom the term shaman was appropriated) have used the fly agaric mushroom (Amanita muscaria) as an entheogen. The ancient inebriant Soma, mentioned often in the Vedas, may have been an entheogen. (In his 1967 book, Wasson argues that Soma was fly agaric. The active ingredient of Soma is presumed by some to be ephedrine, an alkaloid with stimulant and (somewhat debatable) entheogenic properties derived from the soma plant, identified as Ephedra pachyclada.) However, there are also arguments to suggest that Soma could have also been Syrian Rue, Cannabis, or some combination of any of the above plants.

An early entheogen in Aegean civilization, predating the introduction of wine, which was the more familiar entheogen of the reborn Dionysus and the maenads, was fermented honey, known in Northern Europe as mead; its cult uses in the Aegean world are bound up with the mythology of the bee.

The extent of the use of visionary plants throughout European history has only recently been seriously investigated, since around 1960. The use of entheogens in Europe may have become greatly reduced by the time of the rise of Christianity. European witches used various entheogens, including thorn-apple (Datura), deadly nightshade (Atropa belladonna), mandrake (Mandragora officinarum) and henbane (Hyoscyamus niger). These plants were used, among other things, for the manufacture of “flying ointments.”

The growth of Roman Christianity also saw the end of the 2,000-year-old tradition of the Eleusinian Mysteries, the initiation ceremony for the cult of Demeter and Persephone involving the use of a possibly entheogenic substance known as kykeon. Similarly, there is evidence that nitrous oxide or ethylene may have been in part responsible for the visions of the equally long-lived Delphic oracle (Hale et al. 2003).

In ancient Germanic culture, cannabis was associated with the Germanic love goddess Freya. The harvesting of the plant was connected with an erotic high festival. It was believed that Freya lived as a fertile force in the plant's feminine flowers and by ingesting them one became influenced by this divine force. Similarly, fly agaric was consecrated to Odin, the god of ecstasy, while henbane stood under the dominion of the thunder god: Thor in Germanic mythology, and Jupiter among the Romans (Rätsch 2003).

An ancient entheogenic substance in the Middle East is hashish. Its use by the “Hashshashin” to stupefy and recruit new initiates was widely reported during the Crusades. However, the drug used by the Hashshashin was likely wine, opium, henbane, or some combination of these, and, in any event, the use of this drug was for stupefaction rather than for entheogenic use. It has been suggested that the ritual use of small amounts of Syrian Rue is an artifact of its ancient use in higher doses as an entheogen.

Philologist John Marco Allegro has argued in his book The Sacred Mushroom and the Cross that early Jewish and Christian cultic practice was based on the use of Amanita muscaria which was later forgotten by its adherents, though this hypothesis has not received much consideration or become widely accepted. Allegro’s hypothesis that Amanita use was forgotten after primitive Christianity seems contradicted by his own view that the chapel in Plaincourault shows evidence of Christian Amanita use in the 1200s.[5]

Indigenous Australians are generally thought not to have used entheogens, although there is a strong barrier of secrecy surrounding Aboriginal shamanism, which has likely limited what has been told to outsiders. There are no known uses of entheogens by the Māori of New Zealand. Natives of Papua New Guinea are known to use several species of entheogenic mushrooms (Psilocybe spp, Boletus manicus).[6]

Kava or Kava Kava (Piper methysticum) has been cultivated for at least 3,000 years by a number of Pacific island-dwelling peoples. Historically, most Polynesian, many Melanesian, and some Micronesian cultures have ingested the psychoactive pulverized root, typically mixed with water. Traditional use of kava, though somewhat suppressed by Christian missionaries in the nineteenth and twentieth centuries, is thought to facilitate contact with the spirits of the dead, especially relatives and ancestors (Singh 2004).

There have been several examples of the use of entheogens in the archaeological record. Researchers such as R. Gordon Wasson and Giorgio Samorini[7][8] have produced a plethora of evidence, which has not yet received enough consideration within academia. The first direct evidence of entheogen use comes from Tassili, Algeria, with a cave painting of a mushroom-man, dating to 8000 BP. Hemp seeds discovered by archaeologists at Pazyryk suggest early ceremonial practices by the Scythians occurred during the fifth to second century B.C.E., confirming previous historical reports by Herodotus.

Although entheogens are taboo and most of them are officially prohibited in Christian and Islamic societies, their ubiquity and prominence in the spiritual traditions of various other cultures is unquestioned. The entheogen, “the spirit, for example, need not be chemical, as is the case with the ivy and the olive: and yet the god was felt to be within them; nor need its possession be considered something detrimental, like drugged, hallucinatory, or delusionary: but possibly instead an invitation to knowledge or whatever good the god’s spirit had to offer” (Ruck and Staples).

Most of the well-known modern examples, such as peyote, Psilocybe and other psychoactive mushrooms, and ololiuhqui, are from the native cultures of the Americas. However, it has also been suggested that entheogens played an important role in ancient Indo-European culture, for example by inclusion in the ritual preparations of the Soma, the "pressed juice" that is the subject of Book 9 of the Rig Veda. Soma was ritually prepared and drunk by priests and initiates and elicited a paean in the Rig Veda that embodies the nature of an entheogen:

Splendid by Law! declaring Law, truth speaking, truthful in thy works, Enouncing faith, King Soma!… O [Soma] Pavamana, place me in that deathless, undecaying world wherein the light of heaven is set, and everlasting lustre shines…. Make me immortal in that realm where happiness and transports, where joy and felicities combine…

The Kykeon that preceded initiation into the Eleusinian Mysteries is another entheogen, which was investigated (before the word was coined) by Carl Kerényi, in Eleusis: Archetypal Image of Mother and Daughter. Other entheogens in the Ancient Near East and the Aegean include the poppy, Datura, the unidentified "lotus" eaten by the Lotus-Eaters in the Odyssey and Narkissos.

According to Ruck, Eyan, and Staples, the familiar shamanic entheogen that the Indo-Europeans brought with them was knowledge of the wild Amanita mushroom. It could not be cultivated; thus it had to be found, which suited it to a nomadic lifestyle. When they reached the world of the Caucasus and the Aegean, the Indo-Europeans encountered wine, the entheogen of Dionysus, who brought it with him from his birthplace in the mythical Nysa, when he returned to claim his Olympian birthright. The Indo-European proto-Greeks “recognized it as the entheogen of Zeus, and their own traditions of shamanism, the Amanita and the ‘pressed juice’ of Soma but better since no longer unpredictable and wild, the way it was found among the Hyperboreans: as befit their own assimilation of agrarian modes of life, the entheogen was now cultivable” (Ruck and Staples). Robert Graves, in his foreword to The Greek Myths, argues that the ambrosia of various pre-Hellenic tribes were amanita and possibly panaeolus mushrooms.

Amanita was divine food, according to Ruck and Staples, not something to be indulged in or sampled lightly, not something to be profaned. It was the food of the gods, their ambrosia, and it mediated between the two realms. It is said that Tantalus’s crime was inviting commoners to share his ambrosia.

The entheogen is believed to offer godlike powers in many traditional tales, including immortality. The failure of Gilgamesh in retrieving the plant of immortality from beneath the waters teaches that the blissful state cannot be taken by force or guile: when Gilgamesh lay on the bank, exhausted from his heroic effort, the serpent came and ate the plant.

Another attempt at subverting the natural order is told in a (according to some) strangely metamorphosed myth, in which natural roles have been reversed to suit the Hellenic world-view. The Alexandrian Apollodorus relates how Gaia (spelled “Ge” in the following passage), Mother Earth herself, has supported the Titans in their battle with the Olympian intruders. The Giants have been defeated:

When Ge learned of this, she sought a drug that would prevent their destruction even by mortal hands. But Zeus barred the appearance of Eos (the Dawn), Selene (the Moon), and Helios (the Sun), and chopped up the drug himself before Ge could find it.

According to The Living Torah, cannabis was an ingredient of holy anointing oil mentioned in various sacred Hebrew texts.[9] The herb of interest is most commonly known as kaneh-bosm (Hebrew: קנה-בשם). This is mentioned several times in the Old Testament as a bartering material, incense, and an ingredient in holy anointing oil used by the high priest of the temple. Although Chris Bennett's research in this area focuses on cannabis, he mentions evidence suggesting use of additional visionary plants such as henbane, as well.

The Septuagint translates kaneh-bosm as calamus, and this translation has been propagated unchanged to most later translations of the Hebrew Bible. However, Polish anthropologist Sula Benet published etymological arguments that the Aramaic word for hemp can be read as kannabos and appears to be a cognate to the modern word ‘cannabis’,[10] with the root kan meaning reed or hemp and bosm meaning fragrant. Both cannabis and calamus are fragrant, reedlike plants containing psychotropic compounds.

Although philologist John Marco Allegro has suggested that the self-revelation and healing abilities attributed to the figure of Jesus may have been associated with the effects of the plant medicines [from the Aramaic: “to heal”], this evidence is dependent on pre-Septuagint interpretation of Torah, and goes firmly against the accepted teachings of the Holy See. However Merkur contends that a minority of Christian hermits and mystics could possibly have used entheogens, in conjunction with fasting, meditation and prayer.

Allegro was the only non-Catholic appointed to the position of translating the Dead Sea Scrolls. His extrapolations are often the object of scorn due to Allegro's theory of Jesus as a mythological personification of the essence of the psychoactive sacrament; furthermore, they seem to conflict with the position of the Catholic Church with regard to the exclusivity of the practice of transubstantiation and the endorsement of alcohol ingestion as the exclusive means to attain communion with God. Allegro's book, The Sacred Mushroom and the Cross, relates the development of language to the development of myths, religions and cultic practices in world cultures. Allegro believed he could prove, through etymology, that the roots of Christianity, as of many other religions, lay in fertility cults; and that cult practices, such as ingesting visionary plants (or "psychedelics") to perceive the Mind of God [Avestan: Vohu Mana], persisted into the early Christian era, and to some unspecified extent into the 1200s with recurrences in the 1700s and mid-1900s, as he interprets the Plaincourault chapel's fresco to be an accurate depiction of the ritual ingestion of Amanita muscaria as the Eucharist.

The historical picture portrayed by the Entheos journal is of fairly widespread use of visionary plants in early Christianity and the surrounding culture, with a gradual reduction of use of entheogens in Christianity.[11] R. Gordon Wasson’s book Soma prints a letter from art historian Erwin Panofsky asserting that art scholars are aware of many ‘mushroom trees’ in Christian art.[12]

The question of the extent of visionary plant use throughout the history of Christian practice has barely been considered yet by academic or independent scholars. The question of whether visionary plants were used in pre-Theodosius Christianity is distinct from evidence that indicates the extent to which visionary plants were utilized or forgotten in later Christianity, including so-called “heretical” or “quasi-” Christian groups,[13] and the question of other groups such as elites or laity within “orthodox” Catholic practice.

James Arthur asserts that the little scroll with writing on it, given by the angel and referred to in Ezekiel 2:8-10, Ezekiel 3:1-3, and Revelation 10:9-10, was the speckled cap of the Amanita muscaria mushroom.[14]

The substance melange (spice) in Frank Herbert's Dune universe acts as both an entheogen and a geriatric medicine. Control of the supply of melange was crucial to the Empire, as it was necessary for, among other things, faster-than-light navigation.

Consumption of the imaginary mushroom anochi as the entheogen underlying the creation of Christianity is the premise of Philip K. Dick’s last novel, The Transmigration of Timothy Archer, a theme which seems to be inspired by John Allegro’s book.

Aldous Huxley's final novel, Island (1962), depicted a fictional entheogenic mushroom, termed "moksha medicine," used by the people of Pala in rites of passage, such as the transition to adulthood and at the end of life.

Bruce Sterling's novel Holy Fire depicts a future religion that arose as a result of entheogens, which are used freely by the population.

In Stephen King’s The Gunslinger, Book 1 of The Dark Tower series, the main character receives guidance after taking mescaline.

The Alastair Reynolds novel Absolution Gap features a moon under the control of a religious government which uses neurological viruses to induce religious faith.

All links retrieved September 23, 2013.

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.

Note: Some restrictions may apply to use of individual images which are separately licensed.

More here:

Entheogen – New World Encyclopedia

Posted in Entheogens | Comments Off on Entheogen – New World Encyclopedia

Executive Team | www.extropy.com

Posted: October 1, 2016 at 1:48 am

Nicholas Montera – Chief Executive Officer (CEO) and Managing Partner

Nicholas is one of the founding members of Extropy and is a leading information technology expert with extensive experience in large-scale IT initiatives. In his role as CEO, Nicholas has developed Extropy's strategic approach and focused its efforts on providing world-class professional services and solutions. His vision is for Extropy to recruit and cultivate the best talent in the industry, expanding Extropy's "tribal knowledge" base to deliver innovative solutions founded in deep experience and due diligence. As Chief Executive Officer of Extropy, he provides leadership to the executive management team, ensuring that the company is focused on its mission and that its actions are in alignment with its core values. Nicholas has been deeply involved throughout his career in information technology planning, sales and engagement management. He came to form Extropy after successful careers with both British Telecom and Avaya. Nicholas attended the Florida Institute of Technology, studying Mechanical Engineering and Business Administration. He also holds the industry's highest networking certification, Cisco Certified Internetworking Expert (CCIE #11811).

Brett Coover – Chief Technology Officer (CTO) and Managing Partner

Brett is one of the founding members of Extropy and is a thought leader in solutions across many technologies and industries. As CTO, Brett focuses Extropy's "tribal knowledge" to define and refine our technology and business solutions, always staying ahead of the curve. His vision is centered on creating and delivering the most innovative technology solutions available and developing them into long-term growth opportunities. Brett has extensive experience in delivering innovative solutions to Fortune 500 customers, transforming their businesses as a trusted partner. Additionally, Brett has experience in developing and operating service provider organizations. Brett's educational experience spans the sciences, having attended Clarkson and the University of Buffalo studying physics, information and computer sciences. He also holds numerous certifications, including the industry's highest networking certification, Cisco Certified Internetworking Expert (CCIE #11918), and the most respected security certification, Certified Information Systems Security Professional (CISSP).

Do you know anyone at Extropy?

Go here to read the rest:

Executive Team | http://www.extropy.com

Posted in Extropy | Comments Off on Executive Team | www.extropy.com