Author Archives: AcachfumExcum

Euthanasia in the United States – Wikipedia, the free …

Posted: August 29, 2016 at 7:47 am

Euthanasia is illegal in most of the United States. Physician aid in dying (PAD), or assisted suicide, is legal in the states of Washington, Oregon, California, and Vermont;[1] its status is disputed in Montana. The key difference between euthanasia and PAD is who administers the lethal dose of medication: euthanasia entails the physician or another third party administering the medication, whereas PAD requires the patient to self-administer the medication and to determine whether and when to do so.[citation needed] Attempts to legalize PAD have resulted in ballot initiatives and legislative bills in the United States over the last 20 years. For example, Washington voters saw Ballot Initiative 119 in 1991, California placed Proposition 161 on the ballot in 1992, Oregon voters passed Measure 16 (the Death with Dignity Act) in 1994, Michigan included Proposal B on its ballot in 1998, and Washington’s Initiative 1000 passed in 2008. Vermont’s state legislature passed a bill making PAD legal in May 2013. However, on May 31, 2013, Maine’s state legislature rejected a similar bill (95-43).[citation needed]

Debates about the ethics of euthanasia and physician-assisted suicide date from ancient Greece and Rome. After the development of ether, physicians began advocating the use of anesthetics to relieve the pain of death. In 1870, Samuel Williams first proposed using anesthetics and morphine to intentionally end a patient’s life. Over the next 35 years, debates about euthanasia raged in the United States, culminating in an Ohio bill to legalize euthanasia in 1906, a bill that was ultimately defeated.[2]

Euthanasia advocacy in the U.S. peaked again during the 1930s and diminished significantly during and after World War II. Euthanasia efforts were revived during the 1960s and 1970s under the right-to-die rubric, through physician-assisted death in liberal bioethics, and through advance directives and do-not-resuscitate orders.

Several major court cases advanced the legal rights of patients, or their guardians, to practice at least voluntary passive euthanasia (physician-assisted death). These include the Karen Ann Quinlan case (1976) and the Brophy and Nancy Cruzan cases. More recent years have seen policies fine-tuned and restated, as with Washington v. Glucksberg (1997) and the Terri Schiavo case. The numerous legislative rulings and legal precedents that followed in the wake of the Quinlan case had their ethical foundation in the famous 1983 report of the President’s Commission for the Study of Ethical Problems in Medicine, titled “Deciding to Forgo Life-Sustaining Treatment” (Angell, Marcia. “How to Die in Massachusetts.” The New York Review of Books. 21 February 2013: 60.3. Web. 14 Jul. 2014.). The Commission found that it was morally acceptable to give up a life-supporting therapy, that withholding and withdrawing such a therapy are ethically equivalent, and that artificial feeding should be regarded like any other life-supporting therapy by patients and doctors. Before this report, withdrawing a medical therapy was regarded as a much more serious decision than not starting one at all, and artificial feeding was viewed as a special treatment. By 1990, barely a decade and a half after the New Jersey Supreme Court’s historic decision, patients were well aware that they could decline any form of medical therapy, either directly or by expressing their wishes through an appointed representative.

In a 2004 article in the Bulletin of the History of Medicine, Brown University historian Jacob M. Appel documented extensive political debate over legislation to legalize physician-assisted suicide in both Iowa and Ohio in 1906. The driving force behind this movement was social activist Anna S. Hall. Canadian historian Ian Dowbiggin’s 2003 book, A Merciful End, revealed the role that leading public figures, including Clarence Darrow and Jack London, played in advocating for the legalization of euthanasia.

In the 1983 case of Barber v. Superior Court, two physicians had honored a family’s request to withdraw both respirator and intravenous feeding and hydration tubes from a comatose patient. The physicians were charged with murder, despite the fact that they were doing what the family wanted. The court held that all charges should be dropped because the treatments had all been ineffective and burdensome. Withdrawal of treatment, even if life-ending, is morally and legally permitted. Competent patients or their surrogates can decide to withdraw treatments, usually after the treatments are found ineffective, painful, or burdensome.[3]

The California legislature passed a bill legalizing physician-assisted suicide in September 2015, and the bill was signed into law by Governor Jerry Brown on October 5, 2015.[4] The law went into effect in June 2016.[5]

On May 31, 2013, the Maine state legislature rejected decriminalization of physician assisted suicide and voluntary euthanasia (95-43).

On December 5, 2008, state District Court judge Dorothy McCarter ruled in favor of a terminally ill Billings resident who had filed a lawsuit with the assistance of Compassion & Choices, a patient rights group. The ruling states that competent, terminally ill patients have the right to self-administer lethal doses of medication as prescribed by a physician, and that physicians who prescribe such medications will not face legal punishment.[6] On December 31, 2009, the Montana Supreme Court delivered its verdict in the case of Baxter v. Montana. The court held that there was “nothing in Montana Supreme Court precedent or Montana statutes indicating that physician aid in dying is against public policy,” although prosecutions under the state’s assisted suicide statute are still possible.

In the United States, legal and ethical debates about euthanasia became more prominent in the case of Karen Ann Quinlan, who went into a coma after allegedly mixing tranquilizers with alcohol and survived biologically for nine years in a “persistent vegetative state” even after the New Jersey Supreme Court approved her removal from a respirator. This case caused widespread public concern about “lives not worth living” and the possibility of at least voluntary euthanasia if it could be ascertained that the patient would not have wanted to live in this condition.[7]

Measure 16 in 1994 established the Oregon Death with Dignity Act, which legalizes physician-assisted dying with certain restrictions, making Oregon the first U.S. state and one of the first jurisdictions in the world to officially do so. The measure was approved in the 8 November 1994 general election in a tight race with the final tally showing 627,980 votes (51.3%) in favor, and 596,018 votes (48.7%) against.[8] The law survived an attempted repeal in 1997, which was defeated at the ballot by a 60% vote.[9] In 2005, after several attempts by lawmakers at both the state and federal level to overturn the Oregon law, the Supreme Court of the United States ruled 6-3 to uphold the law after hearing arguments in the case of Gonzales v. Oregon.

In 1999, the state of Texas passed the Advance Directives Act. Under the law, in some situations, Texas hospitals and physicians have the right to withdraw life support measures, such as mechanical respiration, from terminally ill patients when such treatment is considered to be both futile and inappropriate. This is sometimes referred to as “passive euthanasia”.

In 2005, Sun Hudson, a six-month-old infant with thanatophoric dysplasia, a uniformly fatal disease, became the first case in which “a United States court has allowed life-sustaining treatment to be withdrawn from a pediatric patient over the objections of the child’s parent.”[10]

In 2008, the electorate of the state of Washington voted in favor of Initiative 1000 which made assisted suicide legal in the state through the Washington Death with Dignity Act.

On May 20, 2013, Vermont Governor Peter Shumlin signed a legislative bill making PAD legal in Vermont.

Attempts to legalize euthanasia and assisted suicide have resulted in ballot initiatives and legislative bills in the United States over the last 20 years. For example, Washington voters saw Ballot Initiative 119 in 1991, California placed Proposition 161 on the ballot in 1992, Oregon passed the Death with Dignity Act in 1994, and Michigan included Proposal B on its ballot in 1998. Despite the earlier failure, in November 2008 physician-assisted dying was approved in Washington by Initiative 1000.

In 2000, Maine voters defeated a referendum to legalize physician-assisted suicide. The proposal was defeated by a 51%-49% margin.

Reflecting the religious and cultural diversity of the United States, there is a wide range of public opinion about euthanasia and the right-to-die movement. Over the past 30 years, public opinion research has shown that views on euthanasia tend to correlate with religious affiliation and culture, though not with gender.

One recent study, dealing primarily with Christian denominations, found that Southern Baptists, Pentecostals, Evangelicals, and Catholics tended to be opposed to euthanasia. Moderate Protestants (e.g., Lutherans and Methodists) showed mixed views concerning end-of-life decisions in general. Both of these groups showed less support than non-affiliates, but were less opposed to it than conservative Protestants. Respondents who did not affiliate with a religion were found to support euthanasia more than those who did. Liberal Protestants (including some Presbyterians and Episcopalians) were the most supportive. In general, liberal Protestants affiliate more loosely with religious institutions, and their views were similar to those of non-affiliates. Within all groups, religiosity (i.e., self-evaluation and frequency of church attendance) also correlated with opinions on euthanasia. Individuals who attended church regularly and more frequently and considered themselves more religious were found to be more opposed to euthanasia than those with a lower level of religiosity.[11]

Recent studies have shown white Americans to be more accepting of euthanasia than black Americans. They are also more likely to have advance directives and to use other end-of-life measures.[12] Black Americans are almost three times more likely to oppose euthanasia than white Americans. Some speculate that this discrepancy is due to lower levels of trust in the medical establishment.[13] Some researchers believe that historical medical abuses toward minorities (such as the Tuskegee Syphilis Study) have made minority groups less trustful of the level of care they receive. One study also found that there are significant disparities in the medical treatment and pain management that white Americans and other Americans receive.[14]

Among black Americans, education correlates with support for euthanasia. Black Americans without a four-year degree are twice as likely to oppose euthanasia as those with at least that much education. Level of education, however, does not significantly influence attitudes in other racial groups in the US. Some researchers suggest that black Americans tend to be more religious, a claim that is difficult to substantiate and define.[13] Only black and white Americans have been studied in extensive detail. Although it has been found that minority groups are less supportive of euthanasia than white Americans, there is still some ambiguity as to what degree this is true.

A recent Gallup Poll found that 84% of males supported euthanasia compared to 64% of females.[15] Some cite the prior studies showing that women have a higher level of religiosity and moral conservatism as an explanation. Within both sexes, there are differences in attitudes towards euthanasia due to other influences. For example, one study found that black American women are 2.37 times more likely to oppose euthanasia than white American women. Black American men are 3.61 times more likely to oppose euthanasia than white American men.[16]

In “Gender, Feminism, and Death: Physician-Assisted Suicide and Euthanasia” Susan M. Wolf warns of the gender disparities if euthanasia or physician-assisted suicide were legal. Wolf highlights four possible gender effects: higher incidence of women than men dying by physician-assisted suicide; more women seeking physician-assisted suicide or euthanasia for different reasons than men; physicians granting or refusing requests for assisted suicide or euthanasia because of the gender of the patient; gender affecting the broad public debate by envisioning a woman patient when considering the debate.[17]


Four fundamentals of workplace automation | McKinsey & Company

Posted: August 27, 2016 at 7:13 pm


As the automation of physical and knowledge work advances, many jobs will be redefined rather than eliminated, at least in the short term.

The potential of artificial intelligence and advanced robotics to perform tasks once reserved for humans is no longer reserved for spectacular demonstrations by the likes of IBM’s Watson, Rethink Robotics’ Baxter, DeepMind, or Google’s driverless car. Just head to an airport: automated check-in kiosks now dominate many airlines’ ticketing areas. Pilots actively steer aircraft for just three to seven minutes of many flights, with autopilot guiding the rest of the journey. Passport-control processes at some airports can place more emphasis on scanning document bar codes than on observing incoming passengers.

What will be the impact of automation efforts like these, multiplied many times across different sectors of the economy? Can we look forward to vast improvements in productivity, freedom from boring work, and improved quality of life? Should we fear threats to jobs, disruptions to organizations, and strains on the social fabric?

Earlier this year, we launched research to explore these questions and investigate the potential that automation technologies hold for jobs, organizations, and the future of work. Our results to date suggest, first and foremost, that a focus on occupations is misleading. Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed, and jobs performed by people to be redefined, much like the bank teller’s job was redefined with the advent of ATMs.

More specifically, our research suggests that as many as 45 percent of the activities individuals are paid to perform can be automated by adapting currently demonstrated technologies. In the United States, these activities represent about $2 trillion in annual wages. Although we often think of automation primarily affecting low-skill, low-wage roles, we discovered that even the highest-paid occupations in the economy, such as financial managers, physicians, and senior executives, including CEOs, have a significant amount of activity that can be automated.

The organizational and leadership implications are enormous: leaders from the C-suite to the front line will need to redefine jobs and processes so that their organizations can take advantage of the automation potential that is distributed across them. And the opportunities extend far beyond labor savings. When we modeled the potential of automation to transform business processes across several industries, we found that the benefits (ranging from increased output to higher quality and improved reliability, as well as the potential to perform some tasks at superhuman levels) typically are between three and ten times the cost. The magnitude of those benefits suggests that the ability to staff, manage, and lead increasingly automated organizations will become an important competitive differentiator.

Our research is ongoing, and in 2016 we will release a detailed report. What follows here are four interim findings elaborating on the core insight that the road ahead is less about automating individual jobs wholesale than it is about automating the activities within occupations and redefining roles and processes.

These preliminary findings are based on data for the US labor market. We structured our analysis around roughly 2,000 individual work activities, and assessed the requirements for each of these activities against 18 different capabilities that potentially could be automated (Exhibit 1). Those capabilities range from fine motor skills and navigating in the physical world, to sensing human emotion and producing natural language. We then assessed the automatability of those capabilities through the use of current, leading-edge technology, adjusting the level of capability required for occupations where work occurs in unpredictable settings.
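As a minimal, hypothetical sketch of the kind of roll-up this methodology implies (not McKinsey’s actual model; the activities, capability names, and ratings below are invented for illustration), one could mark an activity as automatable only when every capability it requires is matched by currently demonstrated technology, then compute the time-weighted share:

```python
# Illustrative sketch only: estimate the automatable share of work time from
# per-activity capability requirements. All data here is made up.

# Required capability level (0-1) per activity, plus hours spent on it.
activities = {
    "process routine paperwork": {"requires": {"data processing": 0.6, "natural language": 0.4}, "hours": 30},
    "advise clients":            {"requires": {"sensing emotion": 0.8, "natural language": 0.7}, "hours": 20},
}

# Level that currently demonstrated technology is assumed to deliver.
tech_capability = {"data processing": 0.9, "natural language": 0.5, "sensing emotion": 0.2}

def automatable(activity):
    """An activity counts as automatable only if every required capability is met."""
    return all(tech_capability.get(cap, 0.0) >= level
               for cap, level in activity["requires"].items())

total_hours = sum(a["hours"] for a in activities.values())
auto_hours = sum(a["hours"] for a in activities.values() if automatable(a))
print(f"Automatable share of time: {auto_hours / total_hours:.0%}")
```

Raising the assumed level for natural language in a model of this kind is also how a what-if such as the additional 13 percent scenario mentioned below can be explored.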


The bottom line is that 45 percent of work activities could be automated using already demonstrated technology. If the technologies that process and understand natural language were to reach the median level of human performance, an additional 13 percent of work activities in the US economy could be automated. The magnitude of automation potential reflects the speed with which advances in artificial intelligence and its variants, such as machine learning, are challenging our assumptions about what is automatable. It’s no longer the case that only routine, codifiable activities are candidates for automation and that activities requiring tacit knowledge or experience that is difficult to translate into task specifications are immune to automation.

In many cases, automation technology can already match, or even exceed, the median level of human performance required. For instance, Narrative Science’s artificial-intelligence system, Quill, analyzes raw data and generates natural language, writing reports in seconds that readers would assume were written by a human author. Amazon’s fleet of Kiva robots is equipped with automation technologies that plan, navigate, and coordinate among individual robots to fulfill warehouse orders roughly four times faster than the company’s previous system. IBM’s Watson can suggest available treatments for specific ailments, drawing on the body of medical research for those diseases.

According to our analysis, fewer than 5 percent of occupations can be entirely automated using current technology. However, about 60 percent of occupations could have 30 percent or more of their constituent activities automated. In other words, automation is likely to change the vast majority of occupations, at least to some degree, which will necessitate significant job redefinition and a transformation of business processes. Mortgage-loan officers, for instance, will spend much less time inspecting and processing rote paperwork and more time reviewing exceptions, which will allow them to process more loans and spend more time advising clients. Similarly, in a world where the diagnosis of many health issues could be effectively automated, an emergency room could combine triage and diagnosis and leave doctors to focus on the most acute or unusual cases while improving accuracy for the most common issues.

As roles and processes get redefined, the economic benefits of automation will extend far beyond labor savings. Particularly in the highest-paid occupations, machines can augment human capabilities to a high degree, and amplify the value of expertise by increasing an individual’s work capacity and freeing the employee to focus on work of higher value. Lawyers are already using text-mining techniques to read through the thousands of documents collected during discovery, and to identify the most relevant ones for deeper review by legal staff. Similarly, sales organizations could use automation to generate leads and identify more likely opportunities for cross-selling and upselling, increasing the time frontline salespeople have for interacting with customers and improving the quality of offers.

Conventional wisdom suggests that low-skill, low-wage activities on the front line are the ones most susceptible to automation. We’re now able to scrutinize this view using the comprehensive database of occupations we created as part of this research effort. It encompasses not only occupations, work activities, capabilities, and their automatability, but also the wages paid for each occupation.

Our work to date suggests that a significant percentage of the activities performed by even those in the highest-paid occupations (for example, financial planners, physicians, and senior executives) can be automated by adapting current technology. For example, we estimate that activities consuming more than 20 percent of a CEO’s working time could be automated using current technologies. These include analyzing reports and data to inform operational decisions, preparing staff assignments, and reviewing status reports. Conversely, there are many lower-wage occupations such as home health aides, landscapers, and maintenance workers, where only a very small percentage of activities could be automated with technology available today (Exhibit 2).
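To make the wage dimension concrete, here is a small illustrative aggregation (again with invented occupations, shares, and wage totals, not figures from the research) showing how per-occupation automatable shares could be combined with wage bills:

```python
# Illustrative sketch only: wage-weighted automation potential across occupations.
# The names, shares, and wage figures below are invented for demonstration.
occupations = [
    # (name, share of activity time automatable, total annual wages in $ billions)
    ("senior executives",  0.20, 50.0),
    ("financial planners", 0.35, 40.0),
    ("home health aides",  0.05, 60.0),
    ("landscapers",        0.10, 30.0),
]

automatable_wages = sum(share * wages for _, share, wages in occupations)
total_wages = sum(wages for _, _, wages in occupations)
print(f"Wages tied to automatable activities: ${automatable_wages:.1f}B "
      f"({automatable_wages / total_wages:.0%} of the total wage bill)")
```

A roll-up of this kind is what lies behind aggregate figures such as the $2 trillion in annual wages mentioned earlier.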


Capabilities such as creativity and sensing emotions are core to the human experience and also difficult to automate. The amount of time that workers spend on activities requiring these capabilities, though, appears to be surprisingly low. Just 4 percent of the work activities across the US economy require creativity at a median human level of performance. Similarly, only 29 percent of work activities require a median human level of performance in sensing emotion.

While these findings might be lamented as reflecting the impoverished nature of our work lives, they also suggest the potential to generate a greater amount of meaningful work. This could occur as automation replaces more routine or repetitive tasks, allowing employees to focus more on tasks that utilize creativity and emotion. Financial advisors, for example, might spend less time analyzing clients’ financial situations, and more time understanding their needs and explaining creative options. Interior designers could spend less time taking measurements, developing illustrations, and ordering materials, and more time developing innovative design concepts based on clients’ desires.

These interim findings, emphasizing the clarity brought by looking at automation through the lens of work activities as opposed to jobs, are in no way intended to diminish the pressing challenges and risks that must be understood and managed. Clearly, organizations and governments will need new ways of mitigating the human costs, including job losses and economic inequality, associated with the dislocation that takes place as companies separate activities that can be automated from the individuals who currently perform them. Other concerns center on privacy, as automation increases the amount of data collected and dispersed. The quality and safety risks arising from automated processes and offerings also are largely undefined, while the legal and regulatory implications could be enormous. To take one case: who is responsible if a driverless school bus has an accident?

Nor do we yet have a definitive perspective on the likely pace of transformation brought by workplace automation. Critical factors include the speed with which automation technologies are developed, adopted, and adapted, as well as the speed with which organization leaders grapple with the tricky business of redefining processes and roles. These factors may play out differently across industries. Those where automation is mostly software based can expect to capture value much faster and at a far lower cost. (The financial-services sector, where technology can readily manage straight-through transactions and trade processing, is a prime example.) On the other hand, businesses that are capital or hardware intensive, or constrained by heavy safety regulation, will likely see longer lags between initial investment and eventual benefits, and their pace of automation may be slower as a result.

All this points to new top-management imperatives: keep an eye on the speed and direction of automation, for starters, and then determine where, when, and how much to invest in automation. Making such determinations will require executives to build their understanding of the economics of automation, the trade-offs between augmenting versus replacing different types of activities with intelligent machines, and the implications for human skill development in their organizations. The degree to which executives embrace these priorities will influence not only the pace of change within their companies, but also to what extent those organizations sharpen or lose their competitive edge.

Michael Chui is a principal at the McKinsey Global Institute, where James Manyika is a director; Mehdi Miremadi is a principal in McKinsey’s Chicago office.

The authors wish to thank McKinsey’s Rick Cavolo, Martin Dewhurst, Katy George, Andrew Grant, Sean Kane, Bill Schaninger, Stefan Spang, and Paul Willmott for their contributions to this article.


History of technology – Wikipedia, the free encyclopedia

Posted: at 7:13 pm

The history of technology is the history of the invention of tools and techniques, and it parallels other aspects of the history of humanity. Technology can refer to methods ranging from ones as simple as language and stone tools to the complex genetic engineering and information technology that has emerged since the 1980s.

New knowledge has enabled people to create new things, and conversely, many scientific endeavors are made possible by technologies which assist humans in travelling to places they could not previously reach, and by scientific instruments by which we study nature in more detail than our natural senses allow.

Since much of technology is applied science, technical history is connected to the history of science. Since technology uses resources, technical history is tightly connected to economic history. From those resources, technology produces other resources, including technological artifacts used in everyday life.

Technological change affects, and is affected by, a society’s cultural traditions. It is a force for economic growth and a means to develop and project economic, political and military power.

Many sociologists and anthropologists have created social theories dealing with social and cultural evolution. Some, like Lewis H. Morgan, Leslie White, and Gerhard Lenski, have declared technological progress to be the primary factor driving the development of human civilization. Morgan’s concept of three major stages of social evolution (savagery, barbarism, and civilization) can be divided by technological milestones, such as fire. White argued the measure by which to judge the evolution of culture was energy.[1]

For White, “the primary function of culture” is to “harness and control energy.” White differentiates between five stages of human development: in the first, people use the energy of their own muscles; in the second, they use the energy of domesticated animals; in the third, they use the energy of plants (the agricultural revolution); in the fourth, they learn to use the energy of natural resources such as coal, oil, and gas; and in the fifth, they harness nuclear energy. White introduced the formula P = E*T, where E is a measure of the energy consumed and T is a measure of the efficiency of the technical factors utilizing the energy. In his own words, “culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased”. Russian astronomer Nikolai Kardashev extrapolated this theory, creating the Kardashev scale, which categorizes the energy use of advanced civilizations.
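As a toy illustration of White’s formula (with invented numbers; White did not publish figures like these), each stage can be scored by multiplying per-capita energy by the efficiency of the technology that harnesses it:

```python
# Toy illustration of White's P = E * T, using made-up numbers.
# E: energy harnessed per capita per year (arbitrary units)
# T: efficiency of the technical means putting that energy to work (0-1)
stages = {
    "human muscle power":   {"E": 1.0,  "T": 0.2},
    "domesticated animals": {"E": 4.0,  "T": 0.3},
    "fossil fuels":         {"E": 50.0, "T": 0.4},
}

for name, s in stages.items():
    p = s["E"] * s["T"]  # P, White's measure of cultural development under his formula
    print(f"{name}: P = {p:.1f}")
```

The point of the formula is simply that P rises either when more energy is harnessed per capita or when the same energy is put to work more efficiently.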

Lenski’s approach focuses on information. The more information and knowledge (especially where it allows the shaping of the natural environment) a given society has, the more advanced it is. He identifies four stages of human development, based on advances in the history of communication. In the first stage, information is passed by genes. In the second, when humans gain sentience, they can learn and pass on information through experience. In the third, humans start using signs and develop logic. In the fourth, they can create symbols and develop language and writing. Advancements in communications technology translate into advancements in the economic system and political system, the distribution of wealth, social inequality, and other spheres of social life. He also differentiates societies based on their level of technology, communication, and economy:

In economics, productivity is a measure of technological progress. Productivity increases when fewer inputs (labor, energy, materials, or land) are used in the production of a unit of output.[2] Another indicator of technological progress is the development of new products and services, which is necessary to offset the unemployment that would otherwise result as labor inputs are reduced. In developed countries productivity growth has been slowing since the late 1970s; however, productivity growth was higher in some economic sectors, such as manufacturing.[3] For example, employment in manufacturing in the United States declined from over 30% in the 1940s to just over 10% 70 years later. Similar changes occurred in other developed countries. This stage is referred to as post-industrial.
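A minimal numeric sketch of that definition (hypothetical figures): if the same output is produced with fewer labor hours, measured productivity rises.

```python
# Hypothetical figures: productivity as output per unit of labor input.
def labor_productivity(output_units: float, labor_hours: float) -> float:
    return output_units / labor_hours

year_1 = labor_productivity(output_units=1000, labor_hours=500)  # 2.0 units/hour
year_2 = labor_productivity(output_units=1000, labor_hours=400)  # 2.5 units/hour
growth_pct = (year_2 / year_1 - 1) * 100
print(f"Productivity growth: {growth_pct:.0f}%")  # 25%
```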

In the late 1970s, sociologists and anthropologists like Alvin Toffler (author of Future Shock), Daniel Bell, and John Naisbitt advanced theories of post-industrial societies, arguing that the current era of industrial society is coming to an end and that services and information are becoming more important than industry and goods. Some extreme visions of the post-industrial society, especially in fiction, are strikingly similar to visions of near- and post-Singularity societies.

The following is a summary of the history of technology by time period and geography:


During most of the Paleolithic – the bulk of the Stone Age – all humans had a lifestyle which involved limited tools and few permanent settlements. The first major technologies were tied to survival, hunting, and food preparation. Stone tools and weapons, fire, and clothing were technological developments of major importance during this period.

Human ancestors have been using stone and other tools since long before the emergence of Homo sapiens approximately 200,000 years ago.[4] The earliest methods of stone tool making, known as the Oldowan “industry”, date back to at least 2.3 million years ago,[5] with the earliest direct evidence of tool usage found in Ethiopia within the Great Rift Valley, dating back to 2.5 million years ago.[6] This era of stone tool use is called the Paleolithic, or “Old stone age”, and spans all of human history up to the development of agriculture approximately 12,000 years ago.

To make a stone tool, a “core” of hard stone with specific flaking properties (such as flint) was struck with a hammerstone. This flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers.[7] These tools greatly aided the early humans in their hunter-gatherer lifestyle to perform a variety of tasks including butchering carcasses (and breaking bones to get at the marrow); chopping wood; cracking open nuts; skinning an animal for its hide; and even forming other tools out of softer materials such as bone and wood.[8]

The earliest stone tools were crude, being little more than a fractured rock. In the Acheulian era, beginning approximately 1.65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. The final period of the Stone Age before the development of agriculture is described as the Epipaleolithic or Mesolithic; the former term is generally used in areas with limited glacial impact.

The Middle Paleolithic, approximately 300,000 years ago, saw the introduction of the prepared-core technique, where multiple blades could be rapidly formed from a single core stone.[7] The Upper Paleolithic, beginning approximately 40,000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely.[9]

The later Stone Age, during which the rudiments of agricultural technology were developed, is called the Neolithic period. During this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunnelling underground, the first steps in mining technology. The polished axes were used for forest clearance and the establishment of crop farming, and were so effective as to remain in use when bronze and iron appeared.

Stone Age cultures developed music, and engaged in organized warfare. Stone Age humans developed ocean-worthy outrigger canoe technology, leading to migration across the Malay archipelago, across the Indian Ocean to Madagascar and also across the Pacific Ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation.

Although Paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. Such evidence includes ancient tools,[10] cave paintings, and other prehistoric art, such as the Venus of Willendorf. Human remains also provide direct evidence, both through the examination of bones, and the study of mummies. Scientists and historians have been able to form significant inferences about the lifestyle and culture of various prehistoric peoples, and especially their technology.

The Stone Age developed into the Bronze Age after the Neolithic Revolution. The Neolithic Revolution involved radical changes in agricultural technology which included development of agriculture, animal domestication, and the adoption of permanent settlements. These combined factors made possible the development of metal smelting, with copper and later bronze, an alloy of tin and copper, being the materials of choice, although polished stone tools continued to be used for a considerable time owing to their abundance compared with the less common metals (especially tin).

This technological trend apparently began in the Fertile Crescent, and spread outward over time. These developments were not, and still are not, universal. The three-age system does not accurately describe the technology history of groups outside of Eurasia, and does not apply at all in the case of some isolated populations, such as the Spinifex People, the Sentinelese, and various Amazonian tribes, which still make use of Stone Age technology, and have not developed agricultural or metal technology.

The Iron Age involved the adoption of iron smelting technology. Iron generally replaced bronze and made it possible to produce tools which were stronger, lighter, and cheaper to make than bronze equivalents. It was not possible to mass manufacture steel because the required high furnace temperatures could not be reached, but steel could be produced by forging bloomery iron to reduce the carbon content in a controllable way. Iron ores were much more widespread than either copper or tin. In many Eurasian cultures, the Iron Age was the last major step before the development of written language, though again this was not universally the case. In Europe, large hill forts were built either as a refuge in time of war or sometimes as permanent settlements. In some cases, existing forts from the Bronze Age were expanded and enlarged. The pace of land clearance using the more effective iron axes increased, providing more farmland to support the growing population.

It was the growth of the ancient civilizations which produced the greatest advances in technology and engineering, advances which stimulated other societies to adopt new ways of living and governance.

The Egyptians invented and used many simple machines, such as the ramp, to aid construction processes. The Indus Valley Civilization, situated in a resource-rich area, is notable for its early application of city planning and sanitation technologies. Ancient India was also at the forefront of seafaring technology; a panel found at Mohenjo-daro depicts a sailing craft. Indian construction and architecture, called ‘Vaastu Shastra’, suggests a thorough understanding of materials engineering, hydrology, and sanitation.

The peoples of Mesopotamia (Sumerians, Assyrians, and Babylonians) have been credited with the invention of the wheel, but this is no longer certain. They lived in cities from c. 4000 BC,[11] and developed a sophisticated architecture in mud-brick and stone,[12] including the use of the true arch. The walls of Babylon were so massive they were cited as a Wonder of the World. They developed extensive water systems: canals for transport and irrigation in the alluvial south, and catchment systems stretching for tens of kilometres in the hilly north. Their palaces had sophisticated drainage systems.[13]

Writing was invented in Mesopotamia, using cuneiform script. Many records on clay tablets and stone inscriptions have survived. These civilizations were early adopters of bronze technologies, which they used for tools, weapons, and monumental statuary. By 1200 BC they could cast objects 5 m long in a single piece. The Assyrian King Sennacherib (704-681 BC) claims to have invented automatic sluices and to have been the first to use water screws, of up to 30 tons weight, which were cast using two-part clay moulds rather than by the ‘lost wax’ process.[13] The Jerwan Aqueduct (c. 688 BC) is made with stone arches and lined with waterproof concrete.[14]

The Babylonian astronomical diaries spanned 800 years. They enabled meticulous astronomers to plot the motions of the planets and to predict eclipses.[15]

The Chinese made many first-known discoveries and developments. Major technological contributions from China include early seismological detectors, matches, paper, sliding calipers, the double-action piston pump, cast iron, the iron plough, the multi-tube seed drill, the wheelbarrow, the suspension bridge, the parachute, natural gas as fuel, the compass, the raised-relief map, the propeller, the crossbow, the South Pointing Chariot and gunpowder.

Other Chinese discoveries and inventions from the medieval period include block printing, movable type printing, phosphorescent paint, the endless power chain drive, and the clock escapement mechanism. The solid-fuel rocket was invented in China about 1150, nearly 200 years after the invention of gunpowder (which acted as the rocket’s fuel). Decades before the West’s age of exploration, the Chinese emperors of the Ming Dynasty also sent large fleets on maritime voyages, some reaching Africa.

Greek and Hellenistic engineers were responsible for myriad inventions and improvements to existing technology. The Hellenistic period in particular saw a sharp increase in technological advancement, fostered by a climate of openness to new ideas, the blossoming of a mechanistic philosophy, and the establishment of the Library of Alexandria and its close association with the adjacent museion. In contrast to the typically anonymous inventors of earlier ages, ingenious minds such as Archimedes, Philo of Byzantium, Heron, Ctesibius, and Archytas remain known by name to posterity.

Ancient Greek innovations were particularly pronounced in mechanical technology, including the ground-breaking invention of the watermill, which constituted the first human-devised motive force not to rely on muscle power (besides the sail). Apart from their pioneering use of waterpower, Greek inventors were also the first to experiment with wind power (see Heron’s windwheel) and even created the earliest steam engine (the aeolipile), opening up entirely new possibilities in harnessing natural forces whose full potential would not be exploited until the Industrial Revolution. The newly devised right-angled gear and screw would become particularly important to the operation of mechanical devices. This marked the start of the age of mechanical devices.

Ancient agriculture, which in any period prior to the modern age was the primary mode of production and subsistence, and its irrigation methods were considerably advanced by the invention and widespread application of a number of previously unknown water-lifting devices, such as the vertical water-wheel, the compartmented wheel, the water turbine, Archimedes’ screw, the bucket-chain and pot-garland, the force pump, the suction pump, the double-action piston pump, and quite possibly the chain pump.[16]

In music, the water organ, invented by Ctesibius and subsequently improved, constituted the earliest instance of a keyboard instrument. In time-keeping, the introduction of the inflow clepsydra and its mechanization by the dial and pointer, the application of a feedback system and the escapement mechanism far superseded the earlier outflow clepsydra.

The famous Antikythera mechanism, a kind of analog computer working with a differential gear, and the astrolabe both show great refinement in astronomical science.

Greek engineers were also the first to devise automata such as vending machines, suspended ink pots, automatic washstands and doors, primarily as toys, which however featured many new useful mechanisms such as the cam and gimbals.

In other fields, ancient Greek inventions include the catapult and the gastraphetes crossbow in warfare, hollow bronze-casting in metallurgy, the dioptra for surveying, and, in infrastructure, the lighthouse, central heating, the tunnel excavated from both ends by scientific calculations, the ship trackway, the dry dock, and plumbing. In horizontal and vertical transport, great progress resulted from the invention of the crane, the winch, the wheelbarrow, and the odometer.

Further newly created techniques and items were spiral staircases, the chain drive, sliding calipers and showers.

The Romans developed an intensive and sophisticated agriculture, expanded upon existing iron working technology, created laws providing for individual ownership, advanced stone masonry technology, advanced road-building (exceeded only in the 19th century), military engineering, civil engineering, spinning and weaving and several different machines like the Gallic reaper that helped to increase productivity in many sectors of the Roman economy. Roman engineers were the first to build monumental arches, amphitheatres, aqueducts, public baths, true arch bridges, harbours, reservoirs and dams, vaults and domes on a very large scale across their Empire. Notable Roman inventions include the book (Codex), glass blowing and concrete. Because Rome was located on a volcanic peninsula, with sand which contained suitable crystalline grains, the concrete which the Romans formulated was especially durable. Some of their buildings have lasted 2000 years, to the present day.

The engineering skills of the Inca and the Maya were great, even by today’s standards. An example is their stonework, which used pieces weighing upwards of one ton placed together so closely that not even a blade can fit between the cracks. The villages used irrigation canals and drainage systems, making agriculture very efficient. While some claim that the Incas were the first inventors of hydroponics, their agricultural technology was still soil based, if advanced. Though the Maya civilization had no metallurgy or wheel technology, it developed complex writing and astronomical systems and created sculptural works in stone and flint. Like the Inca, the Maya also had command of fairly advanced agricultural and construction technology. The main contribution of Aztec rule was a system of communications between the conquered cities. In Mesoamerica, without draft animals for transport (nor, as a result, wheeled vehicles), the roads were designed for travel on foot, as in the Inca and Maya civilizations.

As earlier empires had done, the Muslim caliphates united in trade large areas that had previously traded little. The conquered sometimes paid lower taxes than in their earlier independence, and ideas spread even more easily than goods. Peace was more frequent than it had been. These conditions fostered improvements in agriculture and other technology, as well as in sciences that were largely adapted, with improvements, from the earlier Greek, Roman, and Persian empires.

European technology in the Middle Ages may be best described as a symbiosis of traditio et innovatio. While medieval technology has long been depicted as a step backwards in the evolution of Western technology, sometimes willfully so by modern authors intent on denouncing the church as antagonistic to scientific progress (see e.g. the Myth of the Flat Earth), a generation of medievalists around the American historian of science Lynn White stressed from the 1940s onwards the innovative character of many medieval techniques. Genuine medieval contributions include, for example, mechanical clocks, spectacles, and vertical windmills. Medieval ingenuity was also displayed in the invention of seemingly inconspicuous items like the watermark or the functional button. In navigation, the foundation for the subsequent age of exploration was laid by the introduction of pintle-and-gudgeon rudders, lateen sails, the dry compass, the horseshoe, and the astrolabe.

Significant advances were also made in military technology with the development of plate armour, steel crossbows, counterweight trebuchets and cannon. The Middle Ages are perhaps best known for their architectural heritage: While the invention of the rib vault and pointed arch gave rise to the high rising Gothic style, the ubiquitous medieval fortifications gave the era the almost proverbial title of the ‘age of castles’.

Papermaking, a 2nd-century Chinese technology, was carried to the Middle East when a group of Chinese papermakers were captured in the 8th century.[17] Papermaking technology was spread to Europe by the Umayyad conquest of Hispania.[18] A paper mill was established in Sicily in the 12th century. In Europe the fiber to make pulp for making paper was obtained from linen and cotton rags. Lynn White credited the spinning wheel with increasing the supply of rags, which led to cheap paper, which was a factor in the development of printing.[19]

The era is marked by such profound technical advancements as linear perspective, double-shell domes, and bastion fortresses. Notebooks of Renaissance artist-engineers such as Taccola and Leonardo da Vinci give a deep insight into the mechanical technology then known and applied. Architects and engineers were inspired by the structures of Ancient Rome, and men like Brunelleschi created the large dome of Florence Cathedral as a result. He was awarded one of the first patents ever issued in order to protect an ingenious crane he designed to raise the large masonry stones to the top of the structure. Military technology developed rapidly with the widespread use of the crossbow and ever more powerful artillery, as the city-states of Italy were usually in conflict with one another. Powerful families like the Medici were strong patrons of the arts and sciences. Renaissance science spawned the Scientific Revolution; science and technology began a cycle of mutual advancement.

The invention of the movable cast-metal-type printing press (c. 1441), whose pressing mechanism was adapted from an olive screw press, led to a tremendous increase in the number of books and the number of titles published.

An improved sailing ship, the nau or carrack, enabled the Age of Exploration and the European colonization of the Americas, epitomized by Francis Bacon’s New Atlantis. Pioneers like Vasco da Gama, Cabral, Magellan, and Christopher Columbus explored the world in search of new trade routes for their goods and contacts with Africa, India, and China, to shorten the journey compared with traditional overland routes. They produced new maps and charts which enabled following mariners to explore further with greater confidence. Navigation was generally difficult, however, owing to the problem of longitude and the absence of accurate chronometers. European powers also rediscovered the idea of the civil code, lost since the time of the Ancient Greeks.

The British Industrial Revolution is characterized by developments in the areas of textile manufacturing, mining, metallurgy and transport driven by the development of the steam engine. Above all else, the revolution was driven by cheap energy in the form of coal, produced in ever-increasing amounts from the abundant resources of Britain. Coal converted to coke gave the blast furnace and cast iron in much larger amounts than before, and a range of structures could be created, such as The Iron Bridge. Cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. The steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. The development of the high-pressure steam engine made locomotives possible, and a transport revolution followed.[20]

The 19th century saw astonishing developments in transportation, construction, manufacturing, and communication technologies originating in Europe. The steam engine, which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. The Liverpool and Manchester Railway, the first purpose-built railway line, opened in 1830, with Robert Stephenson’s Rocket being one of its first working locomotives. Telegraphy also developed into a practical technology in the 19th century to help run the railways safely.

Other technologies were explored for the first time, including the incandescent light bulb. The invention of the incandescent light bulb had a profound effect on the workplace because factories could now have second- and third-shift workers. Manufacture of ships’ pulley blocks by all-metal machines at the Portsmouth Block Mills instigated the age of mass production. The use of machine tools by engineers to manufacture parts began in the first decade of the century, notably through the work of Richard Roberts and Joseph Whitworth. The development of interchangeable parts through what is now called the American system of manufacturing began in the firearms industry at U.S. federal arsenals in the early 19th century, and became widely used by the end of the century.

Shoe production was mechanized and sewing machines introduced around the middle of the 19th century. Mass production of sewing machines and agricultural machinery such as reapers occurred in the mid to late 19th century. Bicycles were mass-produced beginning in the 1880s.

Steam-powered factories became widespread, although the conversion from water power to steam occurred in England before it did in the U.S.

Steamships were eventually completely iron-clad, and played a role in the opening of Japan and China to trade with the West. The Second Industrial Revolution at the end of the 19th century saw rapid development of chemical, electrical, petroleum, and steel technologies connected with highly structured technology research.

The period from the last third of the 19th century until WW1 is sometimes referred to as the Second Industrial Revolution.

20th century technology developed rapidly. Broad teaching and implementation of the scientific method, and increased research spending contributed to the advancement of modern science and technology. New technology improved communication and transport, thus spreading technical understanding.

Mass production brought automobiles and other high-tech goods to masses of consumers. Military research and development sped advances including electronic computing and jet engines. Radio and telephony improved greatly and spread to larger populations of users, though near-universal access would not be possible until mobile phones became affordable to developing world residents in the late 2000s and early 2010s.

Energy and engine technology improvements included nuclear power, developed after the Manhattan project which heralded the new Atomic Age. Rocket development led to long range missiles and the first space age that lasted from the 1950s with the launch of Sputnik to the mid-1980s.

Electrification spread rapidly in the 20th century. At the beginning of the century electric power was for the most part only available to wealthy people in a few major cities such as New York, London, Paris, and Newcastle upon Tyne, but by the time the World Wide Web was invented in 1990, an estimated 62 percent of homes worldwide had electric power, including about a third of households in the rural developing world.[21]

Birth control also became widespread during the 20th century. Electron microscopes were very powerful by the late 1970s, and genetic theory and knowledge were expanding, leading to developments in genetic engineering.

The first “test tube baby”, Louise Brown, was born in 1978, which led to the first successful gestational surrogacy pregnancy in 1985 and the first pregnancy by ICSI (the injection of a single sperm into an egg) in 1991. Preimplantation genetic diagnosis was first performed in late 1989 and led to successful births in July 1990. These procedures have become relatively common and are changing the concept of what it means to be a parent.

The massive data analysis resources necessary for running transatlantic research programs such as the Human Genome Project and the Large Electron-Positron Collider led to a necessity for distributed communications, causing Internet protocols to be more widely adopted by researchers and also creating a justification for Tim Berners-Lee to create the World Wide Web.

Vaccination spread rapidly to the developing world from the 1980s onward due to many successful humanitarian initiatives, greatly reducing childhood mortality in many poor countries with limited medical resources.

The US National Academy of Engineering, by expert vote, established the following ranking of the most important technological developments of the 20th century:[22]

In the early 21st century research is ongoing into quantum computers, gene therapy (introduced 1990), 3D printing (introduced 1981), nanotechnology (introduced 1985), bioengineering/biotechnology, nuclear technology, advanced materials (e.g., graphene), the scramjet and drones (along with railguns and high-energy laser beams for military uses), superconductivity, the memristor, and green technologies such as alternative fuels (e.g., fuel cells, self-driving electric & plug-in hybrid cars), augmented reality devices and wearable electronics, artificial intelligence, and more efficient & powerful LEDs, solar cells, integrated circuits, wireless power devices, engines, and batteries.

Perhaps the greatest research tool built in the 21st century is the Large Hadron Collider, the largest single machine ever built. The understanding of particle physics is expected to expand with better instruments, including larger particle accelerators such as the LHC[23] and better neutrino detectors. Dark matter is sought via underground detectors, and observatories like LIGO have started to detect gravitational waves.

Genetic engineering technology continues to improve, and the importance of epigenetics on development and inheritance has also become increasingly recognized.[24]

New spaceflight technology and spacecraft are also being developed, like the Orion and Dragon. New, more capable space telescopes are being designed. The International Space Station was completed in the 2000s, and NASA and ESA plan a manned mission to Mars in the 2030s. The Variable Specific Impulse Magnetoplasma Rocket (VASIMR) is an electro-magnetic thruster for spacecraft propulsion and is expected to be tested in 2015.

The first manned commercial spaceflight took place when Mike Melvill crossed the boundary of space on June 21, 2004.

Originally posted here:

History of technology – Wikipedia, the free encyclopedia

Posted in Technology | Comments Off on History of technology – Wikipedia, the free encyclopedia

NOAA Ocean Explorer: Technology

Posted: at 7:13 pm

Today's technologies allow us to explore the ocean in increasingly systematic, scientific, and noninvasive ways. With continuing scientific and technological advances, our ability to observe the ocean environment and its resident creatures is beginning to catch up with our imaginations, expanding our understanding and appreciation of this still largely unexplored realm.

This section of the Ocean Explorer website highlights the technologies that make today’s explorations possible and the scientific achievements that result from these explorations. Technologies include platforms such as vessels and submersibles, observing systems and sensors, communication technologies, and diving technologies that transport us across ocean waters and into the depths, allowing us to scientifically examine, record, and analyze the mysteries of the ocean.

From onboard equipment that collects weather and ocean information to the divers, submersibles, and other instruments deployed from a ship, vessels are the most critical tool for scientists when it comes to exploring the ocean.

Darkness, cold, and crushing pressures have challenged the most experienced engineers to develop submersibles that descend to sea floor depths that are not safe for divers, allowing us to explore ocean depths firsthand, make detailed observations, and collect samples of unexplored ecosystems.

Scientists rely on an array of tools to collect weather and ocean observations such as water temperatures and salinities, the shape of the seafloor, and the speed of currents. Using these tools to record and monitor water column conditions and to collect samples for analysis allows scientists to enhance our understanding of the ocean.

Technologies that allow scientists to collaborate and transmit data more quickly and to a greater number of users are changing the way that we explore. From telepresence to shipboard computers, these technologies are increasing the pace, efficiency, and scope of ocean exploration.

When depths are not too great or conditions are not too unsafe, divers can descend into the water to explore the ocean realm. It is only through relatively recent advances in technology that this type of exploration has been possible.

These pages offer a comprehensive look at NOAA’s history of ocean exploration through a series of chronological essays. Also included is a rich selection of historical quotations, arranged thematically, that capture the many advances, challenges, and misunderstandings through the years as both early and modern explorers struggled to study the mysterious ocean realm.

Read more here:

NOAA Ocean Explorer: Technology

Posted in Technology | Comments Off on NOAA Ocean Explorer: Technology

Amazon.com: Tor: Tor Browser: Anonymous Surfing Ultimate …

Posted: at 7:09 pm

All You Ever Needed And Wanted To Know about the Tor Browser and how it relates to Internet Security

With The Complete Tor Browser Guide, you’ll learn everything that you need to know about the Tor Browser and how it relates to internet security. When you enter the online world you are putting yourself at risk every time you log on. With the Tor Browser you can safely shield your identity from those who might want to take advantage of you. The Tor Browser bounces your internet communications across the Tor Network to ensure:

Anonymity, Tor Browser, Tor Relays, Hidden Services, Security, Total Privacy, Tor Abuse

When you install the Tor Browser on your local computer, you are ensuring yourself total privacy while you are on the internet. To maintain total anonymity while browsing the internet, your information will pass through a minimum of three Tor relays before it reaches its final destination. Tor users can set up relays to become part of the Tor network, but they can also configure hidden services to offer even more security. As with any privacy software there is a potential for abuse, but actual Tor abuse is minimal.

One of the greatest things about the Tor Browser is that it can be used by anybody looking to protect their privacy. Unlike other similar programs, the Tor Browser offers users total anonymity and security. Ordinary people can use it to protect themselves from identity thieves, while law enforcement and military personnel can use it to gather intelligence for investigations and undercover operations. With the Tor Browser, nobody will know who you are or where you are.
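To make the relay-routing idea above concrete, here is a minimal sketch of sending a web request through a locally running Tor client via its SOCKS interface. The default SOCKS port (9050), the use of the requests library with SOCKS support, and the check.torproject.org test URL are illustrative assumptions, not details taken from the book description.

```python
# Minimal sketch: routing an HTTP request through a locally running Tor client.
# Assumes Tor is listening on its default SOCKS port (9050) and that the
# 'requests' library is installed with SOCKS support (pip install requests[socks]).
import requests

TOR_SOCKS_PROXY = "socks5h://127.0.0.1:9050"  # 'socks5h' also resolves DNS through Tor

proxies = {
    "http": TOR_SOCKS_PROXY,
    "https": TOR_SOCKS_PROXY,
}

# The request is encrypted on your machine, passes through (at least) three Tor
# relays, and exits from a relay whose IP address is unrelated to yours.
response = requests.get("https://check.torproject.org/", proxies=proxies, timeout=60)
print(response.status_code)
```

If the Tor service is running, the page returned by check.torproject.org should confirm whether the request actually arrived over the Tor network.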

The rest is here:
Amazon.com: Tor: Tor Browser: Anonymous Surfing Ultimate …

Posted in Tor Browser | Comments Off on Amazon.com: Tor: Tor Browser: Anonymous Surfing Ultimate …

Space Travel Facts for Kids

Posted: August 25, 2016 at 4:32 pm

A few hundred years ago, traveling over the Earth's surface was a risky adventure. Early explorers who set out to explore the New World went by boat, enduring fierce storms, disease and hunger, to reach their destinations. Today, astronauts exploring space face similar challenges.

All About Space Travel: One space shuttle launch costs $450 million

Space travel has become much safer as scientists have overcome potential problems, but it's still dangerous. It's also very expensive. In order for a space shuttle to break free of Earth's gravity, it has to travel at a speed of 15,000 miles per hour. Space shuttles need 1.9 million liters of fuel just to launch into space. That's enough fuel to fill up 42,000 cars! Combine the high speed, heat, and fuel needed for launching and you've got a potentially very dangerous situation.
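As a quick sanity check of the comparison above, the sketch below divides the quoted 1.9 million liters by the quoted 42,000 cars. The roughly 45-liter tank it implies is an assumed typical passenger-car tank size, not a figure from the text.

```python
# Sanity check of the figures quoted above: 1.9 million liters vs. 42,000 cars.
shuttle_fuel_liters = 1_900_000
cars_quoted = 42_000

liters_per_car = shuttle_fuel_liters / cars_quoted
print(f"Implied fuel per car: {liters_per_car:.1f} liters")  # ~45.2 L, a plausible tank size
```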

In 1949, Albert II, a rhesus monkey, went to space. Keep reading to find out more about space travel.

Re-entering the atmosphere is dangerous too. When a spacecraft re-enters the atmosphere, it is moving very fast. As it moves through the air, friction causes it to heat up to a temperature of 2,691 degrees. The first spacecraft were destroyed during re-entry. Today's space shuttles have special ceramic tiles that help absorb some of the heat, keeping the astronauts safe during re-entry.

In 1957, the Russian space dog, Laika, orbited the Earth.

In 1959, the Russian spacecraft Luna 2 reached the moon, crashing into its surface at high speed.

Russian cosmonaut Yuri Gagarin was the first human in space. He orbited the Earth in 1961.

On July 20, 1969, Neil Armstrong and Buzz Aldrin became the first men to walk on the moon and return home safely, a journey of 250,000 miles.

Check out this cool video all about space travel:

A video about the N.E.X.T. mission for space travel by NASA.


Continued here:

Space Travel Facts for Kids

Posted in Space Travel | Comments Off on Space Travel Facts for Kids

Offshore | World Oil Online

Posted: August 23, 2016 at 9:31 am

Cathelco will be providing a marine growth prevention system (MGPS) for the latest in a series of jackup rigs to be built by Lamprell, the UAE-based provider of fabrication, engineering and contracting services to the offshore and onshore oil & gas and renewable energy industries.

GE Oil & Gas has been awarded a multi-million-dollar Frame Agreement by Oil and Natural Gas Corporation Limited (ONGC), India's largest E&P company. Under the agreement, GE will provide an estimated 55 subsea wellheads (SG5) over the next three years for the operator's offshore drilling campaign, in shallow to medium waters offshore India.

Independent ROV service provider ROVOP is set to increase its Houston workforce as a result of further business growth including recent contract wins in the Gulf of Mexico region.

Seadrill has received a notice of termination from Pemex Exploracion y Servicios for the West Pegasus drilling contract effective Aug. 16. Seadrill has disputed the grounds for termination and is reviewing its legal options, the company said in a statement announcing the cancellation.

An Aberdeen-based well management firm has successfully completed the plug and abandonment of a platform well located offshore Italy.

Statoil and its partners have submitted the Plan for Development and Operation for the Byrding oil and gas discovery in the North Sea.

Lady Sponsor Gretchen H. Watkins, COO at Maersk Oil, has named Maersk Drilling's newest asset at a ceremony in Invergordon, Scotland. The Maersk Highlander, a harsh environment jackup rig, is now ready for work.

Exxon Mobil Corp., Chevron Corp. and Hess Corp. have agreed to bid together for rights to drill for crude in Mexico's deepwater oil areas, according to a person with direct knowledge of the plans.

Offshore oil explorer Cobalt International Energy Inc. jumped the most in four years after an analyst upgraded the stock and said it could be a takeover target for bigger drillers.

Sparrows Group and SPIE Oil & Gas Services have strengthened their existing service portfolios by signing a global agreement to work in collaboration to support the energy sector.

Maersk Supply Service will reduce its fleet by up to 20 vessels over the next 18 months. The divestment plan is a response to vessels in lay-up, limited trading opportunities and the global over-supply of offshore supply ships.

Tullow Oil has announced first oil from the Tweneboa, Enyenra and Ntomme (TEN) fields offshore Ghana, to the FPSO Prof. John Evans Atta Mills.

Independent Oil & Gas (IOG) provided an update on the drilling of the appraisal well on the Skipper oil discovery, which lies in Block 9/21a in license P1609 in the North Sea, of which IOG is 100% owner and operator.

Inspectors of Lloyd's Register are optimizing inspection regimes thanks to their accredited status, using the OVID web-based inspection tool to reference inspection reports.

A drilling contract has been awarded to the Frigstad Shekou, the first of two ultra-deepwater semisubmersible drilling rigs ordered by Frigstad Deepwater Ltd in December 2012.

The petroleum industry, under the direction of the Norwegian Oil and Gas Association, today announced its ambition to implement CO2 reduction measures corresponding to 2.5 MMt on the Norwegian continental shelf (NCS) by 2030 compared with 2020.

Delek Drilling and Avner Oil Exploration, part of Israel’s leading integrated energy company, have signed a deal for the sale of 100% of their holdings in Karish and Tanin natural gas fields to Energean Oil & Gas.

GE Oil & Gas subsidiary, PT. VetcoGray Indonesia, has been awarded a field decommissioning contract with Premier Oil Indonesia, to support the shutdown of four subsea wells in Anoa field, offshore Indonesia.

JX Nippon Exploration and Production (UK) Limited has sold an 8.9% working interest in the Greater Mariner Area, including Mariner oil field, primarily located in UK license P335, to Siccar Point Energy UK Limited.

Aleksandar Milankovic, Ricardo Senne, Pablo E. Coronado, Halliburton

A high recovery rate and world record helped Petrobras achieve objectives, while saving 24 hr of deepwater drillship time, as well as operational costs.

See the original post here:

Offshore | World Oil Online

Posted in Offshore | Comments Off on Offshore | World Oil Online

Second Amendment: How Does It Work? Left Has No Idea

Posted: at 9:21 am

I genuinely want to be done with defending the Second Amendment from the regular barrage of its historically illiterate and inept detractors: the people who say this amendment protects only the right of the militia to own weapons.

One friend and fellow gun rights activist said it's best to just ignore such people, in the same way that you might ignore people who say triangles have four sides or that the Sun orbits the Earth. It is tempting to just stop engaging the dopes who simply refuse to consider basic, objective historical facts.

But I actually think this might be a bad strategy, as it may allow the debunked and nonsensical militia reading of the Second Amendment to gain ground. With a Hillary Clinton presidency and Supreme Court on the way, we need an American population that is historically knowledgeable. That means fighting back against the corruption of American knowledge.

Anti-gun folks will cheerfully exploit (and in many cases encourage) the ignorance of the American body politic to get what they want. It is important to push back against that wherever and whenever possible. By way of example: at the Huffington Post this week, Daryl Sneath, a recreational grammarian, is trying very hard to take advantage of American historical ignorance:

One of those things [the Framers] knew about is the comma, the only purpose of which is clarity. Doubtless the writers were acutely aware of this grammatical truism (despite their apparent affinity for complex diction) when they drew their collective stylus southward (certainly aware too of that symbolic direction) making the little mark immediately following the phrase “the right of the people to keep and bear arms.” As such, the subject of the predicate “shall not be infringed” is clearly not “the right of the people.” No subject is ever separated from its predicate by a comma alone. Put more plainly, the principal clause (or declaration) of the whole amendment is this: “A well regulated militia shall not be infringed.” The middle bit modifies the main.

Leaving aside the dubious grammatical reading, as well as the utter travesty of ahistorical non-engagement with contemporaneous eighteenth- and nineteenth-century primary sources, just marvel at this: “A well regulated militia shall not be infringed.” What would such a right even mean in the context of extant constitutional structure and precedent? It would actually mean nothing.

Sneath seems to suggest that the Second Amendment provides some sort of bulwark to protect state militias against congressional infringement. But this is objectively, factually false: Congress has complete control over state militias. The federal government can organize and abolish the militia whenever it feels like it, and for whatever reason, and no serious historical scholar has ever suggested that the Second Amendment somehow circumscribes this congressional power in any way. Put another way: Sneath is implying that the Second Amendment prohibits Congress from doing the very thing Congress is fully empowered to do.

I am genuinely curious: is there any other constitutional right, or any other constitutional amendment, that is so consistently and so aggressively handled with such base and inexcusable stupidity, on so regular a basis, and on such an industrial scale? I am not sure. You don't usually see arguments of this idiotic magnitude when it comes to, say, the Fourth Amendment, or the Sixth. You certainly see dumb interpretations of the First Amendment, but that's usually a matter of degree, not kind: you will have people arguing that the First Amendment doesn't protect hate speech, for instance, but nobody ever argues that the First Amendment only applies to state governments, say, rather than to individual members of the body politic.

Only the Second Amendment is subject to such illiterate and ahistorical analyses. Once you realize that, you can fully grasp why: many people simply do not like guns, and they will lie, or else keep themselves deliberately ignorant, to prevent other people from having them.

This is not an isolated incident: anti-gun folks are very happy to resort to falsehoods to advance their cause. Recently the National Rifle Association put out an ad that claims Hillary Clinton doesn't believe in your right to keep a gun at home for self-defense. This is entirely true, but Glenn Kessler over at the Washington Post calls it false:

Clinton has said that she disagreed with the Supreme Court's decision in Heller, but she has made no proposals that would strip Americans of the right to keep a gun at home for self-defense. Clinton is certainly in favor of more gun regulations and tougher background checks, and a more nuanced ad could have made this case. Conjuring up a hypothetical Supreme Court justice ruling in a hypothetical case is simply not enough for such a sweeping claim. That tips the ad's claim into the Four-Pinocchio category.

This is just a shameless mess. As I have argued before, Clinton's disagreement with the Supreme Court's ruling in Heller is an unequivocal rejection of the right to keep a gun at home for self-defense. That is the very right Heller decided in favor of! To be against Heller is to be against the individual right to own firearms. This is not up for debate.

Now, Clinton claims she merely disagrees with Heller insofar as she believes cities and states should have the power to craft common-sense laws to keep their residents safe. But this is nonsense: Heller not only allows for such laws, it explicitly authorizes them. Given that Hillary's justification for opposing Heller is meaningless, we must assume she opposes it for its core substance, namely, that it affirms the individual right codified in the Second Amendment.

In other words, Hillary Clinton wants to take your guns away. She's been honest about it; why can't our fact checkers be?

Read the rest here:
Second Amendment: How Does It Work? Left Has No Idea

Posted in Second Amendment | Comments Off on Second Amendment: How Does It Work? Left Has No Idea

Human Cloning | The Center for Bioethics & Human Dignity

Posted: August 19, 2016 at 4:14 am

We live in a brave new world in which reproductive technologies are ravaging as well as replenishing families. Increasingly common are variations of the situation in which "baby's mother is also grandma, and sister."1 Sometimes extreme measures are necessary in order to have the kind of child we want.

This new eugenics is simply the latest version of the age-old quest to make human beings–in fact, humanity as a whole–the way we want them to be: perfect. It includes our efforts to be rid of unwanted human beings through abortion and euthanasia. It more recently is focusing on our growing ability to understand and manipulate our genetic code, which directs the formation of many aspects of who we are, for better and for worse.

We aspire to complete control over the code, though at this point relatively little is possible. This backdrop can help us understand the great fascination with human cloning today. It promises to give us a substantial measure of power over the genetic makeup of our offspring. We cannot control their code exactly, but the first major step in that direction is hugely appealing: You can have a child whose genetic code is exactly like your own. And you didn’t turn out so badly, did you?

Admittedly, in our most honest moments we would improve a few things about ourselves. So the larger agenda here remains complete genetic control. But human cloning represents one concrete step in that direction, and the forces pushing us from behind to take that step are tremendous. These forces are energized, as we will see, by the very ways we look at life and justify our actions. But before examining such forces, we need a clearer view of human cloning itself.

It was no longer ago than 1997 when the president of the United States first challenged the nation and charged his National Bioethics Advisory Commission2 to give careful thought to how the United States should proceed regarding human cloning. Attention to this issue was spurred by the reported cloning of a large mammal–a sheep–in a new way. The method involved not merely splitting an early-stage embryo to produce identical twins. Rather, it entailed producing a nearly exact genetic replica of an already existing adult.

The technique is called nuclear transfer or nuclear transplantation because it involves transferring the nucleus (and thus most of the genetic material) from a cell of an existing being to an egg cell in order to replace the egg cell's nucleus. Stimulated to divide by the application of electrical energy, this egg–now embryo–is guided by its new genetic material to develop as a being who is genetically almost identical to the being from which the nucleus was taken. This process was reportedly carried out in a sheep to produce the sheep clone named Dolly,3 but attention quickly shifted to the prospects for cloning human beings (by which I will mean here and throughout, cloning by nuclear transfer).

Quickly people began to see opportunities for profit and notoriety. By 1998, for example, scientist Richard Seed had announced intentions to set up a Human Clone Clinic–first in Chicago, then in ten to twenty locations nationally, then in five to six locations internationally.4 While the U.S. federal government was pondering how to respond to such initiatives, some of the states began passing legislation to outlaw human cloning research, and nineteen European nations acted quickly to sign a ban on human cloning itself.5 However, the European ban only blocks the actual implantation, nurture, and birth of human clones, and not also cloning research on human embryos that are never implanted. Such research has been slowed in the United States since the president and then Congress withheld federal government funds from research that subjects embryos to risk for non-therapeutic purposes.6 Moreover, a United Nations declaration co-sponsored by eighty-six countries in late 1998 signaled a broad worldwide opposition to research that would lead to human cloning.7

Yet there are signs of this protection for embryos weakening in the face of the huge benefits promised by stem cell research. Stem cells can treat many illnesses and have the capacity to develop into badly needed body parts such as tissues and organs. One way to obtain stem cells is to divide an early-stage embryo into its component cells–thereby destroying the embryonic human being. Under President Clinton, the National Institutes of Health decided that as long as private sources destroyed the embryos and produced the stem cells, the federal government would fund research on those cells.8 During 2001, President Bush prohibited federally-funded research on embryonic stem cells produced after the date his prohibition was announced. In 2002, his newly-formed Council on Bioethics raised serious questions about even this form of embryonic stem cell research, though the Council was divided on this matter.9 These developments underscore that there are a number of technological developments that are closely interrelated and yet have somewhat different ethical considerations involved. While embryo and stem cell research are very important issues, they are distinct ethically from the question of reproducing human beings through cloning. Reproduction by cloning is the specific focus of this essay.

While no scientifically verifiable birth of a human clone has yet been reported, the technology and scientific understanding are already in place to make such an event plausible at any time now. There is an urgent need to think through the relevant ethical issues. To begin with, is it acceptable to refer to human beings produced by cloning technology as “clones”? It would seem so, as long as there does not become a stigma attached to that term that is not attached to more cumbersome expressions like “a person who is the result of cloning” or “someone created through the use of somatic cell nuclear transfer.” We call someone from Italy an Italian, no disrespect intended. So it can be that a person “from cloning” is a clone. We must be ready to abandon this term, however, if it becomes a label that no longer meets certain ethical criteria.10

In order to address the ethics of human cloning itself, we need to understand why people would want to do it in the first place. People often respond to the prospect of human cloning in two ways. They are squeamish about the idea–a squeamishness Leon Kass has argued we should take very seriously.11 They also find something alluring about the idea. Such fascination is captured in a variety of films, including “The Boys from Brazil” (portraying the attempt to clone Adolf Hitler), “Bladerunner” (questioning whether a clone would be more like a person or a machine), and “Multiplicity” (presenting a man’s attempt to have enough time for his family, job, and other pursuits by producing several live adult replicas of himself). Popular discussions center on the wonderful prospects of creating multiple Mother Teresas, Michael Jordans, or other notable figures.

The greatest problem with creative media-driven discussions like this is that they often reflect a misunderstanding of the science and people involved. The film “Multiplicity” presents human replicas, not clones in the form that we are discussing them here. When an adult is cloned (e.g., the adult sheep from which Dolly was cloned), an embryo is created, not another adult. Although the embryo’s cells contain the same genetic code as the cells of the adult being cloned, the embryo must go through many years of development in an environment that is significantly different from that in which the adult developed. Because both our environment and our genetics substantially influence who we are, the embryo will not become the same person as the adult. In fact, because we also have a spiritual capacity to evaluate and alter either or both our environment and our genetics, human clones are bound to be quite different from the adults who provide their genetic code.

If this popular fascination with hero-duplication is not well founded, are there any more thoughtful ethical justifications for human cloning? Many have been put forward, and they cluster into three types: utility justifications, autonomy justifications, and destiny justifications. The first two types reflect ways of looking at the world that are highly influential in the United States and elsewhere today, so we must examine them carefully. They can readily be critiqued on their own terms. The third, while also influential, helpfully opens the door to theological reflection as well. I will begin by explaining the first two justifications. In the following sections I will then assess the first two justifications and carefully examine the third.

Utility justifications defend a practice based on its usefulness, or benefit. As long as it will produce a net increase in human well-being, it is warranted. People are well acquainted with the notion of assessing costs and benefits, and it is common to hear the argument that something will produce so much benefit that efforts to block it must surely be misguided.

Utility justifications are common in discussions of human cloning. Typical examples include:

The second type of justification appeals to the idea of autonomy, an increasingly popular appeal in this postmodern age, in which people’s personal experiences and values play a most important role in determining what is right and true for them. According to this justification, we ought to respect people’s autonomy as a matter of principle. People’s beliefs and values are too diverse to adopt any particular set of them as normative for everyone. Society should do everything possible to enhance the ability of individuals and groups to pursue what they deem most important.

Again, there are many forms that autonomy justifications can take. However, three stand out as particularly influential in discussions of human cloning:

Utility and autonomy are important ethical justifications. However, they do not provide a sufficient ethical basis for human cloning. We will examine them here carefully in turn.

While the concern for utility is admirable, there are many serious problems with this type of justification. Most significantly, it is “unworkable” and it is “dangerous.” It is unworkable because knowing how much utility cloning or any other practice has, with a reasonable level of precision, is simply impossible. We cannot know all of the ways that a practice will affect all people in the world infinitely into the future. For example, it is impossible to quantify accurately the satisfaction of every parent in future centuries who will choose cloning rather than traditional sexual reproduction in order to spare their children from newly discovered genetic problems that are now unknown. In fact, as sheep cloner Ian Wilmut was widely quoted as observing, shortly after announcing his cloning of Dolly, “Most of the things cloning will be used for have yet to be imagined.” The difficulty of comparing the significance of every foreseeable consequence on the same scale of value–including comparing each person’s subjective experiences with everyone else’s–only adds to the unworkability.

What happens in real life is that decision makers intuitively compare only those consequences they are most aware of and concerned about. Such an approach is an open invitation to bias and discrimination, intended and unintended. Even more dangerous is the absence of limits to what can be justified. There are no built-in protections for weak individuals or minority groups, including clones. People can be subjected to anything, the worst possible oppression or even death, if it is beneficial to the majority. Situations such as Nazi Germany and American slavery can be justified using this way of thinking.

When utility is our basis for justifying what is allowed in society, people are used, fundamentally, as mere means to achieve the ends of society or of particular people. It may be appropriate to use plants and animals in this way, within limits. Accordingly, most people do not find it objectionable to clone animals and plants to achieve products that will fulfill a purpose–better milk, better grain, and so forth. However, it is demeaning to “use” people in this way.

This demeaning is what bothers us about the prospect of producing a large group of human clones with low intelligence so that society can have a source of cheap menial labor. It is also what is problematic about producing clones to provide spare parts, such as vital transplantable organs for other people. Both actions fail to respect the equal and great dignity of all people by making some, in effect, the slaves of others. Even cloning a child who has died in order to ease the parents' grief forces the clone to have a certain genetic makeup in order to be the parents' child, thereby permanently subjecting the clone to the parents' will. The irony of this last situation, though, is that the clone will not become the same child as was lost–both the child and the clone being the product of far more than their genetics. The clone will be demeaned by not being fully respected and accepted as a unique person, and the parents will fail to regain their lost child in the process.

To summarize: The utility justification is a substantially inadequate basis for defending a practice like cloning. In other words, showing that a good benefit, even a great benefit, will result is not a sufficient argument to justify an action. Although it is easy to forget this basic point when enticed by the promise of a wonderful benefit, we intuitively know it is true. We recognize that we could, for example, cut up one person, take her or his various organs for transplant, and save many lives as a result. But we do not go around doing that. We realize that if the action we take to achieve the benefit is itself horrendous, beneficial results are not enough to justify it.

As significant a critique as this is of a utility justification for human cloning, there is more to say. For even if it were an adequate type of justification, which it is not, it is far from clear that it would justify human cloning. To justify human cloning on the basis of utility, all the consequences of allowing this practice have to be considered, not only the benefits generated by the exceptional situations commonly cited in its defense. What are some of the consequences we need to be concerned about? There is only space here to note two of the many that weigh heavily against human cloning.

First, as suggested earlier, to allow cloning is to open the door to a much more frightening enterprise: genetically engineering people without their consent, not for their own benefit, but for the benefit of particular people or society at large. Cloning entails producing a person with a certain genetic code because of the attractiveness or usefulness of a person with that code. In this sense, cloning is just the tip of a much larger genetic iceberg. We are developing the genetic understanding and capability to shape the human genetic code in many ways. If we allow cloning, we legitimize in principle the entire enterprise of designing children to suit parental or social purposes. As one researcher at the U.S. Council on Foreign Relations has commented, Dolly is best understood as a drop in a towering wave (of genetic research) that is about to crash over us. The personal and social destructiveness of large-scale eugenic efforts (including but by no means limited to Nazi Germany’s) has been substantial, but at least it has been restricted to date by our limited genetic understanding and technology.12 Today the stakes are much higher.

The second of the many additional considerations that must be included in any honest utilitarian calculus involves the allocation of limited resources. To spend resources on the development and practice of human cloning is to not spend them on other endeavors that would be more beneficial to society. For many years now there have been extensive discussions about the expense of health care and the large number of people (tens of millions), even in the United States, that do not have health insurance.13 It has also long been established that such lack of insurance means that a significant number of people are going without necessary health care and are suffering or dying as a result.14 Another way of observing similar pressing needs in health care is to survey the specific areas that could most benefit from additional funds.15 In most of these areas, inadequate funding yields serious health consequences because there is no alternative way to produce the basic health result at issue.

Not only are the benefits of human cloning less significant than those that could be achieved by expending the same funds on other health care initiatives, but there are alternative ways of bringing children into the world that can yield at least one major benefit of cloning: children themselves. If there were enough resources available to fund every technology needed or wanted by anyone, the situation would be different. But researching and practicing human cloning will result in serious suffering and even loss of life because other pressing health care needs cannot be met.

An open door to unethical genetic engineering technologies and a misallocation of limited resources, then, are among the numerous consequences of human cloning that would likely more than outweigh the benefits the practice would achieve. As previously argued, we would do better to avoid attempting to justify human cloning simply based on its consequences. But if we are tempted to do so, we must be honest and include all the consequences and not be swayed by exceptional cases that seem so appealing because of the special benefits they would achieve.

Many people today are less persuaded by utility justifications than they are by appeals to autonomy. While the concern for freedom and responsibility for one’s own life in this way of thinking is admirable, autonomy justifications are as deeply flawed as utility justifications. More specifically, they are selfish and they are dangerous.

The very term by which this type of justification is named underscores its selfishness. The word autonomy comes from two Greek words, auto (meaning "self") and nomos (meaning "law"). In the context of ethics, appeals to autonomy literally signify that the self is its own ethical law: it generates its own standards of right and wrong. There is no encouragement in this way of looking at the world to consider the well-being of others, for that is irrelevant as long as it does not matter to me. Although in theory I should respect the autonomy of others as I live out my own autonomy, in practice an autonomous mindset predisposes me to be unconcerned about how my actions will affect others.

As long as the people making autonomous choices happen to have good moral character that predisposes them to be concerned about the well-being of everyone else, there will not be serious problems. In the United States to date, the substantial influence of Christianity–with its mandate to love others sacrificially–has prompted people to use their autonomous choices to further the interests of others alongside of their own. As Christian influences in public life, from public policy to public education, continue to be eradicated in the name of separation of church and state, the self-centeredness of an autonomy outlook will become increasingly evident. Consciously or unconsciously, selfish and other base motives arise within us continually, and without countervailing influences, there is nothing in an autonomy outlook to ensure that the well-being of others will be protected.

When autonomy rules, then, scientists, family members, and others are predisposed to act on the basis of their own autonomous perspectives, and the risk to others is real. Herein lies the danger of autonomy-based thinking, a danger that is similar to that attending a utility-oriented outlook. Protecting people’s choices is fine as long as all people are in a comparable position to make those choices. But if some people are in a very weak position economically or socially or physically, they may not be able to avail themselves of the same opportunities, even if under more equitable circumstances they would surely want to do so. In an autonomy-based approach, there is no commitment to justice, caring, or any other ethical standards that would safeguard those least able to stand up for themselves.

An autonomy justification is simply an insufficient basis for justifying a practice like human cloning. In other words, showing that a freedom would otherwise be curtailed is not a sufficient argument to justify an action. We have learned this lesson the hard way, by allowing scientific inquiry to proceed unfettered. The Nuremberg Code resulted from research atrocities that were allowed to occur because it was not recognized that there are other ethical considerations that can be more important than scientific and personal freedom (autonomy).16

While the autonomy justification itself is flawed, there is more to say about it as a basis for defending human cloning. For even if it were an adequate type of ethical justification–which it is not–it is far from clear that it would actually justify the practice. An honest, complete autonomy-based evaluation of human cloning would have to consider the autonomy of all persons involved, including the people produced through cloning, and not just the autonomy of researchers and people desiring to have clones. Of the many considerations that would need to be taken into account if the autonomy of the clones were taken seriously, space will only permit the examination of two here.

First, human cloning involves a grave risk to the clone’s life. There is no plausible way to undertake human cloning at this point without a major loss of human life. In the process of cloning the sheep Dolly, 276 failed attempts occurred, including the death of several so-called “defective” clones. An alternative process used to clone monkeys added the necessary destruction of embryonic life to these other risks. It involved transferring the genetic material from each of the cells in an eight-celled embryo to other egg cells in order to attempt to produce eight so-called clones (or, more properly, identical siblings). Subsequent mammal cloning has continued the large-scale fatalities and deformities that unavoidably accompany cloning research. Were these experimental technologies to be applied to human beings, the evidence and procedures themselves show that many human embryos, fetuses, and infants would be lost–and many others deformed–whatever the process. This tragedy would be compounded by the fact that it is unlikely human cloning research would be limited to a single location. Rather, similar mistakes and loss of human life would be occurring almost simultaneously at various private and public research sites.

Normally, experimentation on human beings is allowed only with their explicit consent. (Needless to say, it is impossible to obtain a clone’s consent to be brought into existence through cloning.) An exception is sometimes granted in the case of a child, including one still in the womb, who has a verifiable medical problem which experimental treatment may be able to cure or help. However, human cloning is not covered by this exception for two reasons. First, there is no existing human being with a medical problem in the situation in which a human cloning experiment would be attempted. Second, even if that were not an obstacle, there is typically no significant therapeutic benefit to the clone in the many scenarios for which cloning has been proposed. For the experiment to be ethical, there would need to be therapeutic benefit to the clone so huge as to outweigh the substantial likelihood of the death or deformity that occurred in the Dolly experiment. To proceed with human cloning at this time, then, would involve a massive assault on the autonomy of all clones produced, whether they lived or died.

There is also a second way that human cloning would conflict with the autonomy of the people most intimately involved in the practice, that is, the clones themselves. Human cloning would radically weaken the family structure and relationships of the clone and therefore be fundamentally at odds with their most basic interests. Consider the confusion that arises over even the most basic relationships involved. Are the children who result from cloning really the siblings or the children of their “parents”–really the children or the grandchildren of their “grandparents”? Genetics suggests one answer and age the other. Regardless of any future legal resolutions of such matters, child clones (not to mention others inside and outside the family) will almost certainly experience confusion. Such confusion will impair their psychological and social well being–in fact, their very sense of identity. A host of legal entanglements, including inheritance issues, will also result.

This situation is problematic enough where a clearly identified family is involved. But during the experimental phase in particular, identifying the parents of clones produced in a laboratory may be even more troublesome. Is the donor of the genetic material automatically the parent? What about the donor of the egg into which the genetic material is inserted? If the genetic material and egg are simply donated anonymously for experimental purposes, does the scientist who manipulates them and produces a child from them become the parent? Who will provide the necessary love and care for the damaged embryo, fetus, or child that results when mistakes are made and it is so much easier just to discard them?

As the U.S. National Bioethics Advisory Commission’s report has observed (echoed more recently by the report of the President’s Council on Bioethics), human cloning “invokes images of manufacturing children according to specification. The lack of acceptance this implies for children who fail to develop according to expectations, and the dominance it introduces into the parent-child relationship, is viewed by many as fundamentally at odds with the acceptance, unconditional love, and openness characteristic of good parenting.”17 “It just doesn’t make sense,” to quote Ian Wilmut, who objected strenuously to the notion of cloning humans after he succeeded in producing the sheep clone Dolly.18 He was joined by U.S. President Clinton, who quickly banned the use of federal funds for human cloning research, and by the World Health Organization, who summarily labeled human cloning ethically unacceptable.19 Their reaction resonates with many, who typically might want to “have” a clone, but would not want to “be” one. What is the difference? It is the intuitive recognition that while the option of cloning may expand the autonomy of the person producing the clone, it undermines the autonomy of the clone.

So the autonomy justification, like the utility justification, is much more problematic than it might at first appear to be. We would do better not even to attempt to justify human cloning by appealing to this type of justification because of its inherent shortcomings. But if we are to invoke it, we must be honest and pay special attention to the autonomy of the person most intimately involved in the cloning, the clone. Particular appeals to “freedom” or “choice” may seem persuasive. But if only the autonomy of people other than clones is in view, or only one limited aspect of a clone’s autonomy, then such appeals must be rejected.

As noted near the outset of the chapter, there is a third type of proposed justification for human cloning which moves us more explicitly into the realm of theological reflection: the destiny justification. While other theological arguments against cloning have been advanced in the literature to date,20 many of them are somehow related to the matter of destiny. According to this justification, it is part of our God-given destiny to exercise complete control over our reproductive process. In fact, Richard Seed, in one of his first in-depth interviews after announcing his intentions to clone human beings commercially, made this very argument.21 No less a theologian, President Clinton offered the opposite view when he issued the ban on human cloning. Rather than seeing cloning as human destiny, he rejected it as “playing God.”22 Whether or not we think it wise to take our theological cues from either of these individuals, what are we to make of the proposed destiny justification itself? Is human cloning in line with God’s purposes for us?

To begin with, there are indeed problems with playing God the way that proponents of human cloning would have us do. For example, God can take utility and autonomy considerations into account in ways that people cannot. God knows the future, including every consequence of every consequence of all our actions; people do not. God loves all persons equally, without bias, and is committed and able to understand and protect the freedom of everyone; people are not. Moreover, there are other ways that the pursuit of utility and autonomy are troubling from a theological perspective.

The utility of human cloning, first of all, is that we can gain some benefit by producing clones. But using other people without their consent for our ends is a violation of their status as beings created in the image of God. People have a God-given dignity that prevents us from using them as mere means to achieve our purposes. Knowing that people are created in the image of God (Gen. 1:26-27), biblical writers in both the Old and New Testaments periodically invoke this truth to argue that human beings should not be demeaned in various ways (e.g., Gen. 9:6; James 3:9). Since plants and animals are never said to be created in God’s image, it is not surprising that they can be treated in ways (including killing) that would never be acceptable if people were in view (cf. Gen. 9:3 with 9:6).

An autonomy-based justification of human cloning is no more acceptable than a utility-based justification from a theological perspective. Some Christian writers, such as Allen Verhey, have helpfully observed that autonomy, understood in a particular way, is a legitimate biblical notion. As he explains, under the sovereignty of God, acknowledging the autonomy of the person can help ensure respect for and proper treatment of people made in God’s image.23 There is a risk here, however, because the popular ethics of autonomy has no place for God in it. It is autonomy “over” God, not autonomy “under” God. The challenge is to affirm the critical importance of respect for human beings, and for their freedom and responsibility to make decisions that profoundly affect their lives, but to recognize that such freedom requires God. More specifically, such freedom requires the framework in which autonomy is under God, not over God, a framework in which respecting freedom is not just wishful or convenient thinking that gives way as soon as individuals or society as a whole have more to gain by disregarding it. It must be rooted in something that unavoidably and unchangeably ‘is.” In other words, it must be rooted in God, in the creation of human beings in the image of God.

God is the creator, and we worship God as such. Of course, people are creative as well, being the images of God that they are. So what is the difference between God’s creation of human beings, as portrayed in the book of Genesis, and human procreation as happens daily all over the world (also mandated by God in Genesis)? Creation is “ex nihilo,” out of nothing. That means, in the first sense, that God did not just rearrange already existing materials. God actually brought into being a material universe where nothing even existed before. However, God’s creation “ex nihilo” suggests something more. It suggests that there was no agenda outside of God that God was following–nothing outside of God that directed what were acceptable options. When it came to the human portion of creation, God created us to be the way God deemed best.

It is no accident that we call what we do when we have babies “procreation.” “Pro” means “for” or “forth.” To be sure, we do bring babies “forth.” But the deeper meaning here is “for.” We bring new human beings into the world “for” someone or something. To be specific, we continue the line of human beings for God, in accordance with God’s mandate to humanity at the beginning to “be fruitful and multiply” (Gen. 1:28). We also create for the people whom we help bring into being. We help give them life, and they are the ones most affected by our actions. What is particularly significant about this “procreation,” this “creation for,” is that by its very nature it is subject to an outside agenda, to God’s agenda primarily, and secondarily to the needs of the child being created.

In this light, the human cloning mindset is hugely problematic. With unmitigated pride it claims the right to create rather than procreate. It looks neither to God for the way that he has intended human beings to be procreated and raised by fathers and mothers who are the secondary, that is, genetic source of their life; nor does it look primarily to the needs of the one being procreated. As we have seen, it looks primarily to the cloner’s own preferences or to whatever value system one chooses to prioritize (perhaps the “good of society,” etc.). In other words, those operating out of the human cloning mindset see themselves as Creator rather than procreator. This is the kind of aspiring to be God for which God has consistently chastised people, and for which God has ultimately wreaked havoc on many a society and civilization.

Leon Kass has observed that we have traditionally used the word “procreation” for having children because we have viewed the world, and human life in particular, as created by God. We have understood our creative involvement in terms of and in relation to God’s creation.24 Today we increasingly orient more to the material world than to God. We are more impressed with the gross national product than with the original creation. So we more commonly talk in terms of re”production” rather than pro”creation.” In the process, we associate people more closely with things, with products, than with the God of creation. No wonder our respect for human life is deteriorating. We become more like that with which we associate. If we continue on this path, if our destiny is to clone ourselves, then our destiny is also, ultimately, to lose all respect for ourselves, to our peril.

Claims about utility, autonomy, or destiny, then, are woefully inadequate to justify human cloning. In fact, a careful look at any of these types of justification shows that they provide compelling reasons instead to reject human cloning. To stand up and say so may become more and more difficult in our “brave new world.” As the culture increasingly promotes production and self-assertion, it will take courage to insist in the new context of cloning that there is something more important. But such a brave new word, echoing the Word of old, is one that we must be bold to speak.

Read more:

Human Cloning | The Center for Bioethics & Human Dignity

Posted in Cloning | Comments Off on Human Cloning | The Center for Bioethics & Human Dignity

Historical Satanism – dpjs.co.uk/historical.html

Posted: at 4:10 am

Before Anton LaVey compiled the philosophy of Satanism and founded the Church of Satan in 1966, who upheld its values? It is always debated whether or not these people were Satanists and what they would have thought of Satanism had it existed during their lives. The Satanic Bible, in Book of Lucifer 12, name-drops many of these groups and mentions many specific people, times, and dates. I do not want to quote it all here, so if you're interested in more of the specifics buy the damned book from Amazon, already. These are the unwitting potential predecessors of Satanism.

The Satanic Bible opens with a few references to groups that are associated with historical Satanism.

In eighteenth-century England a Hell-Fire Club, with connections to the American colonies through Benjamin Franklin, gained some brief notoriety. During the early part of the twentieth century, the press publicized Aleister Crowley as the “wickedest man in the world”. And there were hints in the 1920s and ’30s of a “black order” in Germany.

To this seemingly old story LaVey and his organization of contemporary Faustians offered two strikingly new chapters. First, they blasphemously represented themselves as a “church”, a term previously confined to the branches of Christianity, instead of the traditional coven of Satanism and witchcraft lore. Second, they practiced their black magic openly instead of underground. […]

[Anton LaVey] had accumulated a library of works that described the Black Mass and other infamous ceremonies conducted by groups such as the Knights Templar in fourteenth-century France, the Hell-Fire club and the Golden Dawn in eighteenth- and nineteenth-century England.

Burton Wolfe’s introduction to “The Satanic Bible” by Anton LaVey (1969)

This page looks at some groups, some individuals, but is nowhere near a comprehensive look at the subject, just a small window into which you might see some of the rich, convoluted history of the dark, murky development of the philosophies that support Satanism.

There is a saying that history is written by the winners. The victors of a war are the ones who get to write the school books: they write that the defeated are always the enemy of mankind, the evil ones, the monsters. The victors are always fighting desperately for just causes. This trend is historically important in Satanism. As one religion takes over the ground and the demographics of a losing religion, the loser has its gods demonized and its holy places reclaimed. For example, the Vatican was built on the site of an old Mithraist temple, and Gaelic spirits became monsters as Christianity brutalized Europe with its religious propaganda.

There are groups, therefore, that were wiped out by the Christians. The Spanish Inquisition forced, under duress and torture, many confessions out of its victims: confessions of every kind of devil worship. Likewise, its larger wars against Muslims, science, freethought, etc., were all conducted under the guise of fighting against the devil. In cases where the victims left no records of their own, we will never know what their true beliefs were. So the legacy of Christian violence has left us with many associations between various people and Devil Worship, and we know that most of these accounts are wrong and barbaric, the confessions in them grotesquely forced.

We know now that most of the Christian Churches' previous campaigns were unjustified. Various groups and individuals throughout history have come to be called Satanists. Such claims are nearly always the result of rumours, mass paranoia, and slanderous libel. The dark-age victims of this kind of Christian paranoia were largely not actually Satanists, but merely those who didn't believe what the orthodox Church wanted them to believe. Thus, history can be misleading, especially when you rely on the religious views of one group, who are clearly biased against competing beliefs!

The Knights Templar were founded in 1118 in the lingering shadow of the Dark Ages. They were the most powerful military religious order of the Middle Ages. They built some of Europe's most impressive Cathedrals and were the bankers "for practically every throne in Europe"1. Some historians trace the history of all globalised multinationals to the banking practices of the Knights Templar2. They had a strong presence in multiple countries: Portugal, England, Spain, Scotland, Africa (e.g. Ethiopia) and France. They were rich and powerful, with members in royal families and the highest places, including Kings; King John II of Portugal was once Grand Master of the Order. They explored the oceans, built and policed roads and trade routes, created an early banking system, held castles, raised glorious buildings, and kept forces adequate to protect their prized holy places and objects. Their fleet was world-faring, and their masterly knightly battle skills were invaluable to any who could befriend them or afford their mercenary services.

The Knights Templar fell into disrepute with the powerful Catholic Church and the French crown, which ran a long campaign against them, accusing them of devil worship, immorality, subversion, and of practising magic and every kind of occult art. The organisation was finally destroyed and its members burned from 1310 onwards. Nowadays, although the accusations are thoroughly discredited, the Templars are still equated with the Occult and sometimes with Satanism, sometimes even by practitioners of those arts themselves.

“The Knights Templar: 1. The Rise of the Knights Templar” by Vexen Crabtree (2004)

The Satanism-for-fun-and-games fad next appeared in England in the middle 18th Century in the form of Sir Francis Dashwood's Order of the Medmenham Franciscans, popularly called The Hell-Fire Club. While eliminating the blood, gore, and baby-fat candles of the previous century's masses, Sir Francis managed to conduct rituals replete with good dirty fun, and certainly provided a colorful and harmless form of psychodrama for many of the leading lights of the period. An interesting sideline of Sir Francis, which lends a clue to the climate of the Hell-Fire Club, was a group called the Dilettanti Club, of which he was the founder.

“The Satanic Bible” by Anton LaVey (1969)

The Hell-Fire Clubs conjure up images of aristocratic rakes outraging respectability at every turn, cutting a swath through the village maidens and celebrating Black Masses. While all this is true, it is not the whole story. The author of this volume has assembled an account of the Clubs and of their antecedents and descendants. At the centre of the book is the principal brotherhood, known by the Hell-Fire name – Sir Francis Dashwood’s notorious Monks of Medmenham, with their strange rituals and initiation rites, library of erotica and nun companions recruited from the brothels of London. From this maverick group flow such notable literary libertines as Horace Walpole and Lord Byron. Pre-dating Medmenham are the figures of Rabelais and John Dee, both expounding philosophies of “do what you will” or “anything goes”. Geoffrey Ashe traces the influence of libertarian philosophies on the world of the Enlightenment, showing how they met the need for a secular morality at a time when Christianity faced the onslaught of rationalism and empiricism. He follows the libertarian tradition through de Sade and into the 20th century, with discussions of Aleister Crowley, Charles Manson and Timothy Leary, delving below the scandals to reveal the social and political impact of “doing your own thing” which has roots far deeper than the post-war permissive society.

Amazon Review of The Hell-fire Clubs: A History of Anti-morality by Geoffrey Ashe

An informal network of Hellfire Clubs thrived in Britain during the eighteenth century, dedicated to debauchery and blasphemy. With members drawn from the cream of the political, artistic and literary establishments, they became sufficiently scandalous to inspire a number of Acts of Parliament aimed at their suppression. Historians have been inclined to dismiss the Hellfire Clubs as nothing more than riotous drinking societies, but the significance of many of the nation’s most powerful and brilliant men dedicating themselves to Satan is difficult to ignore. That they did so with laughter on their lips, and a drink in their hands, does not diminish the gesture so much as place them more firmly in the Satanic tradition.

The inspiration for the Hellfire Clubs [also] drew heavily from profane literature – such as Gargantua, an unusual work combining folklore, satire, coarse humour and light-hearted philosophy written in the sixteenth century by a renegade monk named Francois Rabelais. One section of the book concerned a monk who […] has an abbey built that he names Thelema [which is] dedicated to the pleasures of the flesh. Only the brightest, most beautiful and best are permitted within its walls, and its motto is ‘Fait Ce Que Vouldras’ (‘Do What You Will’).

“Lucifer Rising” by Gavin Baddeley (1999)3

Gavin Baddeley's book opens with a long, fascinating and awe-inspiring chapter on history's Satanic traditions, following such trends through the Enlightenment, the decadents, art, aristocracy and nobility, before concentrating the rest of the book on modern rock-and-roll devilry. It is a highly recommended book!

The magical and occult elements of Satanism have parallels with previous groups and teachings, and frequent references and commentary are made on certain sources. None of those listed here were Satanists, except possibly Crowley:

The Knights Templar (12th-14th Centuries; France, Portugal, Europe) have contributed some symbolism and methodology but not much in the way of teachings.

Chaos Magic has contributed magical theory and psychological techniques to magical practices.

Quantum Physics has contributed high-brow theory on such areas as how consciousness may be able to manipulate events.

The New Age (1900s+) has contributed some of the less respectable pop-magic aspects to Satanism such as Tarot, Divination, etc. Although Satanism was in part a reaction against the new age, some aspects of it have been generally adopted.

John Dee and Edward Kelley (16th Century) created the Enochian system of speech, used for emoting ('sonic tarot') and pronounced in any way the user sees fit. LaVey adopted the Enochian Keys for rituals and includes his translation of them in The Satanic Bible.

Aleister Crowley (1875-1947, England) was an infamous occultist and magician, and has lent a large portion of his techniques and general character to magical practice and psychology, as well as chunks of philosophy and teachings on magic and life in general.

The Kabbalah, as the mother-text of nearly all the occult arts, has indirectly influenced Satanism, lending all kinds of esoteric thought, geometry, procedures, general ideas and some specifics to all occult practices.


Friedrich Nietzsche, 1844 Oct 15 – 1900 Aug 25, was a German philosopher who challenged the foundations of morality and promoted life affirmation and individualism. He was one of the first existentialist philosophers. Some of Nietzsche's philosophies have resurfaced among those upheld by Satanists.

Aleister Crowley. Life: 1875 – 1947. England, United Kingdom.

An infamous occultist and hedonist, influential on modern Satanism. Some hate him and consider him a contentless, drug-addled, meaningless diabolist with little depth beyond obscurantism. Others consider him an eye-opening Satanic mystic who changed the course of history. His general attitude is one found frequently amongst Satanists, and his experimental, extreme, party-animal life is either stupidly self-destructive or a model of candle-burning perfection, depending on what type of Satanist you ask.

Some Satanists are quite well-read in Crowley and his groups. His magical theories, techniques and style have definitely influenced the way many Satanists think about ritual and magic.

As far as Satanism is concerned, the closest outward signs of this were the neo-Pagan rites conducted by MacGregor Mathers' Hermetic Order of the Golden Dawn, and Aleister Crowley's later Order of the Silver Star (A∴A∴ – Argentinum Astrum) and Order of Oriental Templars (O.T.O.), which paranoiacally denied any association with Satanism, despite Crowley's self-imposed image of the beast of revelation. Aside from some rather charming poetry and a smattering of magical bric-a-brac, when not climbing mountains Crowley spent most of his time as a poseur par excellence and worked overtime to be wicked. Like his contemporary, Rev.(?) Montague Summers, Crowley obviously spent a large part of his life with his tongue jammed firmly into his cheek, but his followers, today, are somehow able to read esoteric meaning into his every word.

"The Satanic Bible" by Anton LaVey (1969), Book of Lucifer (Air) 12


Europe has had a history of powerful, indulgent groups espousing Satanic philosophies. With the occasional rich group emerging from the underground to terrorize the traditionalist, stifling morals of their respective times, these groups have led progressive changes in Western society. Satanists to this day employ shock tactics, public horror and outrage in order to blitzkrieg their progressive freethought messages past the barriers of traditionalist mental prisons.

When such a movement surfaced in the USA in the guise of the Church of Satan, it was a little more commercialist than its predecessors. Previous European groups have also been successful businesses, the Knights Templar and the resultant Masons being profound examples of the occasional success of left-hand-path commerce. The modern-day Church of Satan is a little more subdued, as society has moved in a more accepting direction since the Hellfire Clubs. As science rules in the West and occultism is public, there is no place for secretive, initiatory Knights Templar or gnostic movements; the Church of Satan is a stable and quiet beacon rather than a reactionary explosion of decadence.

It is the first permanent non-European (but still Western) group with a Satanic ethos to openly publish its pro-self doctrines, reflecting society's general trend towards honesty and its dissatisfaction with anti-science, anti-truth white-light religions.

The popular press and popular opinion are the worst sources of information, and this holds especially true in the case of Satanism, given that the exterior of Satanism projects imagery that is almost intentionally confusing to the uninitiated. From time to time public paranoia arises, especially in the USA, claiming that some company, person or event is "Satanic". The public are nearly always wrong, and nearly always acting out of irrational fear, sheepish ignorance and gullibility. Public outcries are nearly always erroneous when they claim that a particular group, historical or present, is Satanic.

Similar to this is the relatively large genre of Christian writing that deals with everything unChristian. The likes of Dennis Wheatley, Eliphas Levi and others churned out countless books based on the assumption that anything non-Christian is Satanic, and described many religious practices as such. These books would be misleading if they had any plausibility, but thankfully hardly anyone outside their already-convinced Christian extremist audience takes them seriously. Nevertheless they occasionally contribute to public paranoia about Satanism.

In the press and in sociology, the phenomenon of public paranoia about the criminal activities of assumed Satanic groups is called Satanic Ritual Abuse (SRA) Panic. SRA claims are on a par with claims of UFOs, alien abductions, faeries and monsters, both in the character profile of the maniacs involved and in the total lack of evidence (despite extensive searching!) that such groups exist.

More:

Historical Satanism – dpjs.co.uk/historical.html
