
History of technology – Wikipedia, the free encyclopedia

Posted: August 27, 2016 at 7:13 pm

The history of technology is the history of the invention of tools and techniques, and it parallels the other branches of the history of humanity. Technology can refer to methods ranging from ones as simple as language and stone tools to the complex genetic engineering and information technology that have emerged since the 1980s.

New knowledge has enabled people to create new things, and conversely, many scientific endeavors are made possible by technologies which assist humans in travelling to places they could not previously reach, and by scientific instruments by which we study nature in more detail than our natural senses allow.

Since much of technology is applied science, technical history is connected to the history of science. Since technology uses resources, technical history is tightly connected to economic history. From those resources, technology produces other resources, including technological artifacts used in everyday life.

Technological change affects, and is affected by, a society’s cultural traditions. It is a force for economic growth and a means to develop and project economic, political and military power.

Many sociologists and anthropologists have created social theories dealing with social and cultural evolution. Some, like Lewis H. Morgan, Leslie White, and Gerhard Lenski, have declared technological progress to be the primary factor driving the development of human civilization. Morgan’s concept of three major stages of social evolution (savagery, barbarism, and civilization) can be divided by technological milestones, such as fire. White argued the measure by which to judge the evolution of culture was energy.[1]

For White, “the primary function of culture” is to “harness and control energy.” White differentiates between five stages of human development: in the first, people use the energy of their own muscles; in the second, the energy of domesticated animals; in the third, the energy of plants (the agricultural revolution); in the fourth, the energy of natural resources such as coal, oil and gas; and in the fifth, nuclear energy. White introduced a formula, P = E·T, where E is a measure of the energy consumed and T is a measure of the efficiency of the technical factors utilizing that energy. In his own words, “culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased”. Russian astronomer Nikolai Kardashev extrapolated White’s theory, creating the Kardashev scale, which categorizes the energy use of advanced civilizations.
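White’s formula can be illustrated with a short Python sketch. The stage values below are invented for illustration; they are not White’s own figures.

```python
def cultural_development(energy_per_capita: float, efficiency: float) -> float:
    """White's relation P = E * T: cultural development (P) grows with the
    energy harnessed per capita per year (E) and with the efficiency of the
    technical means putting that energy to work (T)."""
    return energy_per_capita * efficiency

# Invented values for two of White's stages: raising either factor raises P.
p_muscle = cultural_development(energy_per_capita=1.0, efficiency=0.25)  # human muscle power
p_fossil = cultural_development(energy_per_capita=50.0, efficiency=0.5)  # coal, oil, gas
print(p_fossil / p_muscle)  # 100.0
```

Either route to a higher P, more energy per capita or better efficiency, matches the two alternatives in White’s quoted definition.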

Lenski’s approach focuses on information. The more information and knowledge a given society has (especially knowledge allowing the shaping of the natural environment), the more advanced it is. He identifies four stages of human development, based on advances in the history of communication. In the first stage, information is passed by genes. In the second, when humans gain sentience, they can learn and pass information on through experience. In the third, humans start using signs and develop logic. In the fourth, they can create symbols and develop language and writing. Advancements in communications technology translate into advancements in the economic system and political system, the distribution of wealth, social inequality and other spheres of social life. He also differentiates societies based on their level of technology, communication and economy.

In economics, productivity is a measure of technological progress. Productivity increases when fewer inputs (labor, energy, materials or land) are used in the production of a unit of output.[2] Another indicator of technological progress is the development of new products and services, which is necessary to offset the unemployment that would otherwise result as labor inputs are reduced. In developed countries productivity growth has been slowing since the late 1970s; however, productivity growth was higher in some economic sectors, such as manufacturing.[3] For example, employment in manufacturing in the United States declined from over 30% in the 1940s to just over 10% 70 years later. Similar changes occurred in other developed countries. This stage is referred to as post-industrial.
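The productivity measure described above can be made concrete with a small sketch; the factory figures below are hypothetical.

```python
def productivity(output_units: float, input_units: float) -> float:
    """Units of output per unit of combined input (labor, energy, materials, land)."""
    return output_units / input_units

# Hypothetical factory: producing the same output from fewer inputs
# is what registers as technological progress in this measure.
before = productivity(output_units=100, input_units=50)  # 2.0 units per input
after = productivity(output_units=100, input_units=40)   # 2.5 units per input
growth = (after - before) / before
print(f"productivity growth: {growth:.0%}")  # productivity growth: 25%
```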

In the late 1970s, sociologists and anthropologists like Alvin Toffler (author of Future Shock), Daniel Bell and John Naisbitt advanced theories of post-industrial societies, arguing that the current era of industrial society is coming to an end and that services and information are becoming more important than industry and goods. Some extreme visions of the post-industrial society, especially in fiction, are strikingly similar to visions of near- and post-Singularity societies.

The following is a summary of the history of technology by time period and geography:

During most of the Paleolithic – the bulk of the Stone Age – all humans had a lifestyle which involved limited tools and few permanent settlements. The first major technologies were tied to survival, hunting, and food preparation. Stone tools and weapons, fire, and clothing were technological developments of major importance during this period.

Human ancestors have been using stone and other tools since long before the emergence of Homo sapiens approximately 200,000 years ago.[4] The earliest methods of stone tool making, known as the Oldowan “industry”, date back to at least 2.3 million years ago,[5] with the earliest direct evidence of tool usage found in Ethiopia within the Great Rift Valley, dating back to 2.5 million years ago.[6] This era of stone tool use is called the Paleolithic, or “Old stone age”, and spans all of human history up to the development of agriculture approximately 12,000 years ago.

To make a stone tool, a “core” of hard stone with specific flaking properties (such as flint) was struck with a hammerstone. This flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers.[7] These tools greatly aided the early humans in their hunter-gatherer lifestyle to perform a variety of tasks including butchering carcasses (and breaking bones to get at the marrow); chopping wood; cracking open nuts; skinning an animal for its hide; and even forming other tools out of softer materials such as bone and wood.[8]

The earliest stone tools were crude, being little more than a fractured rock. In the Acheulian era, beginning approximately 1.65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. This early Stone Age is described as Epipaleolithic or Mesolithic; the former term is generally used for areas with limited glacial impact.

The Middle Paleolithic, approximately 300,000 years ago, saw the introduction of the prepared-core technique, where multiple blades could be rapidly formed from a single core stone.[7] The Upper Paleolithic, beginning approximately 40,000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely.[9]

The later Stone Age, during which the rudiments of agricultural technology were developed, is called the Neolithic period. During this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunnelling underground, the first steps in mining technology. The polished axes were used for forest clearance and the establishment of crop farming, and were so effective as to remain in use when bronze and iron appeared.

Stone Age cultures developed music, and engaged in organized warfare. Stone Age humans developed ocean-worthy outrigger canoe technology, leading to migration across the Malay archipelago, across the Indian Ocean to Madagascar and also across the Pacific Ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation.

Although Paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. Such evidence includes ancient tools,[10] cave paintings, and other prehistoric art, such as the Venus of Willendorf. Human remains also provide direct evidence, both through the examination of bones and the study of mummies. Scientists and historians have been able to draw significant inferences about the lifestyle and culture of various prehistoric peoples, and especially their technology.

The Stone Age developed into the Bronze Age after the Neolithic Revolution. The Neolithic Revolution involved radical changes in agricultural technology which included development of agriculture, animal domestication, and the adoption of permanent settlements. These combined factors made possible the development of metal smelting, with copper and later bronze, an alloy of tin and copper, being the materials of choice, although polished stone tools continued to be used for a considerable time owing to their abundance compared with the less common metals (especially tin).

This technological trend apparently began in the Fertile Crescent, and spread outward over time. These developments were not, and still are not, universal. The three-age system does not accurately describe the technology history of groups outside of Eurasia, and does not apply at all in the case of some isolated populations, such as the Spinifex People, the Sentinelese, and various Amazonian tribes, which still make use of Stone Age technology, and have not developed agricultural or metal technology.

The Iron Age involved the adoption of iron-smelting technology. Iron generally replaced bronze and made it possible to produce tools which were stronger, lighter and cheaper than bronze equivalents. In many Eurasian cultures, the Iron Age was the last major step before the development of written language, though again this was not universally the case. It was not yet possible to mass-produce steel, because the necessary high furnace temperatures could not be achieved, but steel could be produced by forging bloomery iron to reduce the carbon content in a controllable way. Iron ores were much more widespread than either copper or tin. In Europe, large hill forts were built either as a refuge in time of war or sometimes as permanent settlements. In some cases, existing forts from the Bronze Age were expanded and enlarged. The pace of land clearance using the more effective iron axes increased, providing more farmland to support the growing population.

It was the growth of the ancient civilizations which produced the greatest advances in technology and engineering, advances which stimulated other societies to adopt new ways of living and governance.

The Egyptians invented and used many simple machines, such as the ramp, to aid construction processes. The Indus Valley Civilization, situated in a resource-rich area, is notable for its early application of city planning and sanitation technologies. Ancient India was also at the forefront of seafaring technology: a panel found at Mohenjo-daro depicts a sailing craft. Indian construction and architecture, called ‘Vaastu Shastra’, suggests a thorough understanding of materials engineering, hydrology, and sanitation.

The peoples of Mesopotamia (Sumerians, Assyrians, and Babylonians) have been credited with the invention of the wheel, but this is no longer certain. They lived in cities from c. 4000 BC,[11] and developed a sophisticated architecture in mud-brick and stone,[12] including the use of the true arch. The walls of Babylon were so massive that they were described as a Wonder of the World. They developed extensive water systems: canals for transport and irrigation in the alluvial south, and catchment systems stretching for tens of kilometres in the hilly north. Their palaces had sophisticated drainage systems.[13]

Writing was invented in Mesopotamia, using the cuneiform script. Many records on clay tablets and stone inscriptions have survived. These civilizations were early adopters of bronze technologies, which they used for tools, weapons and monumental statuary. By 1200 BC they could cast objects 5 m long in a single piece. The Assyrian King Sennacherib (704–681 BC) claims to have invented automatic sluices and to have been the first to use water screws, of up to 30 tons in weight, which were cast using two-part clay moulds rather than by the ‘lost wax’ process.[13] The Jerwan Aqueduct (c. 688 BC) is made with stone arches and lined with waterproof concrete.[14]

The Babylonian astronomical diaries spanned 800 years. They enabled meticulous astronomers to plot the motions of the planets and to predict eclipses.[15]

The Chinese made many first-known discoveries and developments. Major technological contributions from China include early seismological detectors, matches, paper, sliding calipers, the double-action piston pump, cast iron, the iron plough, the multi-tube seed drill, the wheelbarrow, the suspension bridge, the parachute, natural gas as fuel, the compass, the raised-relief map, the propeller, the crossbow, the South Pointing Chariot and gunpowder.

Other Chinese discoveries and inventions from the medieval period include block printing, movable type printing, phosphorescent paint, the endless power chain drive and the clock escapement mechanism. The solid-fuel rocket was invented in China about 1150, nearly 200 years after the invention of gunpowder (which acted as the rocket’s fuel). Decades before the West’s age of exploration, the Chinese emperors of the Ming Dynasty also sent large fleets on maritime voyages, some reaching Africa.

Greek and Hellenistic engineers were responsible for myriad inventions and improvements to existing technology. The Hellenistic period in particular saw a sharp increase in technological advancement, fostered by a climate of openness to new ideas, the blossoming of a mechanistic philosophy, and the establishment of the Library of Alexandria and its close association with the adjacent museion. In contrast to the typically anonymous inventors of earlier ages, ingenious minds such as Archimedes, Philo of Byzantium, Heron, Ctesibius, and Archytas remain known by name to posterity.

Ancient Greek innovations were particularly pronounced in mechanical technology, including the ground-breaking invention of the watermill, which constituted the first human-devised motive force not to rely on muscle power (besides the sail). Apart from their pioneering use of waterpower, Greek inventors were also the first to experiment with wind power (see Heron’s windwheel) and even created the earliest steam engine (the aeolipile), opening up entirely new possibilities in harnessing natural forces whose full potential would not be exploited until the Industrial Revolution. The newly devised right-angled gear and screw would become particularly important to the operation of mechanical devices; this marked the beginning of the age of mechanical devices.

Ancient agriculture, the primary mode of production and subsistence in any period prior to the modern age, and its irrigation methods were considerably advanced by the invention and widespread application of a number of previously unknown water-lifting devices, such as the vertical water-wheel, the compartmented wheel, the water turbine, Archimedes’ screw, the bucket-chain and pot-garland, the force pump, the suction pump, the double-action piston pump and quite possibly the chain pump.[16]

In music, the water organ, invented by Ctesibius and subsequently improved, constituted the earliest instance of a keyboard instrument. In time-keeping, the introduction of the inflow clepsydra and its mechanization by the dial and pointer, the application of a feedback system and the escapement mechanism far superseded the earlier outflow clepsydra.

The famous Antikythera mechanism, a kind of analog computer working with a differential gear, and the astrolabe both show great refinement in astronomical science.

Greek engineers were also the first to devise automata such as vending machines, suspended ink pots, automatic washstands and doors, primarily as toys, which however featured many new useful mechanisms such as the cam and gimbals.

In other fields, ancient Greek inventions include the catapult and the gastraphetes crossbow in warfare, hollow bronze-casting in metallurgy, the dioptra for surveying, and, in infrastructure, the lighthouse, central heating, the tunnel excavated from both ends by scientific calculations, the ship trackway, the dry dock and plumbing. In horizontal and vertical transport, great progress resulted from the invention of the crane, the winch, the wheelbarrow and the odometer.

Further newly created techniques and items were spiral staircases, the chain drive, sliding calipers and showers.

The Romans developed an intensive and sophisticated agriculture, expanded upon existing iron working technology, created laws providing for individual ownership, advanced stone masonry technology, advanced road-building (exceeded only in the 19th century), military engineering, civil engineering, spinning and weaving and several different machines like the Gallic reaper that helped to increase productivity in many sectors of the Roman economy. Roman engineers were the first to build monumental arches, amphitheatres, aqueducts, public baths, true arch bridges, harbours, reservoirs and dams, vaults and domes on a very large scale across their Empire. Notable Roman inventions include the book (Codex), glass blowing and concrete. Because Rome was located on a volcanic peninsula, with sand which contained suitable crystalline grains, the concrete which the Romans formulated was especially durable. Some of their buildings have lasted 2000 years, to the present day.

The engineering skills of the Inca and the Maya were great, even by today’s standards. An example is their stonework, with pieces weighing upwards of one ton placed together so tightly that not even a blade can fit between the cracks. Inca villages used irrigation canals and drainage systems, making agriculture very efficient. While some claim that the Incas were the first to invent hydroponics, their agricultural technology, while advanced, was still soil-based. Though the Maya civilization had no metallurgy or wheel technology, it developed complex writing and astronomical systems, and created sculptural works in stone and flint. Like the Inca, the Maya also had command of fairly advanced agricultural and construction technology. The main contribution of Aztec rule was a system of communications between the conquered cities. In Mesoamerica, without draft animals for transport (nor, as a result, wheeled vehicles), the roads were designed for travel on foot, just as in the Inca and Maya civilizations.

As earlier empires had done, the Muslim caliphates united in trade large areas that had previously traded little. The conquered sometimes paid lower taxes than in their earlier independence, and ideas spread even more easily than goods. Peace was more frequent than it had been. These conditions fostered improvements in agriculture and other technology as well as in sciences which largely adapted from earlier Greek, Roman and Persian empires, with improvements.

European technology in the Middle Ages may be best described as a symbiosis of traditio et innovatio. While medieval technology has long been depicted as a step backwards in the evolution of Western technology, sometimes willfully so by modern authors intent on denouncing the church as antagonistic to scientific progress (see e.g. Myth of the Flat Earth), a generation of medievalists around the American historian of science Lynn White stressed from the 1940s onwards the innovative character of many medieval techniques. Genuine medieval contributions include, for example, mechanical clocks, spectacles and vertical windmills. Medieval ingenuity was also displayed in the invention of seemingly inconspicuous items like the watermark or the functional button. In navigation, the foundation for the subsequent age of exploration was laid by the introduction of pintle-and-gudgeon rudders, lateen sails, the dry compass, the horseshoe and the astrolabe.

Significant advances were also made in military technology with the development of plate armour, steel crossbows, counterweight trebuchets and cannon. The Middle Ages are perhaps best known for their architectural heritage: While the invention of the rib vault and pointed arch gave rise to the high rising Gothic style, the ubiquitous medieval fortifications gave the era the almost proverbial title of the ‘age of castles’.

Papermaking, a 2nd-century Chinese technology, was carried to the Middle East when a group of Chinese papermakers were captured in the 8th century.[17] Papermaking technology was spread to Europe by the Umayyad conquest of Hispania.[18] A paper mill was established in Sicily in the 12th century. In Europe the fiber used to make pulp for paper was obtained from linen and cotton rags. Lynn White credited the spinning wheel with increasing the supply of rags, which led to cheap paper, which was in turn a factor in the development of printing.[19]

The era is marked by profound technical advancements like linear perspective, double-shell domes and bastion fortresses. Notebooks of Renaissance artist-engineers such as Taccola and Leonardo da Vinci give a deep insight into the mechanical technology then known and applied. Architects and engineers were inspired by the structures of Ancient Rome, and men like Brunelleschi created the large dome of Florence Cathedral as a result. He was awarded one of the first patents ever issued to protect an ingenious crane he designed to raise the large masonry stones to the top of the structure. Military technology developed rapidly with the widespread use of the crossbow and ever more powerful artillery, as the city-states of Italy were usually in conflict with one another. Powerful families like the Medici were strong patrons of the arts and sciences. Renaissance science spawned the Scientific Revolution; science and technology began a cycle of mutual advancement.

The invention of the movable cast-metal-type printing press (c. 1441), whose pressing mechanism was adapted from an olive screw press, led to a tremendous increase in the number of books and the number of titles published.

An improved sailing ship, the nau or carrack, enabled the Age of Exploration and the European colonization of the Americas, epitomized by Francis Bacon’s New Atlantis. Pioneers like Vasco da Gama, Cabral, Magellan and Christopher Columbus explored the world in search of new trade routes for their goods and contacts with Africa, India and China, to shorten the journey compared with traditional routes overland. They produced new maps and charts which enabled following mariners to explore further with greater confidence. Navigation was generally difficult, however, owing to the problem of longitude and the absence of accurate chronometers. European powers rediscovered the idea of the civil code, lost since the time of the Ancient Greeks.

The British Industrial Revolution is characterized by developments in the areas of textile manufacturing, mining, metallurgy and transport driven by the development of the steam engine. Above all else, the revolution was driven by cheap energy in the form of coal, produced in ever-increasing amounts from the abundant resources of Britain. Coal converted to coke gave the blast furnace and cast iron in much larger amounts than before, and a range of structures could be created, such as The Iron Bridge. Cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. The steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. The development of the high-pressure steam engine made locomotives possible, and a transport revolution followed.[20]

The 19th century saw astonishing developments in transportation, construction, manufacturing and communication technologies originating in Europe. The steam engine, which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. The Liverpool and Manchester Railway, the first purpose-built railway line, opened in 1830, Robert Stephenson’s Rocket being one of its first working locomotives. Telegraphy also developed into a practical technology in the 19th century, helping to run the railways safely.

Other technologies were explored for the first time, including the incandescent light bulb. The invention of the incandescent light bulb had a profound effect on the workplace, because factories could now have second- and third-shift workers. Manufacture of ships’ pulley blocks by all-metal machines at the Portsmouth Block Mills instigated the age of mass production. The manufacture of parts with machine tools began in the first decade of the century, notably through the work of Richard Roberts and Joseph Whitworth. The development of interchangeable parts through what is now called the American system of manufacturing began in the firearms industry at the U.S. federal arsenals in the early 19th century, and became widely used by the end of the century.

Shoe production was mechanized and sewing machines introduced around the middle of the 19th century. Mass production of sewing machines and agricultural machinery such as reapers occurred in the mid to late 19th century. Bicycles were mass-produced beginning in the 1880s.

Steam-powered factories became widespread, although the conversion from water power to steam occurred in England before it did in the U.S.

Steamships were eventually completely iron-clad, and played a role in the opening of Japan and China to trade with the West. The Second Industrial Revolution at the end of the 19th century saw rapid development of chemical, electrical, petroleum, and steel technologies connected with highly structured technology research.

The period from the last third of the 19th century until WW1 is sometimes referred to as the Second Industrial Revolution.

20th century technology developed rapidly. Broad teaching and implementation of the scientific method, and increased research spending contributed to the advancement of modern science and technology. New technology improved communication and transport, thus spreading technical understanding.

Mass production brought automobiles and other high-tech goods to masses of consumers. Military research and development sped advances including electronic computing and jet engines. Radio and telephony improved greatly and spread to larger populations of users, though near-universal access would not be possible until mobile phones became affordable to developing world residents in the late 2000s and early 2010s.

Energy and engine technology improvements included nuclear power, developed after the Manhattan project which heralded the new Atomic Age. Rocket development led to long range missiles and the first space age that lasted from the 1950s with the launch of Sputnik to the mid-1980s.

Electrification spread rapidly in the 20th century. At the beginning of the century electric power was for the most part only available to wealthy people in a few major cities such as New York, London, Paris, and Newcastle upon Tyne, but by the time the World Wide Web was invented in 1990 an estimated 62 percent of homes worldwide had electric power, including about a third of households in the rural developing world.[21]

Birth control also became widespread during the 20th century. Electron microscopes were very powerful by the late 1970s, and genetic theory and knowledge were expanding, leading to developments in genetic engineering.

The first “test tube baby”, Louise Brown, was born in 1978, which led to the first successful gestational surrogacy pregnancy in 1985 and, in 1991, the first pregnancy by ICSI, the implanting of a single sperm into an egg. Preimplantation genetic diagnosis was first performed in late 1989 and led to successful births in July 1990. These procedures have become relatively common and are changing the concept of what it means to be a parent.

The massive data analysis resources necessary for running transatlantic research programs such as the Human Genome Project and the Large Electron-Positron Collider led to a necessity for distributed communications, causing Internet protocols to be more widely adopted by researchers and also creating a justification for Tim Berners-Lee to create the World Wide Web.

Vaccination spread rapidly to the developing world from the 1980s onward due to many successful humanitarian initiatives, greatly reducing childhood mortality in many poor countries with limited medical resources.

The US National Academy of Engineering, by expert vote, established the following ranking of the most important technological developments of the 20th century:[22]

In the early 21st century research is ongoing into quantum computers, gene therapy (introduced 1990), 3D printing (introduced 1981), nanotechnology (introduced 1985), bioengineering/biotechnology, nuclear technology, advanced materials (e.g., graphene), the scramjet and drones (along with railguns and high-energy laser beams for military uses), superconductivity, the memristor, and green technologies such as alternative fuels (e.g., fuel cells, self-driving electric & plug-in hybrid cars), augmented reality devices and wearable electronics, artificial intelligence, and more efficient & powerful LEDs, solar cells, integrated circuits, wireless power devices, engines, and batteries.

Perhaps the greatest research tool built in the 21st century is the Large Hadron Collider, the largest single machine ever built. The understanding of particle physics is expected to expand with better instruments, including larger particle accelerators such as the LHC[23] and better neutrino detectors. Dark matter is sought via underground detectors, and observatories like LIGO have started to detect gravitational waves.

Genetic engineering technology continues to improve, and the importance of epigenetics on development and inheritance has also become increasingly recognized.[24]

New spaceflight technology and spacecraft are also being developed, like the Orion and Dragon. New, more capable space telescopes are being designed. The International Space Station was completed in the 2000s, and NASA and ESA plan a manned mission to Mars in the 2030s. The Variable Specific Impulse Magnetoplasma Rocket (VASIMR) is an electromagnetic thruster for spacecraft propulsion and was expected to be tested in 2015.

The first manned commercial spaceflight took place on June 21, 2004, when Mike Melvill crossed the boundary of space.


Human mitochondrial genetics – Wikipedia, the free encyclopedia

Posted: August 25, 2016 at 4:19 pm

Human mitochondrial genetics is the study of the genetics of human mitochondrial DNA (the DNA contained in human mitochondria). The human mitochondrial genome is the entirety of hereditary information contained in human mitochondria. Mitochondria are small structures in cells that generate energy for the cell to use, and are hence referred to as the “powerhouses” of the cell.

Mitochondrial DNA (mtDNA) is not transmitted through nuclear DNA (nDNA). In humans, as in most multicellular organisms, mitochondrial DNA is inherited only from the mother’s ovum. There are theories, however, that paternal mtDNA transmission in humans can occur under certain circumstances.[1]

Mitochondrial inheritance is therefore non-Mendelian, as Mendelian inheritance presumes that half the genetic material of a fertilized egg (zygote) derives from each parent.

Eighty percent of mitochondrial DNA codes for mitochondrial RNA, and therefore most mitochondrial DNA mutations lead to functional problems, which may be manifested as muscle disorders (myopathies).

Because they provide 30 molecules of ATP per glucose molecule in contrast to the 2 ATP molecules produced by glycolysis, mitochondria are essential to all higher organisms for sustaining life. The mitochondrial diseases are genetic disorders carried in mitochondrial DNA, or nuclear DNA coding for mitochondrial components. Slight problems with any one of the numerous enzymes used by the mitochondria can be devastating to the cell, and in turn, to the organism.

In humans, mitochondrial DNA (mtDNA) forms closed circular molecules that contain 16,569[2][3] DNA base pairs,[4] with each such molecule normally containing a full set of the mitochondrial genes. Each human mitochondrion contains, on average, approximately 5 such mtDNA molecules, with the quantity ranging between 1 and 15.[4] Each human cell contains approximately 100 mitochondria, giving a total number of mtDNA molecules per human cell of approximately 500.[4]
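As a quick sanity check, the copy-number figures above multiply out as follows; this is a back-of-the-envelope sketch using only the approximate averages quoted in the text, not measured values:

```python
# Approximate figures quoted in the text -- averages, not measurements.
MTDNA_LENGTH_BP = 16_569          # base pairs per circular mtDNA molecule
COPIES_PER_MITOCHONDRION = 5      # average mtDNA copies (range roughly 1-15)
MITOCHONDRIA_PER_CELL = 100       # approximate mitochondria per human cell

# ~5 copies x ~100 mitochondria gives the ~500 mtDNA molecules per cell
# cited above.
mtdna_per_cell = COPIES_PER_MITOCHONDRION * MITOCHONDRIA_PER_CELL
print(mtdna_per_cell)  # 500
```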

Because mitochondrial diseases (diseases due to malfunction of mitochondria) can be inherited both maternally and through chromosomal inheritance, the way in which they are passed on from generation to generation can vary greatly depending on the disease. Mitochondrial genetic mutations that occur in the nuclear DNA can occur in any of the chromosomes (depending on the species). Mutations inherited through the chromosomes can be autosomal dominant or recessive and can also be sex-linked dominant or recessive. Chromosomal inheritance follows normal Mendelian laws, despite the fact that the phenotype of the disease may be masked.

Because of the complex ways in which mitochondrial and nuclear DNA “communicate” and interact, even seemingly simple inheritance is hard to diagnose. A mutation in chromosomal DNA may change a protein that regulates (increases or decreases) the production of another certain protein in the mitochondria or the cytoplasm; this may lead to slight, if any, noticeable symptoms. On the other hand, some devastating mtDNA mutations are easy to diagnose because of their widespread damage to muscular, neural, and/or hepatic tissues (among other high-energy and metabolism-dependent tissues) and because they are present in the mother and all the offspring.

Mitochondrial genome mutations are passed on 100% of the time from mother to all her offspring. So, if a female has a mitochondrial trait, all of her offspring inherit it. However, if a male has a mitochondrial trait, none of his offspring inherit it. The number of affected mtDNA molecules inherited by a specific offspring can vary greatly, because mutant and wildtype mtDNA molecules segregate at random when the mitochondria of the oocyte are partitioned during cell division.

It is possible, even in twin births, for one baby to receive more than half mutant mtDNA molecules while the other twin may receive only a tiny fraction of mutant mtDNA molecules with respect to wildtype (depending on how the twins divide from each other and how many mutant mitochondria happen to be on each side of the division). In a few cases, some mitochondria or a mitochondrion from the sperm cell enters the oocyte but paternal mitochondria are actively decomposed.
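The random partitioning described above can be illustrated with a toy simulation. This is a deliberately simplified sketch, not a biological model: the pool size and the exactly even split between daughter cells are illustrative assumptions.

```python
import random

def segregate(n_mutant, n_wildtype, rng):
    """Randomly split a pool of mtDNA molecules between two daughter cells.

    Returns the number of mutant molecules each daughter receives.
    """
    pool = [True] * n_mutant + [False] * n_wildtype  # True = mutant molecule
    rng.shuffle(pool)
    half = len(pool) // 2
    return sum(pool[:half]), sum(pool[half:])

rng = random.Random(42)
# A heteroplasmic pool: 250 mutant and 250 wildtype molecules.
a, b = segregate(250, 250, rng)
print(a, b)  # the two daughters rarely receive exactly 125 each
```

Running the split repeatedly shows the mutant load drifting between lineages, which is the mechanism behind the unequal inheritance described for twins.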

The genes of the human mitochondrial genome fall into three classes: protein-coding genes, rRNA genes, and tRNA genes.

It was originally believed, incorrectly, that the mitochondrial genome contained only 13 protein-coding genes, all of them encoding proteins of the electron transport chain. However, in 2001 a 14th biologically active protein, humanin, was discovered; it is encoded by the mitochondrial gene MT-RNR2, which also encodes part of the mitochondrial ribosome (made of RNA).

Unlike the other proteins, humanin does not remain in the mitochondria, and interacts with the rest of the cell and cellular receptors. Humanin can protect brain cells by inhibiting apoptosis. Despite its name, versions of humanin also exist in other animals, such as rattin in rats.

Mitochondrial rRNA is encoded by MT-RNR1 (12S) and MT-RNR2 (16S).

Twenty-two further mitochondrial genes encode tRNAs.

In humans, the heavy strand of mtDNA carries 28 genes and the light strand carries only 9 genes.[5] Eight of the 9 genes on the light strand code for mitochondrial tRNA molecules. Human mtDNA consists of 16,569 nucleotide pairs. The entire molecule is regulated by a single regulatory region, which contains the origins of replication of both the heavy and light strands. The entire human mitochondrial DNA molecule has been mapped.[1][2]
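The gene counts quoted in this and the preceding paragraphs are mutually consistent, as a quick tally shows:

```python
# Tally of the human mitochondrial gene classes described in the text.
protein_coding = 13   # electron-transport-chain proteins (humanin overlaps MT-RNR2)
rrna_genes = 2        # MT-RNR1 (12S) and MT-RNR2 (16S)
trna_genes = 22       # mitochondrial tRNA genes

total_genes = protein_coding + rrna_genes + trna_genes
print(total_genes)            # 37
print(total_genes == 28 + 9)  # True: matches the per-strand counts
```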

The genetic code is, for the most part, universal, with few exceptions: mitochondrial genetics includes some of these. For most organisms the “stop codons” are “UAA”, “UAG”, and “UGA”. In vertebrate mitochondria “AGA” and “AGG” are also stop codons, but not “UGA”, which codes for tryptophan instead. “AUA” codes for isoleucine in most organisms but for methionine in vertebrate mitochondrial mRNA.
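The codon deviations just listed can be tabulated directly. The dictionaries below cover only the four differences named in the text; every codon not listed follows the standard code:

```python
# Codon meanings under the standard code vs. the vertebrate mitochondrial
# code, restricted to the four deviations described in the text.
STANDARD = {"UGA": "Stop", "AGA": "Arg", "AGG": "Arg", "AUA": "Ile"}
VERTEBRATE_MITO = {"UGA": "Trp", "AGA": "Stop", "AGG": "Stop", "AUA": "Met"}

for codon in ("UGA", "AGA", "AGG", "AUA"):
    print(f"{codon}: standard={STANDARD[codon]}, mito={VERTEBRATE_MITO[codon]}")
```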

There are many other variations among the codes used by other mitochondrial m/tRNAs, which happened not to be harmful to their organisms and which can be used as a tool (along with other mutations among the mtDNA/RNA of different species) to determine the relative proximity of common ancestry of related species: the more closely related two species are, the more mtDNA/RNA mutations their mitochondrial genomes will share.

Using these techniques, it is estimated that the first mitochondria arose around 1.5 billion years ago. A generally accepted hypothesis is that mitochondria originated as an aerobic prokaryote in a symbiotic relationship within an anaerobic eukaryote.

Mitochondrial replication is controlled by nuclear genes and is specifically suited to make as many mitochondria as that particular cell needs at the time.

Mitochondrial transcription in humans is initiated from three promoters, H1, H2, and L (heavy strand 1, heavy strand 2, and light strand promoters). The H2 promoter transcribes almost the entire heavy strand and the L promoter transcribes the entire light strand. The H1 promoter causes the transcription of the two mitochondrial rRNA molecules.[6]

When transcription takes place on the heavy strand a polycistronic transcript is created. The light strand produces either small transcripts, which can be used as primers, or one long transcript. The production of primers occurs by processing of light strand transcripts with the Mitochondrial RNase MRP (Mitochondrial RNA Processing). The requirement of transcription to produce primers links the process of transcription to mtDNA replication. Full length transcripts are cut into functional tRNA, rRNA, and mRNA molecules.[citation needed]

The process of transcription initiation in mitochondria involves three types of proteins: the mitochondrial RNA polymerase (POLRMT), mitochondrial transcription factor A (TFAM), and mitochondrial transcription factors B1 and B2 (TFB1M, TFB2M). POLRMT, TFAM, and TFB1M or TFB2M assemble at the mitochondrial promoters and begin transcription. The actual molecular events that are involved in initiation are unknown, but these factors make up the basal transcription machinery and have been shown to function in vitro.[citation needed]

Mitochondrial translation is still not very well understood. In vitro translations have still not been successful, probably due to the difficulty of isolating sufficient mt mRNA, functional mt rRNA, and possibly because of the complicated changes that the mRNA undergoes before it is translated.[citation needed]

The Mitochondrial DNA Polymerase (Pol gamma, encoded by the POLG gene) is used in the copying of mtDNA during replication. Because the two (heavy and light) strands on the circular mtDNA molecule have different origins of replication, it replicates in a D-loop mode. One strand begins to replicate first, displacing the other strand. This continues until replication reaches the origin of replication on the other strand, at which point the other strand begins replicating in the opposite direction. This results in two new mtDNA molecules. Each mitochondrion has several copies of the mtDNA molecule, and the number of mtDNA molecules is a limiting factor in mitochondrial fission. After the mitochondrion has enough mtDNA, membrane area, and membrane proteins, it can undergo fission (a process very similar to binary fission in bacteria) to become two mitochondria. Evidence suggests that mitochondria can also undergo fusion and exchange genetic material among each other (in a form of crossover). Mitochondria sometimes form large matrices in which fusion, fission, and protein exchanges are constantly occurring; through such fusion, mtDNA can be shared among mitochondria.[citation needed]

Mitochondrial DNA is susceptible to damage from free oxygen radicals produced by mistakes that occur during the production of ATP through the electron transport chain. These mistakes can be caused by genetic disorders, cancer, and temperature variations. These radicals can damage or alter mtDNA molecules, making it hard for mitochondrial polymerase to replicate them. Both cases can lead to deletions, rearrangements, and other mutations. Recent evidence has suggested that mitochondria have enzymes that proofread mtDNA and fix mutations that may occur due to free radicals. It is believed that a DNA recombinase found in mammalian cells is also involved in a repairing recombination process. Deletions and mutations due to free radicals have been associated with the aging process. It is believed that radicals cause mutations which lead to mutant proteins, which in turn lead to more radicals. This process takes many years and is associated with some aging processes involved in oxygen-dependent tissues such as brain, heart, muscle, and kidney. Auto-enhancing processes such as these are possible causes of degenerative diseases including Parkinson’s, Alzheimer’s, and coronary artery disease.[citation needed]

Because mitochondrial growth and fission are mediated by the nuclear DNA, mutations in nuclear DNA can have a wide array of effects on mtDNA replication. Despite the fact that the loci for some of these mutations have been found on human chromosomes, specific genes and proteins involved have not yet been isolated. Mitochondria need a certain protein to undergo fission. If this protein (generated by the nucleus) is not present, the mitochondria grow but they do not divide. This leads to giant, inefficient mitochondria. Mistakes in chromosomal genes or their products can also affect mitochondrial replication more directly by inhibiting mitochondrial polymerase and can even cause mutations in the mtDNA directly and indirectly. Indirect mutations are most often caused by radicals created by defective proteins made from nuclear DNA.[citation needed]

In total, the mitochondrion hosts about 3000 different types of proteins, but only about 13 of them are coded on the mitochondrial DNA. Most of the 3000 types of proteins are involved in a variety of processes other than ATP production, such as porphyrin synthesis. Only about 3% of them code for ATP production proteins. This means most of the genetic information coding for the protein makeup of mitochondria is in chromosomal DNA and is involved in processes other than ATP synthesis. This increases the chances that a mutation that will affect a mitochondrion will occur in chromosomal DNA, which is inherited in a Mendelian pattern. Another result is that a chromosomal mutation will affect a specific tissue due to its specific needs, whether those may be high energy requirements or a need for the catabolism or anabolism of a specific neurotransmitter or nucleic acid. Because several copies of the mitochondrial genome are carried by each mitochondrion (2-10 in humans), mitochondrial mutations can be inherited maternally by mtDNA mutations which are present in mitochondria inside the oocyte before fertilization, or (as stated above) through mutations in the chromosomes.[citation needed]

Mitochondrial diseases range in severity from asymptomatic to fatal, and are most commonly due to inherited rather than acquired mutations of mitochondrial DNA. A given mitochondrial mutation can cause various diseases depending on the severity of the problem in the mitochondria and the tissue the affected mitochondria are in. Conversely, several different mutations may present themselves as the same disease. This almost patient-specific characterization of mitochondrial diseases (see Personalized medicine) makes them very hard to accurately recognize, diagnose and trace. Some diseases are observable at or even before birth (many causing death) while others do not show themselves until late adulthood (late-onset disorders). This is because the number of mutant versus wildtype mitochondria varies between cells and tissues, and is continuously changing. Because cells have multiple mitochondria, different mitochondria in the same cell can have different variations of the mtDNA. This condition is referred to as heteroplasmy. When a certain tissue reaches a certain ratio of mutant versus wildtype mitochondria, a disease will present itself. The ratio varies from person to person and tissue to tissue (depending on its specific energy, oxygen, and metabolism requirements, and the effects of the specific mutation). Mitochondrial diseases are very numerous and varied. Apart from diseases caused by abnormalities in mitochondrial DNA, many diseases are suspected to be associated in part with mitochondrial dysfunction, including diabetes mellitus, some forms of cancer and cardiovascular disease, lactic acidosis, specific forms of myopathy, osteoporosis, Alzheimer’s disease, Parkinson’s disease, stroke, and male infertility; mitochondrial dysfunction is also believed to play a role in the aging process.[citation needed]
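The tissue-specific threshold behaviour described above can be sketched as a simple function. The threshold values here are hypothetical placeholders, since, as the text notes, real thresholds vary by mutation and by a tissue's energy and metabolism requirements:

```python
def mutant_fraction(n_mutant, n_wildtype):
    """Fraction of mtDNA molecules in a tissue carrying the mutation."""
    return n_mutant / (n_mutant + n_wildtype)

def presents_disease(n_mutant, n_wildtype, threshold):
    """Heteroplasmy model: disease manifests once the mutant fraction
    in a tissue meets or exceeds that tissue's threshold."""
    return mutant_fraction(n_mutant, n_wildtype) >= threshold

# Hypothetical tissue thresholds -- illustrative only.
thresholds = {"muscle": 0.60, "brain": 0.70}
print(presents_disease(650, 350, thresholds["muscle"]))  # True  (65% >= 60%)
print(presents_disease(650, 350, thresholds["brain"]))   # False (65% < 70%)
```

The same mutant load can therefore produce disease in one tissue and not another, which is one reason the text calls diagnosis almost patient-specific.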

Human mtDNA can also be used to help identify individuals.[7] Forensic laboratories occasionally use mtDNA comparison to identify human remains, and especially to identify older unidentified skeletal remains. Although unlike nuclear DNA, mtDNA is not specific to one individual, it can be used in combination with other evidence (anthropological evidence, circumstantial evidence, and the like) to establish identification. mtDNA is also used to exclude possible matches between missing persons and unidentified remains.[8] Many researchers believe that mtDNA is better suited to identification of older skeletal remains than nuclear DNA because the greater number of copies of mtDNA per cell increases the chance of obtaining a useful sample, and because a match with a living relative is possible even if numerous maternal generations separate the two. American outlaw Jesse James’s remains were identified using a comparison between mtDNA extracted from his remains and the mtDNA of the son of the female-line great-granddaughter of his sister.[9] Similarly, the remains of Alexandra Feodorovna (Alix of Hesse), last Empress of Russia, and her children were identified by comparison of their mitochondrial DNA with that of Prince Philip, Duke of Edinburgh, whose maternal grandmother was Alexandra’s sister Victoria of Hesse.[10] Likewise, to identify the remains of Emperor Nicholas II, his mitochondrial DNA was compared with that of James Carnegie, 3rd Duke of Fife, whose maternal great-grandmother, Alexandra of Denmark (Queen Alexandra), was a sister of Nicholas II’s mother, Dagmar of Denmark (Empress Maria Feodorovna).[11]


What Explains the Collapse of the USSR?

Posted: August 23, 2016 at 9:34 am

A Critical Analysis of the Different Approaches Explaining the Collapse of the Soviet Union: Was the Nature of the Regime’s Collapse Ontological, Conjunctural or Decisional?


This investigation seeks to explore the different approaches behind the demise of the Soviet Union. It will draw from Richard Sakwa’s three approaches with regard to the collapse of the Soviet Union, namely the ontological, decisional and conjunctural varieties. This dissertation will ultimately demonstrate the necessity of each of these if a complete understanding of the demise is to be acquired.

This dissertation will be split into three different areas of scrutiny with each analysing a different approach. The first chapter will question what elements of the collapse were ontological and will consist of delving into long-term socio-economic and political factors in order to grasp what structural flaws hindered the Soviet Union from its inception. Following this will be an analysis of the decisional approach, this time focusing on short-term factors and how the decisions of Gorbachev contributed to the fall. Finally, this investigation will examine the conjunctural approach, which will provide valuable insight as to how short-term political contingent factors played a leading role in the eventual ruin of the Soviet Union.


On December 26th, 1991, the Soviet Union was officially dissolved into fifteen independent republics after six years of political-economic crises. This unanticipated collapse of a super-power that had once shaped the foreign policies of East and West took the international community off-guard. Since the collapse, scholars have attempted to provide insight into the reasons behind the demise of the Soviet state. In 1998 Richard Sakwa published Soviet Politics in Perspective, which categorised the three main approaches adopted by scholars in the study of the collapse of the Union of Soviet Socialist Republics (USSR). These were the ontological, decisional and conjunctural approaches and will be the foci of this investigation. Ultimately, my aim is to prove that none of these approaches can thoroughly explain the collapse when viewed individually.

Instead, I will advance that all three are vital in order to acquire a thorough understanding of the Soviet collapse. To prove this, I will analyse how each approach covers different angles of the fall, but before being able to answer this question of validity, I must begin by arranging each scholar I scrutinize into Sakwa’s three approaches. In my research I have discovered that the vast majority of scholars have no notion of such schools of thought, which increases the possibility of bias in secondary sources and makes my investigation all the more challenging. Once a solid theoretical basis is set I will then move on to investigating the legitimacy of each approach when considering historical events.

Research Questions

To provide the basis for my hypothesis, my analysis will be subdivided into three research questions.

The first will address what ontological traits existed in the collapse of the Soviet Union. Following this, the second question will mirror the first by attempting to make sense of the decisional aspects of the fall. Finally, my attention will turn to answering in what way the collapse was conjunctural in nature. Although these questions may seem basic, it is important not to fall prey to appearances and to bear in mind the complexity of each approach. Moreover, the arrangement and formulation of the research questions was carried out in this manner to provide an unbiased evaluation of each approach, eventually displaying the necessity of each in the explanation of the fall.


The fall of the Soviet Union is a subject that has attracted vast amounts of literature from scholars all over the world. Although this presents a challenge when it comes to working through such a large topic, it also helps the researcher elaborate solid explanations behind historical events. Consequently, I will mainly employ qualitative data, supplemented by quantitative evidence, consisting of both primary and secondary sources. The quantitative information will draw from various economists such as Lane, Shaffer and Dyker; it will mainly be used to ensure that qualitative explanations are properly backed by statistical data regarding socio-economic factors.

The majority of the qualitative data will be drawn from secondary sources written by contemporary scholars. A few primary sources, such as official documents, will also be analysed to provide further depth to the analysis. Due to the vast amount of information concerning my topic, it is important to focus on literature aiding the question, as one can easily deviate from the question regarding the three approaches. The other main challenge will consist in avoiding being drawn into deep analysis of the separate independence movements of the Soviet republics.

Theoretical Framework

Before being able to embark on a complete literature review, it is important to understand the theoretical framework that accompanies the analysis, namely Sakwa’s three approaches. I will then be able to show that all three of these approaches are necessary in explaining the downfall of the Soviet Union.

When looking at the different approaches elaborated by Sakwa, each advances a unique hypothesis as to why the Soviet Union collapsed. Although all three approaches are different in nature, some overlap or inter-connect at times. To begin with, the ontological approach argues that the Soviet Union dissolved “because of certain inherent shortcomings of the system […] including […] structural flaws.”[1] This approach advances the premise that the collapse of the Soviet Union lies in long-term systemic factors that were present since the conception of the system. This view is countered by the conjunctural approach, which suggests

that the system did have an evolutionary potential that might have allowed it in time to adapt to changing economic and political circumstances. […] The collapse of the system [is] ascribed to contingent factors, including the strength of internal party opposition [and] the alleged opportunism of the Russian leadership under Boris Yeltsin.[2]

The final approach theorised by Sakwa is the decisional one, and advances the belief that

particular decisions at particular times precipitated the collapse, but that these political choices were made in the context of a system that could only be made viable through transformation of social, economic and political relations. This transformation could have been a long-term gradual process, but required a genuine understanding of the needs of the country.[3]

Although the decisional and conjunctural approaches are different in scope, they nevertheless both focus on the short-term factors of collapse, which at times may cause confusion. As both approaches analyse the same time frame, certain factors behind the collapse may logically be attributed to both. A relevant example may be seen when a contingent factor (factions within the Communist Party) affects the decisions of a leader (Gorbachev). This leads to ambiguities, as it is impossible to know whether certain outcomes should be explained in a conjunctural or decisional light. This type of ambiguity can also cast doubt on certain conjunctural phenomena with historical antecedents. In these cases it becomes unclear whether these phenomena are ontological (structural), as they have existed since the system’s conception, or conjunctural, as they present contingent obstacles to progress.

In most cases, when ambiguities arise, scholars may adopt a rhetoric that is inherently ontological, decisional or conjunctural and then base most of their judgements and analysis around it. Kalashnikov supplements this, stating that studies “tend to opt for one factor as being most important in bringing about collapse […] [and] do not engage other standpoints.”[4] This is a trait I have noticed in certain works written by scholars more inclined to analyse events through a certain approach, such as Kotkin with the ontological approach, Goldman with the decisional one, or Steele with the conjunctural approach. In my analysis, I will scrutinise the fall through the theoretical lens of each approach, and from this will prove the indispensability of each in the explanation of the downfall. The fact that certain approaches overlap is testament to the necessity of this theoretical categorisation.

Literature Review

The first approach to be investigated will be the ontological one: a school of thought espoused by scholars who focus on systemic long-term factors of collapse. Kotkin is one such author, providing valuable insight into the ontological dissolution of Soviet ideology and society, which will figure as the first element of analysis in that chapter. He advances the theory that the Soviet Union was condemned from an early age due to its ideological duty of providing a better alternative to capitalism. From its inception, the Soviet Union “had claimed to be an experiment in socialism […]. If socialism was not superior to capitalism, its existence could not be justified.”[5] Kotkin elaborates that ideological credibility crumbled from the beginning as the USSR failed to fulfil expectations during Stalin’s post-war leadership. Kotkin goes on to couple ideological deterioration with emphasis on the societal non-reforming tendency that flourished after the 1921 ban on factions, setting a precedent where reform was ironically seen as a form of anti-revolutionary dissidence.

Kenez and Sakwa also supplement the above argument with insight on the suppression of critical political thinking, notably in Soviet satellite states, showing that any possibility of reforming towards a more viable Communist rhetoric was stifled early on and continuously suppressed throughout the 1950s and 60s. This characteristic of non-reform can be seen as an ontological centre-point, as after the brutal repression seen in Hungary (1956) and Czechoslovakia (1968), no feedback mechanism existed wherein the leadership could comprehend the social, political and economic problems that were gradually amassing. “The invasion of 1968 represented the destruction of the sources of renewal within the Soviet system itself.”[6] Consequently, this left the Kremlin in a state of relative ignorance vis-à-vis the reality of life in the Soviet Union. Adding to the explanation of the Soviet Union’s ontological demise, Sakwa links the tendency of non-reform to the overlapping of party and polity that occurred in the leadership structure of the USSR. “The CPSU was in effect a parallel administration, shadowing the official departments of state: a party-state emerged undermining the functional adaptability of both.”[7] Sakwa then develops that this led to the mis-modernisation of the command structure of the country and, coupled with non-reform, contributed to its demise. Furthermore, ontologically tending scholars also view the republican independence movements of the USSR as a factor destined to occur since the conception of the union.

The second section concerning the ontological approach analyses the economic factors of collapse. Here, Derbyshire, Kotkin and Remnick provide a quantitative and qualitative explanation of the failure of centralisation in the agricultural and industrial sectors. Derbyshire and Remnick also provide conclusive insight into ontological reasons for the failure of industrial and agricultural collectivization, which played a leading role in the overall demise of the Soviet Union.

Finally, in my third area of investigation, Remnick and Sakwa claim that the dissolution came about due to widespread discontent in individual republics regarding the exploitation of their natural resources, as well as Stalin’s detrimental policy of pitting different republics against each other.

Moscow had turned all of Central Asia into a vast cotton plantation […] [and in] the Baltic States, the official discovery of the secret protocols to the Nazi-Soviet pact was the key moment.[8]

Although I will explore how independence movements played a role in the dissolution, I will ensure the focus remains on the USSR as a whole, as it is easy to digress given the sheer amount of information on independence movements. That said, although evidence proves that certain factors of collapse were long-term ontological ones, other scholars, namely Goldman and Galeotti, go in another direction and accentuate that the key to understanding the downfall of the USSR lies in the analysis of short-term factors, as in the decisional approach.

Dissimilar to the ontological approach, within the decisional realm scholars more frequently ascribe the factors of the collapse to certain events or movements, which allows them minute precision in their explanations of the fall. Goldman is a full-fledged decisional scholar with the conviction that Gorbachev orchestrated the collapse through his lack of a comprehensive approach,[9] a view espousing Sakwa’s definition of the decisional approach. In order to allow for a comprehensive analysis, this chapter will start with an examination of Gorbachev’s economic reforms in chronological order, allowing the reader to be guided through the decisions that affected the collapse. Goldman will be the main literary pillar of this section, supplemented by Sakwa and Galeotti. Having accomplished this, it will be possible to investigate how economic failure, inter-linked with political decisions (Glasnost and Perestroika) outside of the Party, created an aura of social turmoil. Here, Galeotti and Goldman will look into the events and, more importantly, the decisions that discredited Gorbachev’s rule and created disillusion in Soviet society. My final section of the chapter will scrutinize the effects of Glasnost and Perestroika within the Communist Party, which will stand as a primordial step in light of the independence movements, seen as a by-product of Gorbachev’s policies. Due to the inter-linked nature of the political, social and economic spheres, it will be possible to see how policy sectors affected each other in the collapse of the Soviet Union.

Overall, this chapter will end with an analysis of how Gorbachev’s incoherence pushed certain republics onto the path of independence, which Goldman perceives as a major factor behind the fall.

In the chapter regarding the conjunctural approach, I will be looking into the key contingent factors that scholars believe are behind the fall of the Soviet Union. The first will be the conservatives of the Communist Party who obstructed the reform process since Brezhnev’s rule, meaning that up until the collapse, reform efforts had run headlong into the opposition of entrenched bureaucratic interests who resisted any threat to their power.[10] Due to the broadness of this topic I will draw from two scholars, namely Kelley and Remnick, for supplementary insight. Moving on, I will also investigate the inception of the reformist left, a term encapsulating those within and outside the party striving to bring democratic reform to the USSR. Here the main conjunctural scholar used will be Steele, who explains that Gorbachev’s hopes for this reformist left to support him against the Communist conservatives evaporated once Yeltsin took the lead and crossed the boundaries of “socialist pluralism” set by Gorbachev, a concept coined by the leader himself which implied that there should be “a wide exchange of views and organizations, provided they all accepted socialism.”[11] This brought about enormous pressure and sapped social support from Gorbachev at a time when he needed political backing. Once the political scene is evaluated through conjunctural evidence, I will divide my chapter chronologically, first exploring the 1989 radicalisation of the political movements with the significant arrival of Yeltsin as the major obstacle to Gorbachev’s reforms from the left. In this section I will be mainly citing Remnick due to his detailed accounts of events. Ultimately I will be attempting to vary my analysis between approach-specific scholars and more neutral ones who provide thorough accounts, such as Remnick and Sakwa.
The analysis will continue with insight into the 1990-1991 period of political turmoil and the effects it had on Gorbachev's reforms; I will be citing Galeotti, Remnick and Tedstrom, as they provide varying viewpoints regarding the political changes of the time. My chapter will then end with a scrutiny of Yeltsin's Democratic Russia and the August 1991 Coup, and how both of these independent action groups operated as mutual contingent factors in the dissolution of the Soviet Union.

Chapter One: Was the Collapse of the USSR Ontological in Nature?

When analysing the collapse of the USSR, it is undeniable that vital ontological problems took form during the early days of its foundation. Here I will analyse these flaws and demonstrate how the collapse occurred due to ontological reasons, hence proving the necessity of this approach. In order to provide a concrete answer I will begin by scrutinising how the erosion of the Communist ideology acted as a systemic flaw that put the Soviet Union's legitimacy into question. I will then analyse how a non-reformist tendency was created in society and also acted as an ontological flaw that would play a part in the fall. From there I will explore how ontological defects plagued the economic sector in the industrial and agricultural areas, leading the country to the brink of economic collapse. Finally I will analyse the independence movements, as certain scholars, especially Remnick and Kotkin, argue that these movements pushed towards ontological dissolution. It is imperative to recall that this chapter will analyse symptoms of the collapse that are of an ontological nature, namely long-term issues that weighed negatively on the longevity of the Soviet Union. As a result it is vital to bear in mind that the ontological factors to be analysed are usually seen as having progressively converged over the decades, provoking the cataclysmic collapse.

The Untimely Death of an Ideology

Since its early days, the Soviet Union was a political-economic experiment built to prove that the Communist-Socialist ideology could rival and even overtake Capitalism. It promoted itself as a superior model, and was thus condemned to surpass capitalism if it did not want to lose its legitimacy. However, during Stalin's tenure, the ideological legitimacy of the Soviet Union crumbled for two reasons: the first being the premier's own rule and the second being Capitalism's success, both of which ultimately played a part in its demise.

The early leaders of the Communist Party of the Soviet Union (CPSU) such as Lenin, Trotsky, Kamenev, Bukharin, Zinoviev and Stalin all had different views regarding how to attain socio-economic prosperity, but Stalin would silence these voices after the power struggle of 1921 to 1924. Following this period, which saw the death of Lenin, Stalin emerged as the supreme leader of the Soviet Union. With the exile of Trotsky and the isolation of Zinoviev, Kamenev and Bukharin from the party, no effective opposition was left to obstruct the arrival of Stalin's fledgling dictatorship. Stalin was thus able to go about appropriating the Communist ideology for himself; with his personality cult he became the sole curator of what was Communist or reactionary (anti-Communist). Subsequently, to protect his hold on power, he turned the Soviet Union away from Marxist Communist internationalism by introducing his doctrine of "Socialism in One Country" after Lenin's death in 1924.

Insisting that Soviet Russia could […] begin the building of socialism […] by its own efforts. […] [Thus treading on] Marx's view that socialism was an international socialist movement or nothing.[12]

As a result, the USSR under Stalin alienated the possibilities of ideological renewal with other Communist states and even went as far as to claim that "the interests of the Soviet Union were the interests of socialism."[13] Sakwa sees these actions as ones that locked the Soviet Union into a Stalinist mind-set early on and thus built the wrong ideological mechanisms, halting the development of Communist ideology as Marx had envisaged it. It is therefore fair to acknowledge, when looking at ontological reasons for collapse, that one of them was the Soviet Union being built upon an ambiguous ideological platform: it espoused elements of Communism but was severely tainted and handicapped by Stalinist rhetoric.

In addition to the debilitating effects Stalin's political manipulations had on the ideological foundations of the USSR, capitalism's successful reform dealt a supplementary blow to Soviet ideological credibility.

Instead of a final economic crisis anticipated by Stalin and others, Capitalism experienced an unprecedented boom […] all leading capitalist countries embraced the welfare state […] stabilising their social orders and challenging Socialism on its own turf.[14]

Adding to the changing nature of capitalism was the onset of de-colonisation during the 1960s, which took away more legitimacy with every new independence agreement. By the end of the 1960s, the metamorphosis of capitalism had very much undermined the Soviet Union's ideological raison d'être, as the differences between "capitalism in the Great Depression [which the USSR had moulded itself against] and capitalism in the post-war world were nothing short of earth shattering."[15] Here the ontological approach generally holds that Capitalism's transformation and the USSR's incoherent ideological foundations discredited the very political premises the Soviet state rested upon, and thus any social unrest leading to the collapse during Gorbachev's rule can be interpreted as a logical by-product of this. From this, it is possible to better understand how the crumbling of the legitimacy of the Communist ideology was a fundamental ontological factor behind the collapse of the USSR. Building on this, I will now look into how the shaping of society during Stalin's rule also played a role in the collapse through the formation of a non-reforming society.

The Foundations of a Non-Reforming Society

One defect that would remain etched in the Soviet political-economic mind-set was the ontological tendency for non-reform. This trait would plague the very infrastructure of the Soviet Union until its dying days. Such a debilitating characteristic emerged at the very inception of the Soviet Union with the Kronstadt Sailors' Uprising. This uprising occurred during the Tenth Party Congress in 1921 and would have severe repercussions for the Soviet Union's future, as "Congress delegates […] accepted a resolution that outlawed factions within the Party."[16] By stifling critical thinking and opposing views, this effectively cancelled out a major source of reform and acted as an ontological shortcoming for future Soviet political-economic progress. This non-reformist trait was reinforced during Stalin's rule by the constant pressure the Communist Party exerted on agricultural and industrial planners. Here, the party demanded "not careful planning […] but enthusiasm; the leaders considered it treason when economists pointed out irrationalities in their plans."[17] Planners were thus forced into a habit of drawing up unmanageable targets that stayed within the party's political dictate. This meant that "central planners established planning targets that could only be achieved at enormous human cost and sacrifice […] [and lacked an] effective feedback mechanism"[18] which would have provided insight into the flaws that existed in their plans. In the short run this would only hinder the economy, but in the long term it would lock the Soviet Union onto a trajectory where it could not reform itself in response to existing problems,[19] leading it to a practically obsolete technological state with a backwards economy by the time it collapsed.

Nevertheless, repression of critical thinking did not limit itself to the economic realm; it also occurred in the social sector, where calls for the reform of the Socialist ideology were mercilessly crushed in Hungary in 1956 and in Czechoslovakia in 1968. A link can be seen here with the previous section of this chapter regarding Stalin's hijacking of the Communist ideology: both of the social movements cited pushed for a shift away from Stalinist rhetoric towards an actual adoption of Marxist Socialism. In Czechoslovakia this social push came under the name of "Socialism with a Human Face" and wanted to "permit the dynamic development of socialist social relations, combine broad democracy with a scientific, highly qualified management, [and] strengthen the social order."[20] Although these were only Soviet satellite states, the fact that they were repressed showed that by the 1960s the Soviet Union's non-reforming characteristic had consolidated itself to the point that any divergence from the official party line in the economic or social sectors was seen as high treason. This leads us to the ambiguous area of Soviet polity and how it jeopardised the existence of the USSR when merged with ontological non-reform.

Polity is the term I use here because it remains remarkably unclear who essentially governed the USSR during its sixty-nine years of existence. It seems that both the CPSU and the Soviet government occupied the same position of authority, thus creating

a permanent crisis of governance. [Wherein] the party itself was never designed as an instrument of government and the formulation that "the party rules but the government governs" allowed endless overlapping jurisdictions.[21]

Adding to the confusion was the CPSU's role in society, defined by Article Six of the USSR's 1977 Constitution: "The leading and guiding force of the Soviet society and the nucleus of its political system, of all state organisations and public organisations, is the Communist Party of the Soviet Union."[22] A profound ambiguity thus surrounded the role of politics in the social realm. Accordingly, these two traits would create a profound ontological factor for collapse when merged with the non-reforming tendency of society: when a more efficient leadership mechanism was sought, it was impossible to identify how and which elements of the polity had to be changed.

It is here that an inter-linkage of approaches can be identified, as the polity's ontological inability to reform in accordance with Gorbachev's decisional re-shaping of society contributed to the demise of the USSR.

The one-party regime ultimately fell owing to its inability to respond to immense social changes that had taken place in Soviet society, ironically, social changes that the Party itself had set in motion.[23]

Because Soviet polity was ontologically ill-defined, when the time came to reform it, uncertainty over what was to be changed obstructed the reform process. From this analysis, it is possible to see how ontological weaknesses in the overlapping areas of politics and the social sector seriously hindered the Soviet Union. In the following section I will explore how ontological defects were of similar importance in the economic realm and were interwoven with the shortcomings already explained.

An Economy in Perpetual Crisis

When looking at the economic realm, a number of weaknesses took root in the early days of the Soviet Union; the first aspect of scrutiny will be the ontological failure of economic centralisation and its contribution to the fall. In both the agricultural and industrial sectors, the USSR was unable to progress towards economic prosperity due to its flawed centralised economy. Agriculturally, centralisation meant that peasants were compelled to fulfil farming quotas set by the ministry in Moscow on land that belonged solely to the state. This generated two problems: a lack of incentive among the farmers, and the inability of central authorities to cope with the myriad of different orders that had to be issued.

Central planners in Moscow seldom know in advance what needs to be done in the different regions of the country. Because of this […] sometimes as much as 40 to 50 per cent of some crops rot in the field or in the distribution process.[24]

Worsening this was the party's non-reforming tendency, which meant that the Soviet Union protected its misconceived collective and state farming network and made up for its agricultural ineptness by "importing up to 20 per cent of the grain it needed."[25] This patching-up of ontological agricultural problems rendered the agricultural sector unpredictable and inconsistent as the decades passed. This can be seen in the post-war agricultural growth rates, which fluctuated continuously from 13.8 per cent in 1955 to -1.5 per cent in 1959 and finally -12.8 per cent in 1963.[26] Such a "notoriously unpredictable agricultural sector [which] consistently failed to meet planned targets"[27] would remain an unresolved problem until the fall of the regime.

As for the industrial sector, the situation was difficult: with the absence of a supply-and-demand mechanism, the central authorities were unable to satisfy the material demands of society properly. Moreover, because of centralisation, most factories were the sole manufacturers of certain products in the whole of the USSR, meaning that an enormous amount of time and money was wasted in transport-logistics costs. Without a supply-and-demand mechanism, the whole economy had to be planned by central authorities, which proved to be excruciatingly difficult.

Prices of inputs and outputs, the sources of supply, and markets for sale were strictly stipulated by the central ministries. […] [and] detailed regulation of factory level activities by remote ministries […] led to a dangerously narrow view of priorities at factory level.[28]

Consequently, central ministries frequently misallocated resources, and factories took advantage of this by hoarding larger quantities of raw materials than they needed. Although the ontological failure of centralisation did not have effects as immediate as certain short-term conjunctural or decisional factors, its contribution to the fall can be seen in how, combined with the economic shortcomings highlighted hereafter, it gradually deteriorated the country's economy.

In addition to the failure of centralisation was the failure of agricultural collectivisation, which would have an even greater negative effect on the Soviet Union. The effects of collectivisation were multi-layered: it was a politically motivated campaign that would socially harm society and destroy the economy. Agriculturally, Stalin hindered the Soviet farming complex from its very beginnings by forcing collectivisation on farmers and publicly antagonising those who resisted as anti-revolutionary kulaks. After the winter of 1929, Stalin defined the meaning of kulak as anyone refusing to enter collectives. Kulaks were subsequently persecuted and sent to Siberian gulags; "the attack on the kulaks was an essential element in coercing the peasants to give up their farms."[29] These repeated attacks stemmed from a Bolshevik perception that peasants "were regarded with suspicion as prone to petty-bourgeois individualist leanings."[30] Through these traumatic acts of violence, the peasantry was entirely driven into collectivisation by 1937; however, this only bolstered peasant hatred of the government and can be seen as the basis for the agricultural problem of rural depopulation that gradually encroached upon the countryside. By the 1980s,

The legacy of collectivization was everywhere in the Soviet Union. In the Vologda region alone, there were more than seven thousand ruined villages […] For decades, the young had been abandoning the wasted villages in droves.[31]

This agricultural depopulation can be seen in how the number of collective farms gradually shrank from 235,500 in 1940 to merely 25,900 in 1981,[32] causing severe labour scarcity in the agricultural sector.

Industrially, collectivisation was not widespread, although in the few cases where it appeared, it brought about much suffering in pursuit of positive results. The mining city of Magnitogorsk is a prime example, where Stalinist planners

built an autonomous company town […] that pushed away every cultural, economic, and political development in the civilized world [and where] 90 per cent of the children […] suffered from pollution-related illnesses.[33]

While the West followed the spectacular expansion of Soviet industry from 1920 to 1975, this expansion came at the cost of immense social sacrifice in the industrial and agricultural sectors, which were entirely geared towards aiding the industrial complex. In addition, much of Soviet industrial growth after Khrushchev's rule was fuelled by oil profits emanating from Siberia, peaking from 1973 to 1985 when "energy exports accounted for 80% of the USSR's expanding hard currency earnings."[34]

Overall, ontological non-reform, inter-linked with the failure of collectivisation and a deficient command structure, would gradually weaken the economy to the brink of collapse in the 1980s. This was made clear in the 1983 Novosibirsk Report, which

argued that the system of management created for the old-style command economy of fifty years ago remained in operation in very different circumstances. It now held back the further development of the country's economy.[35]

Nevertheless, the ontological problems behind the fall did not restrict themselves to the economic, political or social realms; they also existed regarding the nationalities question.

A Defective Union

When looking at the fifteen different republics that comprised the USSR, one may ask how it was possible to unite such diverse nationalities without the emergence of complications. The truth is that many problems arose from this union, even though the CPSU maintained until the very end the conviction that all republics and peoples acquiesced in it. Gorbachev's statement in 1987 that

the nationalities issue has been resolved for our country […] reflected the party's most suicidal illusion, that it had truly created […] a multinational state in which dozens of nationalisms had been dissolved.[36]

Today certain scholars see the independence movements of the early 1990s as a result of the ontological malformation of the Soviet Union's identity. The most common argument expounds that the independence movements fuelling dissolution occurred for two ontological reasons. The first can be seen as a consequence of Stalin's rule and his policy of divide and rule, where "the borders between ethno-federal units were often demarcated precisely to cause maximum aggravation between peoples."[37] This contributed to the Soviet Union's inability to construct a worthwhile federal polity and an actual Soviet nation-state. In addition to this was the ontological exploitation of the central Soviet republics and the prioritisation of the Russian state. This created long-term republican discontent that laid the foundations of independence movements: "Everything that went wrong with the Soviet system over the decades was magnified in Central Asia";[38] Moscow had turned all of Central Asia into "a vast cotton plantation […] destroying the Aral Sea and nearly every other area of the economy."[39]

Overall, it is possible to argue that the collapse occurred due to inherent flaws in the foundations of the Soviet Union. The ontological factors behind the collapse were an admixture of socio-political and economic weaknesses that gradually wore at the foundations of the USSR. The first area analysed was the demise of the Marxist ideology that upheld the legitimacy of the Soviet Union. I then scrutinised the non-reforming tendency that settled in Soviet society very early on. This eventually brought me to inspect the ontological flaws in the Soviet economy, which had close links with the previous section. Finally, I examined inherent flaws in the USSR's union and how these also played a role in the demise. While the ontological factors represent a substantial part of the explanation of the downfall, decisional and conjunctural factors must also be examined to fully grasp the collapse.

Chapter Two: Was the Collapse of the USSR Decisional in Nature?

Whilst long-term flaws in the foundations of the Soviet Union played a major role in its demise, it is important to acknowledge that most of Gorbachev's reforms also had drastic effects on the survival of the union. From here on, I will explore how the decisional approach explains vital short-term factors behind the collapse and cannot be forgone when pondering this dissertation's thesis question. To begin with, I will analyse the failure of Gorbachev's two major economic initiatives, known as Uskoreniye (acceleration of economic reforms) and Perestroika. This will inevitably lead me to the scrutiny of his socio-political reforms under Glasnost and how imprudent decisions in this sector led to widespread unrest in the USSR. Finally I will look into how Gorbachev's decisional errors led most republics to opt out of the Soviet Union. Before I start, however, it is important to understand that although I will be separating the economic reforms (Uskoreniye and Perestroika) from the socio-political ones (Glasnost), these were very much intertwined, as Gorbachev saw them as mutually complementary.

A Botched Uskoreniye and an Ineffective Perestroika

By the time Gorbachev rose to power in March 1985, the ontological economic problems had ballooned to disproportionate levels. His initial approach to change differed from his predecessor's: he took advice from field experts and immediately set economic Uskoreniye (acceleration) into motion. At this point, economic reform was indispensable, as the collective agricultural sector lay in ruins with a lethargic 1.1 per cent output growth between 1981 and 1985, whilst industrial output growth fell from 8.5 per cent in 1966 to 3.7 per cent in 1985.[40] Although Gorbachev could not permit himself mistakes, it is with Uskoreniye that he committed his first decisional errors regarding the economy, which cost him much of his credibility. Under Abel Aganbegyan's advice, Gorbachev diverted Soviet funds to retool and refurbish the machinery industry, which it was believed would accelerate scientific and technological progress. He supplemented this effort by reinforcing the centralisation of the Soviet economy through the creation of super-ministries, so that "planners could eliminate intermediate bureaucracies and concentrate on overall strategic planning."[41] Whereas these reforms did have some positive impacts, they were not far-reaching enough to bring profound positive change to Soviet industrial production. Moreover, in the agricultural sector, Gorbachev initiated a crackdown on owners of private property in 1986, which led farmers to fear the government and would disturb the success of future agricultural reforms. His error with Uskoreniye lay in the fact that he had aroused the population with his call for a complete overhaul of Soviet society, but in the economic realm at least, "complete overhaul turned out for the most part to be not much more than a minor lubrication job."[42] Realising his mistake, Gorbachev came to believe that it was the economic system itself he had to change, and set out to do just that with his move towards Perestroika (restructuring).

Gorbachev had at first tried simply to use the old machinery of government to reform. […] the main reason why this failed was that the old machinery […] were a very large part of the problem.[43]

Although the term Perestroika existed prior to Gorbachev's tenure in office, it was he who remoulded it into a reform process that would attempt to restructure the archaic economic system totally. "Unlike the first batch of economic reforms […] the second set seemed to reflect a turning away from the Stalinist economic system,"[44] a move that startled the agricultural sector, which had been subjected to repression the year before. In 1987, Gorbachev legalised individual farming and the leasing of state land to farmers in an effort to enhance agricultural production. However, this reform was flawed due to the half-hearted nature of the endeavour, wherein farmers were allowed to buy land but it would remain state-owned. Due to Gorbachev's reluctance to fully privatise land, many prospective free farmers "could see little point in developing farms that the state could snatch back at any time."[45] Adding to this social setback was a purely economic problem, since

without a large number of participants the private […] movements could never attain credibility. A large number of new sellers would produce a competitive environment that could hold prices down.[46]

Thus, due to Gorbachev's contradictory swift shifts from agricultural repression to reluctant land leasing, his second agrarian reform failed.

Industrially, Gorbachev went even further in his decisional miscalculations: without reverting his earlier move towards the ultra-centralisation of the super-ministries, he embarked on a paradoxical semi-privatisation of markets. Gorbachev's 1987 Enterprise Law illustrates this, as he attempted "to transfer decision-making power from the centre to the enterprises themselves"[47] through the election of factory managers by workers, who would then decide what to produce and work autonomously. The 1988 Law on Cooperatives, which "legalized a wide range of small businesses,"[48] supplemented this move towards de-centralisation. Combined, it was anticipated that these reforms

would have introduced more motivation and market responsiveness […] in practice, it did nothing of the sort […] workers not surprisingly elected managers who offered an easy life and large bonuses.[49]

Moreover, the Enterprise Law "contributed to the magnitude of the macro and monetary problems […] [as] managers invariably opted to increase the share of expensive goods they produced,"[50] which led to shortages of cheaper goods. While the law had the reverse of its intended effect on workers, the blame lies with Gorbachev, as no effort was put into the creation of a viable market infrastructure.

Without private banks from which to acquire investment capital, without a free market, […] without profit motive and the threat of closure or sacking, managers rarely had the incentive […] to change their ways.[51]

By going only halfway in his efforts to create a market-oriented economy, Gorbachev destroyed his possibilities of success. "The existing command-administrative economic system was weakened enough to be even less efficient, but not enough that market economics could begin to operate";[52] in effect, he had placed the economy in a nonsensical twilight zone. Consequently, the economy was plunged into a supply-side depression by 1991, since "the availability of private and cooperative shops, which could charge higher prices, served to suck goods out of the state shops, which in turn caused labor unrest"[53] and steady inflation. Here, Gorbachev began to feel the negative effects of his reforms, as mass disillusionment with his capability to lead the economy towards a superior model, coupled with his emphasis on the abolition of repression and greater social freedom (Glasnost), tipped the USSR into a state of profound crisis.

The Success of Glasnost

Having understood Gorbachev's economic decisional errors with Perestroika, I will now set out to demonstrate how his simultaneous introduction of Glasnost in the social sector proved to be a fatal blow for the Soviet Union. Originally, Gorbachev set out to promote democratisation in 1987 as a complementary reform that would aid his economic ones; he saw Glasnost as a way to create "a nation of whistle-blowers who would work with him"[54] against corruption. To the surprise of the Soviet population, Gorbachev even encouraged socio-economic debates and allowed the formation of Neformaly, which were "leisure organizations [and] up to a quarter were either lobby groups or were involved in issues […] which gave them an implicitly political function."[55] Gorbachev initiated this move at a time when the USSR was still searching for the correct reform process; the Neformaly movement was thus a way for him to strengthen the reform process, without weakening the party, by involving the public. But as Perestroika led to continuous setbacks, Gorbachev began to opt for more drastic measures with Glasnost, upholding his belief that the key lay in further democratisation. In November 1987, on the 70th anniversary of the October revolution, Gorbachev gave a speech addressing Stalin's crimes, which was followed by the resurgence of freedom of speech and the gradual withdrawal of repression. "Intellectually, politically and morally the speech would play a critical role in undermining the Stalinist system of coercion and empire."[56] At Gorbachev's behest, censorship was decreased and citizens could finally obtain truthful accounts of Soviet history and the outside world. However, this reform proved detrimental, as Soviet citizens were dismayed to find that their country actually "lagged far behind the civilized countries. They were also taken aback by the flood of revelations about Soviet history."[57] While this did not trigger outbursts of unrest amongst the population, it did have the cumulative impact of delegitimising the Soviet regime in the eyes of many Russians.[58] After his speech, Gorbachev continued his frenetic march towards democratisation with the astounding creation of a Congress of People's Deputies in 1989. Yet again, Gorbachev found that the reform process necessitated CPSU support; however, the conservatives at the heart of the party were continuously working at cross-purposes to his reform efforts. Hence, by giving the people the power to elect deputies who would draft legislation, Gorbachev believed that "he would be strengthening the government, [and] by creating this new Congress, he could gradually diminish the role of the Party regulars [conservatives]."[59]

Instead of strengthening the government, Gorbachev's Glasnost pushed the USSR further along the path of social turmoil. In hindsight, it is possible to see that

the democracy Gorbachev had in mind was narrow in scope. […] Criticism […] would be disciplined […] and would serve to help, not hurt the reform process. […] His problems began when […] disappointment with his reforms led […] critics to disregard his notion of discipline.[60]

As soon as economic Perestroika failed to deliver on its promises, the proletariat began to speak out en masse, and instead of constructive openness, Gorbachev had created a Glasnost of criticism and disillusion. This was seen following the 1989 Congress, as social upheavals erupted when miners "saw the politicians complain openly about grievances never aired before"[61] and decided to do the same. In 1989, "almost half the country's coal miners struck,"[62] followed by further episodes in 1991 when "over 300,000 miners had gone out on strike."[63] Very quickly, Gorbachev also came to sorely regret his Neformaly initiative, as workers, peasants, managers and even the military organised themselves into lobby groups, some of them asking the Kremlin to press forth with reforms and others asking it to revert the whole reform process. Gorbachev's decisional error lay in his simultaneous initiation of Perestroika and Glasnost; as the latter met quick success whilst the economy remained in free-fall, society was plunged into a state of profound crisis.

Party Politics

Alongside his catastrophic reform of society and the economy, Gorbachev launched a restructuring of the CPSU, which he deemed essential to complement his economic reforms. In 1985, Gorbachev purged (discharged) elements of the CPSU nomenklatura, a term designating the key administrative government and party leaders.

Within a year, more than 20 to 30% of the ranks of the Central Committee […] had been purged. Gorbachev expected that these purges would rouse the remaining members of the nomenklatura to support perestroika.[64]

This attack on the party served as an ultimatum to higher government and party officials who were less inclined to follow Gorbachev's path of reform. Nevertheless, as economic and social turmoil ensued, Gorbachev went too far in his denunciation of the party, angering party members and deepening disillusionment within the proletariat. Examples of this are rife: behind the closed doors of the January 1987 Plenum of the Central Committee, "Gorbachev […] accused the Party of resisting reform."[65] In 1988, Gorbachev also fashioned himself a scapegoat for economic failures: the Ligachev-led conservatives were strangling the reforms.[66] Up until 1988, this attack on the party nomenklatura did not have far-reaching repercussions, but as Gorbachev nurtured and strengthened the reformist faction of the CPSU, infighting between the conservatives and reformists began having two negative effects. The first was widespread public loss of support for the party, visible in the drop in Communist Party membership applications and the rise in resignations: "By 1988 the rate of membership growth had fallen to a minuscule 0.1 per cent, and then in 1989 membership actually fell, for the first time since 1954."[67] The other negative repercussion lay in how party infighting left the CPSU unable to draft sensible legislation. This was due to Gorbachev continuously altering the faction he supported in order to prevent either one from seizing power. This pattern can be spotted in his legislative actions regarding the economy and the social sector, which mirrored his incessant political shifts from the reformist faction to the conservative one. In 1990, Gorbachev opted for more de-centralisation and even greater autonomy for the Soviet republics by creating the Presidential Council, where the heads of each republic were able to have a say in his decisions.

However, he reversed course in 1991 with the creation of the Security Council, where heads of republics now had to report to him directly, thus reasserting party control. Concerning the economy, Gorbachev acted similarly: as explained earlier, his first batch of reforms in 1986 stressed the need for centralisation with super-ministries, but he changed his mind the year after with his Cooperatives and Enterprise Laws and agricultural reforms. Gorbachev constantly

switched course […] [his] indecisiveness on the economy and the Soviet political system has generated more confusion than meaningful action. […] After a time, no one seemed to be complying with orders from the centre.[68]

In effect, it is possible to see here an overlapping of approaches, since the way party infighting affected Gorbachev's reforms can be read either as a contingent factor that obstructed reform or as a decisional error on Gorbachev's part for having reformed the party in such a manner.

Overall, this incoherence in his reform process can be seen as the result of his own decisional mistakes. Having succeeded in his Glasnost of society and the party, Gorbachev had allowed high expectations to flourish regarding his economic reforms, expectations that were gradually disappointed. Amidst this social turmoil, economic downturn, party infighting and widespread disillusionment, Soviet republics began to move towards independence as the central command of the Kremlin progressively lost control and became ever more incoherent in its reforms.

The Death of the Union

As the Soviet Union descended into a state of socio-economic chaos, individual republics began to voice their desire to leave the union. This can be seen as having been triggered by the combination of three decisional errors on Gorbachev's part. The first was his miscalculation of the outcome of Glasnost, as by 1990

all 15 republics began to issue calls for either economic sovereignty or political independence. […] Gorbachev's efforts to induce local groups to take initiative on their own were being implemented, but not always in the way he had anticipated.[69]

Originally, initiative had never been thought of as something that could lead to independence movements; instead, Gorbachev had introduced this drive to stimulate workers and managers to find solutions tailored to the problems felt in their factory or region. Adding to this mistake were Gorbachev's failed economic reforms under Perestroika, and as the Union's economic state degenerated, individual republics began to feel that independence was the key to their salvation. Gorbachev's

What Explains the Collapse of the USSR?


The Shadow Brokers’ NSA hack is extremely weird – Business …

Posted: at 9:21 am


Earlier this week, a group calling itself the “Shadow Brokers” announced that it was selling a number of cyber weapons auction-style that it claimed were hacked and stolen from an alleged NSA hacking group dubbed “The Equation Group.”

Besides the fact that the National Security Agency getting hacked is eyebrow-raising in itself, the leak of the data and this mystery group's claim that it's just trying to make money don't seem to add up.

Here’s why.

According to ex-NSA insiders who spoke with Business Insider, the agency’s hackers don’t just put their exploits and toolkits online where they can potentially be pilfered. The more likely scenario for where the data came from, says ex-NSA research scientist Dave Aitel, is an insider who downloaded it onto a USB stick.

Instead of a “hack,” Aitel believes, it’s much more likely that this was a more classic spy operation that involved human intelligence.

“This idea that a group of unknown hackers are going to take on the NSA seems unlikely as well,” Aitel told Business Insider. “There’s a long arm and a long memory to the US intelligence community, and I don’t think anyone wants to be on the other end of that without good reason. I don’t necessarily think a million bitcoin is a good-enough reason.”


One of the many strange things about this incident is the very public nature of what transpired. When a hacker takes over your computer, they don’t start activating your webcam or running weird programs because you’d figure out pretty quickly that something was up and you’d try to get rid of them.

The same is true for the NSA.

If the Shadow Brokers owned the NSA’s command and control server, then it would probably be a much better approach to just sit back, watch, and try to pivot to other interesting things that they might be able to find.

Instead, the group wrote on Pastebin, a website where you can store text, that “we follow Equation Group traffic. We find Equation Group source range. We hack Equation Group. We find many many Equation Group cyber weapons,” which immediately signals to this alleged NSA hacker group that they have a big problem.

Though this seems problematic, it's probable that the group no longer has access to the server and so no longer cares about getting back on it; since the files are years old, this could well be the case. But it's still out of the ordinary, since any claim like this can later be investigated by the victim, which will be going through everything trying to figure out who they are.

If this were some random hacking group, it would've been better to keep their mouths shut, especially when their victim is the NSA.

Software exploits are digital gold for hackers, since they often provide a key into a system or network that no one has ever noticed before, and thus hasn't fixed, which is why the marketplace for these "zero-day" exploits is so lucrative. We're talking hundreds of thousands to millions of dollars for this kind of code.

Most of the time, an exploit is either found by a security research firm, which then writes about it and reports it to the company so it can fix the problem, or it is found by a hacker looking for cash, who will sell it on the black market.

So it would make sense for a group like Shadow Brokers to want to sell their treasure trove, but going public with it is beyond strange.

"From my perspective, it's extremely bizarre behavior," an ex-NSA hacker who spoke on condition of anonymity told Business Insider. "Most groups who either identify or trade in exploits do one of two things. If you identify, like a security research firm [does] … they'll typically publish their findings. They're really in the best interest of the companies and users who use these products."

The source added: “In the other scenarios, folks who sort of deal in the exploit markets. They quietly sell these things. To come out with this public auction is the more bizarre variance of that that I’ve ever seen. So it’s not clear what the intent here is.”


If you ask ex-NSA contractor Edward Snowden, the public leak and claims of the Shadow Brokers seem to have Russian fingerprints all over them, and it serves as a warning from Moscow to Washington. The message: If your policymakers keep blaming us for the DNC hack, then we can use this hack to implicate you in much more.

“That could have significant foreign policy consequences,” Snowden wrote on Twitter. “Particularly if any of those operations targeted US allies. Particularly if any of those operations targeted elections.”

Aitel seems to agree, though he criticized Snowden as being, at some level, a “voice piece” for Russian intelligence now, since he lives in asylum in Moscow.

“He has the same theory the DNC hack happened. The US political people got upset. They probably made the NSA do a covert response,” Aitel speculated. “This is another response to the NSA’s covert response. There’s a lot of sort of very public messages here going back and forth, which is interesting to look at.”

Aitel also doesn’t think that anyone is going to actually pony up the money required to win the auction. And that prediction is probably going to be right, since WikiLeaks claims that it already has the archive.

“We had already obtained the archive of NSA cyber weapons released earlier today,” its official Twitter account wrote, “and will release our own pristine copy in due course.”

The Shadow Brokers did not respond to an emailed request for comment.



Germ Warfare Against America: Part I What Is Gulf War …

Posted: August 21, 2016 at 11:18 am

by Donald S. McAlvaney, Editor, McAlvaney Intelligence Advisor (MIA), August 1996

GWI is a communicable, moderately contagious and potentially lethal disease, resulting from a laboratory-modified germ warfare agent called Mycoplasma fermentans (incognitus). [ED. NOTE: There were actually up to 15 such agents used in Desert Storm by Iraq; only three have been identified at this writing: Mycoplasma fermentans (incognitus), Mycoplasma genitalium, and Brucella species.] Mycoplasma fermentans (incognitus) is a biological which contains most of the HIV envelope gene, which was most likely inserted into it in germ warfare laboratories.

GWI spreads far more easily than AIDS, by sex, by casual contact, through perspiration, or by being close to someone who coughs. Your children can be infected at a playground or school. The Nicolsons, who have isolated the micro-organisms, say that it is airborne and moderately contagious.

Joyce Riley had an American Legion chapter leader call her in mid-'95 who said, "I was visiting the Desert Stormers at the VA Hospital and after two weeks I had the same illness they did just from visiting them at the VA." It sounds almost like a tuberculosis-type contagion.

To illustrate the moderately contagious nature of the biologicals Saddam used, Dr. Garth Nicolson cited the case of a young woman who served in a transportation squad who contracted GWI while assigned to a graves registration unit during the hostilities. She is currently the sole survivor of the 16 members of her unit.

"She has severe GWI, is partially paralyzed, has multiple chemical sensitivities (which complicate treatment) and has the mycoplasmic infection. All of the other 15 members of her unit are dead from what we suspect were infectious diseases. These (graves registration) units had to deal with the registration and disposal of thousands of dead Iraqi soldiers who were, we strongly suspect, exposed to GWI."

GWI is the direct health consequence of prolonged exposure to low (non-lethal at the time of exposure) levels of chemical and biological agents released primarily by direct Iraqi attack via missiles, rockets, artillery, or aircraft munitions, and by fallout from allied bombings of Iraqi chemical warfare munitions facilities during the 38-day war.

The effects of these exposures were exacerbated by the harmful and synergistic side effects of unproven (untested) pyridostigmine bromide (PB) pills (nerve agent pre-treatment pills) forcibly administered to our troops; botulinum toxoid vaccines (also untested and experimental) forcibly administered to our troops; anthrax vaccines and several other experimental vaccines, all forcibly administered to our troops like so many laboratory guinea pigs.

Estimates of the number of vets who are sick are just that: estimates. Estimates of 50,000 to 90,000 sick vets are now obsolete. Over 160,000 Gulf War vets have reported to the Gulf War Registry (kept by the Department of Defense, which still maintains that the disease does not exist). Dr. Garth Nicolson estimates the number of veterans sick with GWI to be closer to 100,000 to 200,000, with approximately 15,000 dead. This does not include wives, children or other family members, friends or associates (secondary infectees) who are sick, disabled, dying or dead.

By August 15, 1991, 17,000 out of 100,000 reservists and National Guardsmen who served in the Gulf conflict had reported to the VA that they were ill. Four years later (in August '96) that number is likely to have tripled to 51,000, or over half of the total. Joyce Riley estimates that half of all Desert Stormers may now be positive for Mycoplasma fermentans (incognitus). Riley (and the Nicolsons) also estimate that a large percentage of all GWI victims may ultimately die from the disease, or from suicide.

On 7/31/96, Tony Flint, spokesperson for the British Gulf War Veterans Association, reported that the number of GW veterans' deaths in the U.K. is 1,233 out of 51,000 Brits who participated. Of these deaths, 13%, or 162, were from suicide. These are huge numbers of suicide victims who took their lives due to their lack of treatment and incredible pain levels.

Whole families are now ill. Nor do the above numbers include babies who are being born dead or severely deformed, like the thalidomide babies of the '50s. Some of the baby deformities involve Goldenhar syndrome, wherein babies are born with one or more limbs missing, a missing eye or another deformity. It is now estimated that a large percentage of babies born to infected veterans are being born deformed or with birth problems.

The study done for former U.S. Senator Don Riegle (D-MI) concluded that 78% of wives of veterans who are sick are also likely to be sick, that 25% of their children born before the war are also likely to be sick, and that 65% of children born to sick Gulf War veterans after the war also are likely to be sick.

The Nicolsons, after listening to the health complaints of many veterans of Desert Storm (including their step-daughter, then Staff Sergeant Sharron McMillan, who served with the Army's 101st Airborne Division (Air Assault) in the deep insertions into Iraq), concluded that the symptoms can be explained by aggressive, pathogenic mycoplasma and other microorganism infections.

Mycoplasmas are similar to bacteria. They are a group of small microorganisms, in between the size and complexity of cells and viruses, some of which can invade and burrow very deep into the cell and cause chronic infections. According to the Nicolsons, normal mycoplasma infections produce relatively benign diseases limited to particular tissue sites or organs, such as urinary tract or respiratory infections.

However, the types of mycoplasmas which the Nicolsons have detected in Desert Storm veterans are very pathogenic, colonize in a variety of organs and tissues, and are very difficult to treat. [ED. NOTE: The Nicolsons tested thousands of veterans' blood samples (free of charge) while at the M.D. Anderson Center].

These mycoplasmas can be detected by a technique the Nicolsons developed called Gene Tracking, whereby the blood is separated into red and white blood cell fractions, and then further fractionated into nucleoproteins that bind to DNA, the genetic material in each cell. Finally, the purified nucleoproteins are probed to determine the presence of specific mycoplasma gene sequences. [ED. NOTE: Obviously this is no ordinary blood test; it can be understood or performed by only a small handful of pathologists or microbiologists in the world today].

As the Nicolsons wrote in a recent paper entitled "Chronic Fatigue Illness and Desert Storm: Were Biological Weapons Used Against Our Forces in the Gulf War?": In our preliminary study on a small number of Gulf War veterans and their families, we have found evidence of mycoplasmic infections in about one-half of the patients whose blood we have examined.

Not every Gulf War veteran had the same type of mycoplasma DNA sequences that came from mycoplasmas bound to or inside their white blood cells. Of particular importance, however, was our detection of highly unusual retroviral DNA sequences in the same samples by the same technique. These highly unusual DNA sequences included a portion of the HIV-1 (the AIDS-causing virus) genetic code, the HIV-1 envelope gene, but not the entire HIV-1 viral genomes.

The type of mycoplasma we identified was highly unusual, and it almost certainly could not occur naturally. It has one gene from the HIV-1 virus, but only one gene. This meant it was almost certainly an artificially modified microbe, altered purposely by scientists to make it more pathogenic and more difficult to detect.

Thus these soldiers were not infected with the HIV-1 virus, because the virus cannot replicate with only one HIV-1 envelope gene that we detected. [ED. NOTE: But, infected soldiers do exhibit many of the symptoms of AIDS while testing HIV negative. Garth Nicolson says that Mycoplasma fermentans (incognitus) contains about 40% of the HIV virus which causes AIDS. He told this writer on 8/9/96 that some soldiers do test HIV-1 positive, but do not have the HIV virus only the envelope gene product].

Interestingly, the specific DNA sequence that we detected encodes a protein that, when expressed on the surface of the mycoplasma, would enable any mycoplasma to bind to many cell types in the body, and even enter those cells.

Thus this genetic manipulation could render a relatively benign mycoplasma much more invasive and pathogenic and capable of attacking many organ and tissue systems of the body.

Such findings suggest that the mycoplasmas that we have found in Gulf War veterans are not naturally occurring organisms, or to be more specific, they were probably genetically modified or engineered to be more invasive and pathogenic, or quite simply, more potent biological weapons.

In our rather small sample of Gulf War veterans, it seems that the soldiers that were involved in the deep insertions into Iraq and those that were near Saudi SCUD impact zones may be the ones at highest risk for contracting the mycoplasmas that we feel are a major culprit in the Desert Storm-associated chronic fatigue illness. Our preliminary research indicates that the types of mycoplasmas found in some of the Desert Storm veterans with the most severe chronic symptoms may have been altered, probably by genetic manipulation, suggesting strongly that biological weapons were used in Desert Storm.

We consider it quite likely that many of the Desert Storm veterans suffering from the symptoms (described below) may have been infected with microorganisms: quite possibly aggressive pathogenic mycoplasmas, and probably other pathogens such as pathogenic bacteria as well; this type of multiple infection can produce the chronic symptoms even long after exposure. [ED. NOTE: Three to seven years later; Joyce Riley calls it a time-release form of illness].

[ED. NOTE: Joyce Riley and the Nicolsons believe that the microbe just described is only one of 10 to 15 different microbes or different types of germ warfare that could have been utilized].

Mycotoxins are toxins that are associated with fungi. Fungi and mycotoxins have long been a very secret carrier of germ warfare agents. Mycotoxins are very difficult to destroy with temperature, weather, or anything else.

Mycoplasmas have for many years been studied as potential germ warfare agents. Add recombinant DNA, such as the HIV envelope gene, to the mycoplasma and you've got a very virulent form of disease that is going to be passed easily throughout the population.

Mycoplasma fermentans (incognitus) (and the other 10 to 15 microbes the Nicolsons believe could have been used by Saddam) are easily manufactured and have been made for the past 15 years in America, Russia, Iraq, China, Israel and even in Libya's new biological (germ) warfare facilities.

One of the more ominous aspects of GWI is that the microorganism is communicable between humans and dogs and cats (and presumably other animals). Veterans' pets are coming down with the GWI symptoms and dying. Remember, one of the Nicolsons' cats contracted it and died. So, the disease is contagious between species. As Joyce Riley has said, "The fact that the disease is being transmitted from people to animals is almost unprecedented. To find an organism that can be transmitted to animals is truly frightening."

In England, a viral researcher friend says that he has treated a number of people with the human form of Mad Cow Disease, which he says has many characteristics in common with GWI. Remember, most of the cattle herd of England had to be destroyed because of Mad Cow Disease. The British researcher says he is presently seeing (and treating) dozens of new, never-before-seen viruses in the U.K.

There is a large list of signs and symptoms which can begin from six months to six or seven years from the time of exposure, and once they begin, can get progressively worse until the victim is partially or totally disabled, or dies. [ED. NOTE: With severe exposure to heavy doses of biologicals, the symptoms can show up in a few days]. These symptoms include (not listed in order of severity or frequency): (1) Chronic fatigue; (2) Frequent (or constant) throwing up and diarrhea; (3) Severe weight loss (wasting away), very similar to an AIDS patient; (4) Severe joint pains; (5) Headaches that don't go away; (6) Memory loss, concentration loss (the brain begins to go); (7) Inability to sleep [ED. NOTE: Severe sleep disorders are one of the worst and most frequent symptoms. Victims often sleep in the day, wake at night, or don't sleep for days or weeks]; (8) A rash on the stomach, groin, back, face, or arms that often looks like a giant ringworm (whole families often get the rash); (9) Lymph nodes begin to swell; (10) Nervous system problems begin to appear (Parkinson-like symptoms, numbness and tingling around the body, which can degenerate into paralysis and death); (11) Night sweats; (12) Bizarre tumors, many of them brain stem tumors [ED. NOTE: the active duty tumor rate in the U.S. military has increased 600% since 1990, according to data obtained from the Veterans Administration. This data is available from Joyce Riley at the American Gulf War Veterans Association, 3506 Highway 6 South #117, Sugarland, TX 77478-4401 (1-713-587-5437)]; (13) Bizarre personality changes (victims become violent, have wide mood swings and severe depression, hibernate in a dark room, begin to drink heavily, use drugs, become violently angry; denial is a major facet of the disease); (14) Inability to work (many go bankrupt); (15) A large number of victims (perhaps 50%) end up committing suicide. GWI victims are walking time bombs!

Many of the symptoms are similar to AIDS because both diseases are immunosuppressive and attack the immune system. Most victims will have half to two-thirds of these symptoms (some more severe than others). Wives married to GWI victims are likely to get the disease via sex and other close contact, and their symptoms can even include cervical cancer, ovarian cysts, ovarian tumors, endometriosis, painful intercourse, chlamydia, and herpes (sexually transmitted diseases [STDs], but with no extra-marital sexual activity). About 90% of the wives of veterans who are sick with GWI are now complaining of these symptoms.

When Joyce Riley had the disease she had some of the above symptoms in addition to the following symptomology: (1) She felt as if a part of the body (a foot, a leg, a calf, an arm) was missing; (2) She felt as if a pan of hot water had been splashed on her and one side of her body burned; (3) She felt as if a foot was in ice; (4) She had bone pain and muscle pain (like a cramp or charley horse that doesn't let up for weeks); (5) She had central nervous system symptoms (knife-like pain from the upper back to the tailbone).

Bleeding and hemorrhaging are symptoms associated with GWI. In Ebola Zaire, the body bleeds out in about 48 hours. Ebola Reston (a variation of Ebola Zaire) takes about two years to cause death, with severe bleeding. A number of Gulf War vets who have called Joyce Riley have told her that they are bleeding from every orifice of their body. And their doctors don't have a clue as to what is happening; they just know they don't have long to live. [ED. NOTE: She gets dozens of calls each day].

The Ebola Reston virus is a version of the Ebola Zaire virus (which may have been laboratory-produced), but it takes about two years or more to kill a victim, beginning with the onset of the symptoms, versus 48 hours for Ebola Zaire. [ED. NOTE: Readers of this report are strongly encouraged to buy and read the book The Hot Zone and rent the movie Outbreak, both of which deal with the Ebola Zaire virus. However, in the real world, Ebola did not come from an African monkey, cave or rain forest, but probably from a biological warfare laboratory].

Leukoencephalopathy is similar to Mad Cow disease: the brain dissolves! It is now spreading among the populace of England. Paratroopers 25 to 30 years old are now dying of leukoencephalopathy. Other symptoms of GWI include: recurring fever, menstrual disorders, stomach upsets and cramps, heart pain, kidney pain, thyroid problems, and in extreme cases, autoimmune-like disorders such as those that lead to paralysis.

Many GWI victims are getting medical diagnoses of MS (Multiple Sclerosis), Guillain-Barré Syndrome, or Amyotrophic Lateral Sclerosis (Lou Gehrig's Disease); their neurological problems eventually lead to paralysis and death. Thousands of Gulf War vets are now being diagnosed as having MS when they really have GWI.

The reason for the autoimmune symptoms may be related to the cell-penetrating mycoplasmas and bacteria of GWI. When these microorganisms proliferate and leave the cell, they can take a piece of the cell's membrane with them, resulting in host immune responses against the microorganisms as well as against the normal parts of membrane associated with them. This type of response is called a concomitant immune response.

In August '95, researchers at the University of Glasgow released a report entitled "Neurological Dysfunction in Gulf War Syndrome," published in the March '96 issue of the Journal of Neurology, Neurosurgery and Psychiatry, which said: "The results between the two groups [Desert Storm vets and a non-military control group] showed significant differences in terms of nervous system function. The Gulf War veterans performed less well. They all displayed the classic symptoms of nerve damage."

Graves Disease (a disease of the thyroid) is another problem or symptom associated with Mycoplasma fermentans (incognitus) infection. If it settles in the heart, then you can get a severe enlargement and necrosis (or degeneration) of the heart, and in some autopsies of GWI victims, the coroner says their heart "exploded."

The most severely affected (sickest) units in our military are the 101st Airborne, the 82nd Airborne, and the Big Red One out of Ft. Riley, Kansas, and the 3rd and 5th Special Forces.

[ED. NOTE: 99.9% of the medical doctors in America can't recognize GWI, don't believe it even exists (because the government and medical establishment say it doesn't exist), would have no idea how to test for it and even less idea how to treat it. Most alternative medical practitioners are in the same boat, although many of them would try detoxification and immune-system therapy, which would be helpful. There are answers (if the disease is not too far advanced), both in the traditional (mainline) medical area and in the alternative medicine field, which will be discussed in Section VI below. If you or a family member reading this report are discouraged at this point, turn to Section VI on Methods of Treatment before continuing].

Life (11/95) featured a special report entitled "The Tiny Victims of Desert Storm," which described in heart-rending detail (with numerous photos) how the children of our veterans are being born with horrendous disfiguring birth defects. The article was subtitled, "When our soldiers risked their lives in the Gulf, they never imagined that their children might suffer the consequences or that their country would turn its back on them."

In the months and years following Desert Storm, thousands of babies have been born to vets with horrible deformities (missing limbs, one eye, missing ears, incomplete or missing organs), reminiscent of the thalidomide babies of the 1950s but in far greater numbers. [ED. NOTE: Thalidomide was another experimental drug (administered to pregnant mothers) which went awry].

Meanwhile, the Department of Defense is working overtime to cover up the crisis with Gulf War babies: denying it exists, denying benefits or medical assistance to veterans whose children have birth defects, and even going so far as to censor the Life article cited above off of the Internet.

Dr. William Campbell Douglass is the editor of the Second Opinion newsletter and author of the book Who Killed Africa (about how the World Health Organization's smallpox inoculations may have triggered the AIDS epidemic in Africa). Dr. Douglass, a close friend of this writer, wrote in his January 1994 newsletter regarding Gulf War Illness: "The symptoms are now having serious repercussions. Half or more of the babies born to Gulf War vets since the war have had some sort of birth defect or blood disorder."

Nation Magazine (1/95) estimates that 67% of babies being born to Gulf War vets who are ill are having serious birth problems. Over half of the babies now being born in Iraq today have deformities or major birth defects, according to reports Dr. Garth and Nancy Nicolson have received.

According to the Life Magazine article: "In 1975, a landmark Swedish study concluded that low-level exposure to nerve and mustard gases could cause both chronic illness and birth defects." The Pentagon denies the presence of such chemicals during the Gulf War [ED. NOTE: even though over 18,000 chemical alarms sounded during the Gulf War], but the Czech and British governments say their troops detected both kinds of gas during the war. A 1994 report by the General Accounting Office says that American soldiers were exposed to 21 potential reproductive toxicants, any of which might have harmed them or their future children.

A number of examples of babies born to Gulf War vets with devastating birth defects were cited in the Life Magazine article:

(1) Kennedi Clark (Age 4) Born to Darrell (an Army paratrooper in the Gulf War) and Shona Clark. Kennedi's face is grotesquely swollen, sprinkled with red, knotted lumps. She was born without a thyroid. If not for daily hormone treatments, she would die. What disfigures her features, however, is another congenital condition: hemangiomas, benign tumors made of tangled red blood vessels. Since she was a few weeks old, they have been popping up all over: on her eyelids, lips, etc.

(2) Lea Arnold (Age 4) Born to Richard and Lisa Arnold. Richard was a civilian helicopter mechanic (working for Lockheed) with the Army's 1st Cavalry Division during the Gulf War. Lea was born with spina bifida, a split in the backbone that causes paralysis and hydrocephalus (i.e., water on the brain). She needed surgery to remove three vertebrae. Today, she cannot move her legs or roll over. A shunt drains the fluid from her skull. Her upper body is so weak that she cannot push herself in a wheelchair on carpeting. To strengthen her bones, she spends hours in a contraption that holds her upright. "Just about our whole world is centered around Lea," says Lisa Arnold. Huge medical bills and the unwillingness of insurance companies to cover pre-existing conditions force the family to live in poverty in order to qualify for Medicaid.

(3) Casey Minns (Age 3) Born to Army Sgt. Brad and Marilyn Minns. Casey was born with Goldenhar Syndrome, characterized by a lopsided head and spine. His left ear is missing, and his digestive tract (i.e., esophagus) was disconnected. Trying to repair his damaged organs, surgeons at Walter Reed Army Medical Center damaged his vocal cords and colon, say Brad and Marilyn. His parents feed him and remove his wastes through holes in his belly. His mother, Marilyn, says, "Sometimes it just overwhelms me, but I try to take it one day at a time... It's made worse by people who say that Gulf War Syndrome doesn't exist; they're turning their backs on us."

(4) Michael Ayers (Died at 5 Months of Age) Born to Glenn (a battery commander in the Gulf War) and Melanie Ayers. Michael was born with a mitral-valve defect in his heart. He sweated constantly until the night he woke up screaming, his arms and legs ice-cold. He died that night of congestive heart failure. As Life Magazine wrote: "After Michael's death, Melanie sealed off his bedroom; she tried to close herself off as well. But soon she began to encounter a shocking number of other parents whose post-Gulf War children had been born with abnormalities. All of them were desperate to know what had gone wrong and whether they would ever again be able to bear healthy babies. With Kim Sullivan, an artillery captain's wife whose infant son, Matthew, had died of a rare liver cancer, Melanie founded an informal network of fellow sufferers. Kim is here. So is Connie Hanson, wife of an Army sergeant; her son, Jayce, was born with multiple deformities. Army Sgt. John Mabus has brought along his babies, Zachary and Andrew, who suffer from an incomplete fusion of the skull. The people in this room have turned to one another because they can no longer rely upon the military."

(5) Cedrick Miller (Age 4) Born to Steve (a former Army medic in the Gulf War) and Bianca Miller. Cedrick was born with his trachea and esophagus fused; despite surgery, his inability to hold down solid food has kept his weight to 20 pounds. His internal problems include hydrocephalus and a heart in the wrong place. Cedrick suffers, like Casey Minns, from Goldenhar Syndrome. The left half of his face is shrunken, with a missing ear and a blind eye.

(6) Jayce Hanson (Age 4) Born to Paul (a Gulf War vet) and Connie Hanson. Jayce was born with hands and feet attached to twisted stumps. He also had a hole in his heart, a hemophilia-like blood condition, and underdeveloped ear canals. A cherubic, rambunctious blond, he's the unofficial poster boy of the Gulf War babies, seen by millions in People Magazine. But since his last major public appearance, he has undergone a change: his lower legs are missing. Doctors recently amputated his legs at the knees to make it easier to fit him with prosthetics. "He'll say once in a while, 'My feet are gone,'" says his mother Connie, "but he has been a real trooper."

(7) Alexander Albuck (Age 3) Born to Lieutenant and Kelli Albuck after two miscarriages. Alexander was born with underdeveloped lungs, a Strep B infection, spinal meningitis, cranial hemorrhage, a collapsed heart valve, calcium deposits in the kidneys, bleeding ulcers, cerebral palsy, vision and hearing impairments, bronchopulmonary dysplasia, etc. Having exhausted the lifetime limit on their health insurance in the first three months, the Albucks became responsible for paying for his treatment. The first bill they received was for $154,319!

There are thousands of young children like Kennedi, Lea, Casey, Michael, Cedrick, Jayce, and Alexander (the tiny victims of Desert Storm) who have been born to Gulf War vets with horrible birth defects or who have died from these deformities. The government (especially the Defense Department) denies that the problem exists, and no government medical or financial assistance is forthcoming unless a parent is still in the military (and over 2/3 of the Gulf War vets have been separated from duty since Operation Desert Storm).

As Life wrote: "For parents of these children, the going is grim. They are denied insurance coverage for pre-existing conditions. They are being driven into poverty. Some join the welfare line so Medicaid will help with the impossible burden." "You could be a millionaire, and there is no way you could take care of one of these children," says Lisa Arnold.

Because the U.S. government and military will not help, a Gulf War Baby Registry has been formed (in Orlando, Florida) by Dr. Betty Bekdeci to track, as best as possible, children born with birth defects. Call 1-800-313-2232 for more information.

Read more here:

Germ Warfare Against America: Part I What Is Gulf War …


Trump foes miss the mark on Clinton’s Second Amendment …

Posted: August 19, 2016 at 4:08 am

Donald Trump keeps saying that Hillary Clinton wants to essentially abolish the Second Amendment. But the media fact checkers are having none of it. Last week, CNN called his accusation persistent and false. At the same time, a Washington Post editorial also called the claim absurd.

In his analysis for CNN, Eric Bradner acknowledges Clinton's support for many different types of gun control: a 25 percent tax on handguns, an assault weapons ban, repeal of laws allowing permitted concealed handguns, and background checks on the private transfer of guns. Clinton has also supported increased fees and a variety of regulations that her husband imposed. Thanks to Bill Clinton's regulations, the number of licensed firearms dealers fell from 248,155 in 1992 to 67,479 in 2000, a 73 percent reduction.

The media picks and chooses when to take Clinton at her word. CNN pointed to a recent Fox News Sunday appearance where Hillary Clinton claimed: “I’m not looking to repeal the Second Amendment. I’m not looking to take people’s guns away.” The Washington Post noted a statement from her campaign website about how gun ownership is part of the fabric of many law-abiding communities.

But in June, ABC's George Stephanopoulos pushed Clinton twice on whether people have a right to own guns: "But that's not what I asked. I said do you believe that their conclusion that an individual's right to bear arms is a constitutional right?" Clinton could only say: "If it is a constitutional right . . . ."

Similarly, in New York City in the fall, she told donors: "The Supreme Court is wrong on the Second Amendment, and I am going to make that case every chance that I get." In Maryland in April, Chelsea Clinton promised that her mom would appoint to the Supreme Court justices who would overturn past decisions that struck down gun-control measures. But the only laws that the Supreme Court evaluated were complete gun bans and a law that made it a crime to use a gun.

Washington, D.C., had a complete handgun ban in place until 2008. It was also a felony, punishable by five years in prison, to put a bullet in the chamber of a gun. This amounted to a complete ban on using guns for self-defense. The U.S. Supreme Court's ruling in District of Columbia v. Heller struck down that ban.

Clinton told Stephanopoulos her opinion of this ruling: "I think that for most of our history, there was a nuanced reading of the Second Amendment until the decision by the late Justice Scalia." She continued, "There was no argument until then that localities and states and the federal government had a right, as we do with every amendment, to impose reasonable regulation."

Clinton went on to talk about her push for expanded background checks, an issue that was irrelevant to Scalia's decision in Heller. Instead, the question is why D.C.'s local gun ban was a "reasonable regulation." Why should people be imprisoned for five years for defending their families?

In McDonald v. City of Chicago (2010), Supreme Court Justice Stephen Breyer wrote in his dissent: "I can find nothing in the Second Amendment's text, history, or underlying rationale that could warrant characterizing it as fundamental insofar as it seeks to protect the keeping and bearing of arms for private self-defense purposes." Ruth Bader Ginsburg and Sonia Sotomayor signed on to Breyer's opinion.

Breyer and Ginsburg were both appointed by President Bill Clinton. Sotomayor was Obama's first nominee to the Supreme Court. Obama's second nominee, Elena Kagan, would clearly have voted the same way had she been on the court at the time of McDonald. Indeed, Kagan served in Bill Clinton's administration and helped lead the President's gun control initiatives.

The Washington Post dismisses all this talk about the Supreme Court by saying that appointing Justices to the court would not be anything like abolishing an amendment, which no court can do. And it is true that the court can't simply remove the amendment from the Constitution. But the media appears to be deliberately obtuse. If the court reverses Heller and McDonald and changes its interpretation of the Second Amendment as Hillary promises, what will really be left of the Second Amendment?

The media might not like to admit it, but The War on Guns is real. If Hillary wins in November, she will appoint Scalias successor and the Supreme Court will overturn the Heller and McDonald decisions. Make no mistake about it, the government will again be able to ban guns. Her claim that she isn’t looking to take people’s guns away is not consistent with her promise to overturn existing Supreme Court decisions.

John R. Lott, Jr. is a columnist for FoxNews.com. He is an economist and was formerly chief economist at the United States Sentencing Commission. Lott is also a leading expert on guns, and op-eds on that issue are done in conjunction with the Crime Prevention Research Center. He is the author of nine books including "More Guns, Less Crime." His latest book is "The War on Guns: Arming Yourself Against Gun Control Lies" (August 1, 2016). Follow him on Twitter @johnrlottjr.

Read the original post:
Trump foes miss the mark on Clinton’s Second Amendment …


Where To Get Cyberpunk Clothing | Neon Dystopia

Posted: August 10, 2016 at 9:16 pm

These days it's difficult to find decent cyberpunk clothing unless you are willing to pay a shitload of money and search through the millions of clothes that have nothing to do with cyberpunk, yet still claim to. It's a problem with the current dystopian western society we've found ourselves in: no terminals to hack into with our brain stem, but plenty of clothes that are goth, steampunk, rave or industrial that have little relation to cyberpunk clothing or the cyberpunk attitude. The other option you have is making the clothes yourself, but for that you would need to be talented and, for ease, let's assume for the moment that you aren't (or, if you want to feel better about yourself, let's say you can't build a RAID server or port scan companies in Japan at the same time as sewing, pfft).

The point is this: you want to go out, and you want to change the world's perception of fashion while at the same time remaining under the radar in the crowd as you get to the club to pick up another unsavoury job from your employer.

In the early days of the public internet it was perfectly acceptable for cyberpunks to fit into the almost-cybergoth scene: wearing minimal black clothing, nails painted black, and earning money from rich goths willing to pay for a little bit of hacking done from your Windows 98 laptop. This idea isn't too far-fetched; it was stolen from reality by the creators of The Matrix. I was doing gigs like this before the film came out, while visiting the same club they used for the "down the rabbit hole" scene (Hellfire in Chippendale, Sydney), all while holding a high-paying job at a software/internet company, where I first saw the trailer for The Matrix. I admit, I saw myself more like Lenny from Strange Days: totes cooler than anyone from the Matrix films. After this time, if you wore a long black or brown leather jacket people would call out to you, "Hey, Matrix idiot", making you no longer anonymous. Thanks, Matrix, you fuckfaces.

Fashion has caught up somewhat since those fucking days in the 90s, but the idea of what cyberpunk fashion is has strayed in the public consciousness, mostly because people don't understand the cyberpunk ethos or where it comes from. What impresses me are the costumes in cyberpunk films like Total Recall (2012) and, more recently, in games like Deus Ex: Human Revolution and especially Remember Me. Nilin's costume is outrageously gorgeous.

So how do you track down the ultimate cyberpunk fashion for that specific cyberpunk style? I was getting to that.

Start with the outrageously expensive places like Plastik Wrap (http://www.plastikwrap.com/) and Google "cyberpunk clothing" to get some ideas of what you would like to wear. Then hit the markets (yes, I mean real-life markets). There are bound to be several places you never thought of for buying cyberpunk or dystopian clothes, because obviously retail is too expensive and buying low-quality, overpriced shit online seemed like the only way to get the cool shit. Well, you were wrong.

Most young people trying to get a foot in the fashion industry are making some of the coolest shit and selling it at markets to get a leg up in the industry, but what that means for you is that you can buy awesome unique pieces that can ultimately fuel your dream outfit for your dark corner of our dystopia. I have been blown away by some of the functional cyberpunk clothes I've been able to find of late in markets in Sydney. Wherever you are in the world there are bound to be similar places; you just need to find out where your local markets (usually in cities) are located.

There is also a heap of cool clothing waiting to be found in second-hand clothing stores. You just gotta look, and usually it's as cheap as a hooker in Chiba City, Japan. I'm not kidding.

Remember three things when searching for cyberpunk clothing:

If you just can't find anything outside, here are some potential online sources for decent cyberpunk clothing:

Cryoflesh http://www.cryoflesh.com

While promoting itself as "Urban Future Wear", there's clearly a lot of goth and rave wear to sift through, with some interesting accessories. Reasonably cheaper than most online stores, but it is difficult to put together a full outfit from this one site and still remain true to the cyberpunk ethos.

Cyberdog http://shop.cyberdog.net/

Cyberdog has come a long way since its inception but still focuses more on rave culture than actual cyberpunk clothing. Everything is priced in pounds, so don't forget how expensive that makes everything.

Plastik Wrap/Plastic Army http://www.plastikwrap.com/

Plastik Wrap have been around for a long time, built up their brand, and even had some costumes featured in Total Recall (2012); unfortunately, this also makes them one of the most expensive brands out there. They have some amazing pieces, but use them for reference only.

Eva Zolinar https://www.etsy.com/au/shop/ZOLNAR/

Via Etsy, Eva Zolinar has been creating some very interesting pieces that fit right into a cyberpunk underground. While some of the more detailed pieces are extremely expensive, some of the smaller pieces and accessories are quite cool and average out to the price of some of the pieces on Cryoflesh.

Futurestate http://www.futurstate.com/

With a much more industrial, sometimes borderline steampunk edge, Futurestate does have some interesting torso pieces and jackets, especially for men. Again, the prices are right up there, but it's worthwhile looking at the hoodies and jackets.

Siskatank http://www.siskatank.com/

Very expensive printed clothing.

Immoral Fashion http://www.immoralfashion.com.au/

An Australian fashion site with some amazing pieces and surprisingly low prices. Pants, tops and jackets from here are all high quality. Again, you are wading through steampunk and goth clothing, but it's all high quality.

Neurolab (non corporeal clothing) http://www.neurolab-inc.com/blog/en/category/categories/clothes-categories/

If you are a fan of Second Life, which I am not, you might want to check out Neurolab's clothing and gear. Warning: this is strictly clothing for your avatar in Second Life, not real-life clothing.

There you have it: plenty of advice and resources to get yourself going. If you can't find yourself anything to wear above, well, I guess you'll have to learn to sew.

Read more:

Where To Get Cyberpunk Clothing | Neon Dystopia


Cyberpunk – a short story by Bruce Bethke

Posted: at 9:16 pm

Cyberpunk - a short story by Bruce Bethke


In the early spring of 1980 I wrote a little story about a bunch of teenage hackers. From the very first draft this story had a name, and lo, the name was–

And you can bet any body part you’d care to name that, had I had even the slightest least inkling of a clue that I would still be answering questions about this word nearly 18 years later, I would have bloody well trademarked the damned thing!

Nonetheless, I didn’t, and as you’re probably aware, the c-word has gone on to have a fascinating career all its own. At this late date I am not trying to claim unwarranted credit or tarnish anyone else’s glory. (Frankly, I’d much rather people were paying attention to what I’m writing now –e.g., my Philip K. Dick Award-winning novel, Headcrash, Orbit Books, 5.99 in paperback.) But for those folks who are obsessed with history, here, in tightly encapsulated form, is the story behind the story.

The invention of the c-word was a conscious and deliberate act of creation on my part. I wrote the story in the early spring of 1980, and from the very first draft, it was titled “Cyberpunk.” In calling it that, I was actively trying to invent a new term that grokked the juxtaposition of punk attitudes and high technology. My reasons for doing so were purely selfish and market-driven: I wanted to give my story a snappy, one-word title that editors would remember.

Offhand, I’d say I succeeded.

How did I actually create the word? The way any new word comes into being, I guess: through synthesis. I took a handful of roots –cyber, techno, et al– mixed them up with a bunch of terms for socially misdirected youth, and tried out the various combinations until one just plain sounded right.

IMPORTANT POINT! I never claimed to have invented cyberpunk fiction! That honor belongs primarily to William Gibson, whose 1984 novel, Neuromancer, was the real defining work of “The Movement.” (At the time, Mike Swanwick argued that the movement writers should properly be termed neuromantics, since so much of what they were doing was clearly Imitation Neuromancer.)

Then again, Gibson shouldn’t get sole credit either. Pat Cadigan (“Pretty Boy Crossover”), Rudy Rucker (Software), W.T. Quick (Dreams of Flesh and Sand), Greg Bear (Blood Music), Walter Jon Williams (Hardwired), Michael Swanwick (Vacuum Flowers)…the list of early ’80s writers who made important contributions towards defining the trope defies my ability to remember their names. Nor was it an immaculate conception: John Brunner (Shockwave Rider), Anthony Burgess (A Clockwork Orange), and perhaps even Alfred Bester (The Stars My Destination) all were important antecedents of the thing that became known as cyberpunk fiction.

Me? I’ve been told that my main contribution was inventing the stereotype of the punk hacker with a mohawk. That, and I named the beast, of course.

[Note: If you want to find out more about the etymology of cyberpunk — and quite a few other things, too — take a look at Bruce’s web page. Alternatively, why not just scroll down and read the story itself?]

The snoozer went off at seven and I was out of my sleepsack, powered up, and on-line in nanos. That’s as far as I got. Soon’s I booted and got–

–on the tube I shut down fast. Damn! Rayno had been on line before me, like always, and that message meant somebody else had gotten into our Net– and that meant trouble by the busload! I couldn’t do anything more on term, so I zipped into my jumper, combed my hair, and went downstairs.

Mom and Dad were at breakfast when I slid into the kitchen. “Good Morning, Mikey!” said Mom with a smile. “You were up so late last night I thought I wouldn’t see you before you caught your bus.”

“Had a tough program to crack,” I said.

“Well,” she said, “now you can sit down and have a decent breakfast.” She turned around to pull some Sara Lees out of the microwave and plunk them down on the table.

“If you’d do your schoolwork when you’re supposed to you wouldn’t have to stay up all night,” growled Dad from behind his caffix and faxsheet. I sloshed some juice in a glass and poured it down, stuffed a Sara Lee into my mouth, and stood to go.

“What?” asked Mom. “That’s all the breakfast you’re going to have?”

“Haven’t got time,” I said. “I gotta get to school early to see if the program checks.” Dad growled something more and Mom spoke to quiet him, but I didn’t hear much ’cause I was out the door.

I caught the transys for school, just in case they were watching. Two blocks down the line I got off and transferred going back the other way, and a coupla transfers later I wound up whipping into Buddy’s All-Night Burgers. Rayno was in our booth, glaring into his caffix. It was 7:55 and I’d beat Georgie and Lisa there.

“What’s on line?” I asked as I dropped into my seat, across from Rayno. He just looked up at me through his eyebrows and I knew better than to ask again.

At eight Lisa came in. Lisa is Rayno’s girl, or at least she hopes she is. I can see why: Rayno’s seventeen–two years older than the rest of us–he wears flash plastic and his hair in The Wedge (Dad blew a chip when I said I wanted my hair cut like that) and he’s so cool he won’t even touch her, even when she’s begging for it. She plunked down in her seat next to Rayno and he didn’t blink.

Georgie still wasn’t there at 8:05. Rayno checked his watch again, then finally looked up from his caffix. “The compiler’s been cracked,” he said. Lisa and I both swore. We’d worked up our own little code to keep our Net private. I mean, our Olders would just blow boards if they ever found out what we were really up to. And now somebody’d broken our code.

“Georgie’s old man?” I asked.

“Looks that way.” I swore again. Georgie and I started the Net by linking our smartterms with some stuff we stored in his old man’s home business system. Now my Dad wouldn’t know an opsys if he crashed on one, but Georgie’s old man–he’s a greentooth. A tech-type. He’d found one of ours once before and tried to take it apart to see what it did. We’d just skinned out that time.

“Any idea how far in he got?” Lisa asked. Rayno looked through her, at the front door. Georgie’d just come in.

“We’re gonna find out,” Rayno said.

Georgie was coming in smiling, but when he saw that look in Rayno’s eyes he sat down next to me like the seat was booby-trapped.

“Good morning Georgie,” said Rayno, smiling like a shark.

“I didn’t glitch!” Georgie whined. “I didn’t tell him a thing!”

“Then how the Hell did he do it?”

“You know how he is, he’s weird! He likes puzzles!” Georgie looked to me for backup. “That’s how come I was late. He was trying to weasel me, but I didn’t tell him a thing! I think he only got it partway open. He didn’t ask about the Net!”

Rayno actually sat back, pointed at us all, and smiled. “You kids just don’t know how lucky you are. I was in the Net last night and flagged somebody who didn’t know the secures was poking Georgie’s compiler. I made some changes. By the time your old man figures them out, well…”

I sighed relief. See what I mean about being cool? Rayno had us outlooped all the time!

Rayno slammed his fist down on the table. “But Dammit Georgie, you gotta keep a closer watch on him!”

Then Rayno smiled and bought us all drinks and pie all the way around. Lisa had a cherry Coke, and Georgie and I had caffix just like Rayno. God, that stuff tastes awful! The cups were cleared away, and Rayno unzipped his jumper and reached inside.

“Now kids,” he said quietly, “it’s time for some serious fun.” He whipped out his microterm. “School’s off!”

I still drop a bit when I see that microterm–Geez, it’s a beauty! It’s a Zeilemann Nova 300, but we’ve spent so much time reworking it, it’s practically custom from the motherboard up. Hi-baud, rammed, rammed, ported, with the wafer display folds down to about the size of a vid cassette; I’d give an ear to have one like it. We’d used Georgie’s old man’s chipburner to tuck some special tricks in ROM and there wasn’t a system in CityNet it couldn’t talk to.

Rayno ordered up a smartcab and we piled out of Buddy’s. No more riding the transys for us, we were going in style! We charged the smartcab off to some law company and cruised all over Eastside.

Riding the boulevards got stale after awhile, so we rerouted to the library. We do a lot of our fun at the library, ’cause nobody ever bothers us there. Nobody ever goes there. We sent the smartcab, still on the law company account, off to Westside. Getting past the guards and the librarians was just a matter of flashing some ID and then we zipped off into the stacks.

Now, you’ve got to ID away your life to get on the libsys terms–which isn’t worth half a scare when your ID is all fudged like ours is–and they watch real careful. But they move their terms around a lot, so they’ve got ports on line all over the building. We found an unused port, and me and Georgie kept watch while Rayno plugged in his microterm and got on line.

“Get me into the Net,” he said, handing me the term. We don’t have a stored opsys yet for Netting, so Rayno gives me the fast and tricky jobs.

Through the dataphones I got us out of the libsys and into CityNet. Now, Olders will never understand. They still think a computer has got to be a brain in a single box. I can get the same results with opsys stored in a hundred places, once I tie them together. Nearly every computer has got a dataphone port, CityNet is a great linking system, and Rayno’s microterm has the smarts to do the job clean and fast so nobody flags on us. I pulled the compiler out of Georgie’s old man’s computer and got into our Net. Then I handed the term back to Rayno.

“Well, let’s do some fun. Any requests?” Georgie wanted something to get even with his old man, and I had a new routine cooking, but Lisa’s eyes lit up ’cause Rayno handed the term to her, first.

“I wanna burn Lewis,” she said.

“Oh fritz!” Georgie complained. “You did that last week!”

“Well, he gave me another F on a theme.”

“I never get F’s. If you’d read books once in a–”

“Georgie,” Rayno said softly, “Lisa’s on line.” That settled that. Lisa’s eyes were absolutely glowing.

Lisa got back into CityNet and charged a couple hundred overdue books to Lewis’s libsys account. Then she ordered a complete fax sheet of Encyclopedia Britannica printed out at his office. I got next turn.

Georgie and Lisa kept watch while I accessed. Rayno was looking over my shoulder. “Something new this week?”

“Airline reservations. I was with my Dad two weeks ago when he set up a business trip, and I flagged on maybe getting some fun. I scanned the ticket clerk real careful and picked up the access code.”

“Okay, show me what you can do.”

Accessing was so easy that I just wiped a couple of reservations first, to see if there were any bells and whistles.

None. No checks, no lockwords, no confirm codes. I erased a couple dozen people without crashing down or locking up. “Geez,” I said, “There’s no deep secures at all!”

“I been telling you. Olders are even dumber than they look. Georgie? Lisa? C’mon over here and see what we’re running!”

Georgie was real curious and asked a lot of questions, but Lisa just looked bored and snapped her gum and tried to stand closer to Rayno. Then Rayno said, “Time to get off Sesame Street. Purge a flight.”

I did. It was simple as a save. I punched a few keys, entered, and an entire plane disappeared from all the reservation files. Boy, they’d be surprised when they showed up at the airport. I started purging down the line, but Rayno interrupted.

“Maybe there’s no bells and whistles, but wipe out a whole block of flights and it’ll stand out. Watch this.” He took the term from me and cooked up a routine in RAM to do a global and wipe out every flight that departed at an :07 for the next year. “Now that’s how you do these things without waving a flag.”

“That’s sharp,” Georgie chipped in, to me. “Mike, you’re a genius! Where do you get these ideas?” Rayno got a real funny look in his eyes.

“My turn,” Rayno said, exiting the airline system.

“What’s next in the stack?” Lisa asked him.

“Yeah, I mean, after garbaging the airlines . . .” Georgie didn’t realize he was supposed to shut up.

“Georgie! Mike!” Rayno hissed. “Keep watch!” Soft, he added, “It’s time for The Big One.”

“You sure?” I asked. “Rayno, I don’t think we’re ready.”

“We’re ready.”

Georgie got whiney. “We’re gonna get in big trouble–”

“Wimp,” spat Rayno. Georgie shut up.

We’d been working on The Big One for over two months, but I still didn’t feel real solid about it. It almost made a clean if/then/else; if The Big One worked/then we’d be rich/else . . . it was the else I didn’t have down.

Georgie and me scanned while Rayno got down to business. He got back into CityNet, called the cracker opsys out of OurNet, and poked it into Merchant’s Bank & Trust. I’d gotten into them the hard way, but never messed with their accounts; just did it to see if I could do it. My data’d been sitting in their system for about three weeks now and nobody’d noticed. Rayno thought it would be really funny to use one bank computer to crack the secures on other bank computers.

While he was peeking and poking I heard walking nearby and took a closer look. It was just some old waster looking for a quiet place to sleep. Rayno was finished linking by the time I got back. “Okay kids,” he said, “this is it.” He looked around to make sure we were all watching him, then held up the term and stabbed the RETURN key. That was it. I stared hard at the display, waiting to see what else was gonna be. Rayno figured it’d take about ninety seconds.

The Big One, y’see, was Rayno’s idea. He’d heard about some kids in Sherman Oaks who almost got away with a five million dollar electronic fund transfer; they hadn’t hit a hangup moving the five mil around until they tried to dump it into a personal savings account with a $40 balance. That’s when all the flags went up.

Rayno’s cool; Rayno’s smart. We weren’t going to be greedy, we were just going to EFT fifty K. And it wasn’t going to look real strange, ’cause it got strained through some legitimate accounts before we used it to open twenty dummies.

If it worked.

The display blanked, flickered, and showed:

I started to shout, but remembered I was in a library. Georgie looked less terrified. Lisa looked like she was going to attack Rayno.

Rayno just cracked his little half smile, and started exiting. “Funtime’s over, kids.”

“I didn’t get a turn,” Georgie mumbled.

Rayno was out of all the nets and powering down. He turned, slow, and looked at Georgie through those eyebrows of his. “You are still on The List.”

Georgie swallowed it ’cause there was nothing else he could do. Rayno folded up the microterm and tucked it back inside his jumper.

We got a smartcab outside the library and went off to someplace Lisa picked for lunch. Georgie got this idea about garbaging up the smartcab’s brain so that the next customer would have a real state fair ride, but Rayno wouldn’t let him do it. Rayno didn’t talk to him during lunch, either.

After lunch I talked them into heading up to Martin’s Micros. That’s one of my favorite places to hang out. Martin’s the only Older I know who can really work a computer without blowing out his headchips, and he never talks down to me, and he never tells me to keep my hands off anything. In fact, Martin’s been real happy to see all of us, ever since Rayno bought that $3000 vidgraphics art animation package for Lisa’s birthday.

Martin was sitting at his term when we came in. “Oh, hi Mike! Rayno! Lisa! Georgie!” We all nodded. “Nice to see you again. What can I do for you today?”

“Just looking,” Rayno said.

“Well, that’s free.” Martin turned back to his term and punched a few more IN keys. “Damn!” he said to the term.

“What’s the problem?” Lisa asked.

“The problem is me,” Martin said. “I got this software package I’m supposed to be writing, but it keeps bombing out and I don’t know what’s wrong.”

Rayno asked, “What’s it supposed to do?”

“Oh, it’s a real estate system. Y’know, the whole future-values-in-current-dollars bit. Depreciation, inflation, amortization, tax credits–”

“Put that in our tang,” Rayno said. “What numbers crunch?”

Martin started to explain, and Rayno said to me, “This looks like your kind of work.” Martin hauled his three hundred pounds of fat out of the chair, and looked relieved as I dropped down in front of the term. I scanned the parameters, looked over Martin’s program, and processed a bit. Martin’d only made a few mistakes. Anybody could have. I dumped Martin’s program and started loading the right one in off the top of my head.

“Will you look at that?” Martin said.

I didn’t answer ’cause I was thinking in assembly. In ten minutes I had it in, compiled, and running test sets. It worked perfect, of course.

“I just can’t believe you kids,” Martin said. “You can program easier than I can talk.”

“Nothing to it,” I said.

“Maybe not for you. I knew a kid grew up speaking Arabic, used to say the same thing.” He shook his head, tugged his beard, looked me in the face, and smiled. “Anyhow, thanks loads, Mike. I don’t know how to . . .” He snapped his fingers. “Say, I just got something in the other day, I bet you’d be really interested in.” He took me over to the display case, pulled it out, and set it on the counter. “The latest word in microterms. The Zeilemann Starfire 600.”

I dropped a bit! Then I ballsed up enough to touch it. I flipped up the wafer display, ran my fingers over the touch pads, and I just wanted it so bad! “It’s smart,” Martin said. “Rammed, rammed, and ported.”

Rayno was looking at the specs with that cold look in his eye. “My 300 is still faster,” he said.

“It should be,” Martin said. “You customized it half to death. But the 600 is nearly as fast, and it’s stock, and it lists for $1400. I figure you must have spent nearly 3K upgrading yours.”

Cyberpunk – a short story by Bruce Bethke

Dave’s Philosophy – Ethics: Ethical Egoism & Altruism

Posted: at 9:10 pm

The word egoism derives from the Latin “ego”, which means “I”. Egoism is the idea of being selfish and always putting your own needs first without regard for the needs of others. Someone who is a complete egoist does not care at all about other people, but only about their own goals, interests, and benefit. An egoist will not necessarily be greedy and selfish in an obvious sense; for example, they may be polite and friendly, even happy to share and help others. However, their motive will always be their own gain: for example, they will help others in order to be helped in return at a later date, and they will obey laws because this helps to bring about peace and security in society, something from which they will benefit. Their motivation for seemingly thoughtful and caring actions will not be actual concern for others, but intelligent self-concern and prudence. There are two different kinds of egoism, so it is necessary to describe their differences: (i) Psychological Egoism; (ii) Ethical Egoism.

Psychological Egoism is not a moral theory, but aims to be a psychological theory about human motivation. Psychological Egoism holds that all of us are completely selfish and are hardwired to only think of our own needs. It is literally impossible for us to genuinely care about other people. Whenever we perform an action it is always with our own good in mind. When you help a friend it is so they will help you back. If a man gives money to a cancer charity this is not because he really cares about those who are suffering from cancer, but so that he can feel good about himself, or so that there will be good health care available for him to use if he is unlucky enough to get cancer. Ethical Egoism, on the other hand, states that it is possible for people to genuinely care about other people (to be altruistic), but that they should not bother caring about others. Instead people ought to be selfish and think only about their own needs. This article will focus purely on Ethical Egoism.


The word altruism derives from the French “autres”, which means “others”. A person who is altruistic cares about and is motivated by the needs of other people. Altruistic actions are selfless; they are done for the sake of other people and not for any personal gain, perhaps even sacrificing your own needs and desires for the sake of others. Many people argue that actions can only be moral if they are done for the sake of helping others rather than yourself. It is often thought that we have a natural inclination to be selfish, so that learning to think of others is an admirable thing to do. Mother Teresa is often seen as an example of altruism: she was a Catholic nun who dedicated her life to helping the poor in India. A Psychological Egoist would say that she really did this for her own benefit, to feel good about herself or get into heaven. An Ethical Egoist may view her care for others as genuine, but see it as foolish, because she should have been looking after her own needs, not other people’s needs.

There is a common assumption that you are either selfish in your actions, or selfless. This is perhaps too simplistic, for most of us probably have a complicated mixture of selfish desires and selfless desires. Many philosophers argue that egoism and altruism do not totally exclude each other; you do not have to lose all care for yourself in order to care properly about other people. Jesus said “love thy neighbour as thyself”, which clearly demonstrates a balance between your own needs and those of others: yes, you should care about and look after yourself, but you should also recognise the humanity in other people and care about them too; you should not hurt them and, where possible, you should help them.

Ethical Egoism

Ethical Egoism does not deny the possibility of altruism: Ethical Egoists would admit that it is perfectly possible to care about other people. However, according to the Ethical Egoist you ought not to care about the needs or welfare of others; you should only care about and act on your own needs and interests. This means that Ethical Egoism is a Normative Ethical theory, stating how people should act, and stating that you should act selfishly. The theory turns conventional morality on its head by saying it is good to be selfish: people are capable of being altruistic but they should not bother caring for others. Of course it makes sense to help other people and not to be outwardly greedy, to share for example, but only because this is the best way of achieving what you want for yourself in the long term.

Ethical Egoism is a teleological theory according to which the correct action a person should take is the action that has the best consequence for that person themselves, regardless of the effects on other people. As Michael Palmer puts it:

“Egoism maintains that each person ought to act to maximise his or her own long-term good or well-being. An egoist, in other words, is someone who holds that their one and only obligation is to themselves and their only duty is to serve their own self-interest… If an action produces benefits for them, they should do it; if it doesn’t, then it is morally acceptable for them not to do it.”

Michael Palmer, Moral Problems, page 34.

An Ethical Egoist only cares about his own needs and desires, and would view himself as having value, whilst others are not of value to him. This is very similar to the way that a commercial company’s only concern is its own profits: these companies exist to expand as much as they can, to conquer as much of the market as they can, and to overtake their rivals or even put them out of business. If a company takes actions which benefit its rivals at its own expense then from an economist’s point of view we would automatically call it mismanaged and condemn its actions as foolish. This is what the Ethical Egoist does to all actions which are altruistic: he condemns them as foolish. People should look after number one and not be burdened with the needs of others. Of course, this doesn’t mean that people should go out looting shops, stealing cars, killing enemies and generally doing what they want, because as Thomas Hobbes pointed out, such actions would lead to anarchy and wouldn’t be good for anybody. Rather, Ethical Egoists should live in peace with one another, help each other, and work together, because that is the best way for the individual to get the good living conditions he is after. You do not steal from others so they will not steal from you, and so on.

Ethical Egoism & hedonism

In many cases Ethical Egoists are also hedonists, which means that they view pleasure or happiness as the ultimate goal of life, to be specific, their own happiness and pleasure. Generally Ethical Egoists will recommend acting with long-term interests in mind rather than seeking short-term advantages; for example, instead of going out with friends all the time in your teenage years it would be better to spend more time working for school in order to get good qualifications and a good job in the future, which will bring a happy life rather than just a happy couple of years. Hedonists view pleasure as an intrinsic good, something which is good in and of itself, and they view pain or discomfort as intrinsically bad; however, hedonists argue that sometimes pain or discomfort will have to be accepted in order to achieve a good pleasurable thing. Exercise may be hard work and sometimes painful, and dieting will mean missing out on pleasurable experiences, but the health benefits will make the effort worth it. This is what is known as an instrumental good, something which is not good in itself but which leads to something else which is good. Another example is work; many people find it unpleasant and boring, so work is a bad thing to them. However, work means that you get paid, and so it helps you to get the pleasurable things you want: food, clothing, a house, trips to the cinema, etc. This means work is an instrumental good. For the average Ethical Egoist the goal of life is their own personal long-term pleasure, and achieving this will mean treating others well, but not because they care for others; rather, because it is an instrumental good that will allow them to have a pleasurable life.


The Greek philosopher Epicurus (341-270 BCE) was a hedonist and stated that “pleasure is our first and kindred good. It is the starting point of every choice and of every aversion.” It is from his name that we derive the word epicurean, which means someone who revels in the delights of food, which is ironic because Epicurus himself had a very plain diet since he suffered from stomach problems. Taking a line somewhat similar to Buddhism, Epicurus argued that true pleasure was the absence of pain in the body and trouble in the soul, and so he actually advocated a simple life where people try to give up desiring all the things they cannot have. He did not think that a life of sex, drink, and good food was a truly pleasurable life, because he held that the greater the upside is, the greater the downside will be also; for example, the more you drink the bigger the hangover is. Instead Epicurus advocated a life of sober reasoning and knowledge.

Epicurus also argued that a life cannot be truly pleasurable unless it is also a life of prudence, honour, and justice, which indicates an important idea that the happiness of the individual is dependent on the happiness of his community, so we must therefore treat others well. Epicurus would have said that the best way to be happy is to have friends and to act honourably towards other people.

Adam Smith

Adam Smith (1723-1790) was a philosopher and economist, and a champion of private property and the free market economy. He took the view that intentionally serving your own interests will bring benefits for all. Philip Stokes gives the following example: suppose that Jones, in seeking his own fortune, decides to set up and run his own business, manufacturing some common item of everyday need. In seeking only to provide for his own fortune, Jones’ entrepreneurial enterprise has a number of unintended benefits for others. First, he provides a livelihood for the people in his employ, thus benefiting them directly. Second, he makes more readily available some common item which previously had been more difficult or more expensive for his customers to obtain. Smith argued that a free market and competition would ensure that businesses kept their prices at competitive rates, helping to make customers better off as well as the business owners. Through selfish action everyone is better off; therefore, capitalist selfishness is the key to universal happiness and prosperity for all.

However, arguably the consequences of businesses acting in a self-interested way are not necessarily benefits for all; we need only look at the appalling conditions of people working in factories during the Industrial Revolution to see that this is so. Today the people of industrialised countries have a much more comfortable lifestyle, but most of the rest of the world still languishes in poverty and exploitation, and it is precisely through their subjugation that we have our high standard of living: we have so much material wealth because we exploit those who are powerless and poor; we give them the choice of working in dire conditions to make us cheap goods or starving. Arguably, the factories have not improved, they have just moved.

James Rachels

James Rachels discusses Ethical Egoism, but he does not endorse it, and in fact gives reasons to reject it. Nonetheless, his discussion of Ethical Egoism is very enlightening. He states that the idea that we have duties to others is a common assumption. We are often made to think that there is a natural obligation towards others because they are people and because our own actions could help or harm them. One argument for Ethical Egoism is that this simply is not so: we have no specific reason to think of others as important and no specific obligations towards them, whereas, on the other hand, we have a self-evident duty to look after ourselves.

One argument for Ethical Egoism that he considers is that altruism is self-defeating. According to this perspective each individual person is in the best position to serve their own interests, whilst others are not. I know intimately what I need, whereas others, if they try to take an interest in my life and help me, may not know what is best and should therefore mind their own business and not interfere. There is a sense in which helping others is an intrusion on their privacy, and similarly, there is the view that charity towards others is degrading: it robs them of their individual dignity and self-respect. The offer of charity says, in effect, that they are not competent to care for themselves. Rachels rejects this argument as ridiculous, as it is perfectly clear what a starving man needs, especially if he is actually asking for help. Also, arguing that we shouldn’t interfere because it invades another person’s dignity hardly seems like a valid egoistic argument, as it appeals to the needs of other people.

Next Rachels considers Thomas Hobbes (1588-1679). Hobbes believed that selfishness was natural (he was a Psychological Egoist), and therefore that Ethical Egoism was the only theory that made any sense. Rather than saying that Ethical Egoism runs counter to our common sense morality, Hobbes argued that it actually explains and underpins it. When we treat others well, help them, and do our best not to harm them, it is all done in order to create the kind of stable society which is best for our own personal needs. By not killing or stealing from others we ensure that we ourselves will not be killed or stolen from. By putting welfare measures in place we ensure that we ourselves will be helped in times of trouble. Hobbes takes the view that when we join society we make a promise not to hurt others and to help them when they are in need, and we make this promise so that we in turn are not hurt and so that we may be aided in times of need. What Hobbes has tried to do, then, is say that Ethical Egoism is not counter to our common morality; it is the foundation of our common morality.

Ayn Rand

Another famous egoist is Ayn Rand (1905-1982); however, her version of Ethical Egoism is very different from the average case of acting in self-interest. For Rand it is important to be a hardworking and creative person and to be as independent as you can. In her view people should work hard to satisfy their needs; they should not expect others to give them a hand-out or a free ride. If you work hard and achieve a good life for yourself, such as having wealth for example, then you have earned what you possess and no one should have the right to demand that you give it away to those less fortunate or successful than yourself. She views altruism as a moral philosophy founded on leeching; she sees it as a philosophy which tells people that they ought to give up all they have, and all their own interests, to satisfy the needs of others. In her view people should strive to fulfil their own needs and not be parasitical upon those who are more successful than themselves.

Interestingly, Rand also rejects those who get into positions of power and leech off of those below them, people such as tyrants and gang leaders. This is what marks her Ethical Egoism as different from that of the average Egoist: whereas the average Ethical Egoist will say that it is fine to abuse others to get what you want, all that matters being your own gain, Rand believes that this is wrong: you should work hard to get what you have, not steal it from others in some way. If you have worked hard and been creative then you have a right to be proud of yourself and to reap the rewards. In her view those who label this kind of independence and self-motivation as selfish are doing so because they wish to force creative and useful people to share with them. The following quote is from her novel, The Fountainhead:

“The first right on Earth is the right of the ego. Man’s first duty is to himself. His moral law is never to place his prime goal within the persons of others. His moral obligation is to do what he wishes, provided his wish does not depend primarily upon other men. A man thinks and works alone. A man cannot rob, exploit or rule alone. Rulers of men are not egotists. They create nothing. They exist entirely through the persons of others. Their goal is in their subjects, in the activity of enslaving. They are as dependent as the beggar, the social worker and the bandit. The form of dependence does not matter.”

Ayn Rand, The Fountainhead

Criticisms of Ethical Egoism

As you may imagine, there are many criticisms of Ethical Egoism, the most obvious simply being the insistence that selfish actions do not have moral worth. Read these criticisms and consider how an Ethical Egoist might respond to defend their view:

1) Anything can be justified, so long as you can profit from it and get away with it.

It is clear that if everyone were to adopt Ethical Egoism then, in general, life would function admirably well: people would help each other because team work produces the best results for every individual, and people would not harm each other because everyone is better off in a world where they feel safe and protected. However, what if the opportunity arises for a person to gain from harming another person and get away with it? Suppose, for example, that I am good with computers and know how to hack websites and hide my trail; why not commit some fraud and live a millionaire lifestyle? Or what if I was in a secluded place and came across a man asleep on a bench with a briefcase full of cash; why not kill him and take the cash and run? And why stop at one killing if I can profit from many, perhaps becoming a drugs baron, living in luxury, safe and secure, whilst people die to line my pockets? If Ethical Egoism is true then it becomes morally correct to hurt others when you can gain from it, just so long as you can get away with it. Surely it is the very point of morality to hold our selfish and violent urges at bay, and yet Ethical Egoism gives them clear justification as and when you can get away with it.

However, James Rachels claims that this attack against Ethical Egoism is ineffective because it simply presumes that Ethical Egoism is false; the criticism assumes that it is wrong to hurt others for personal gain, which is essentially just assuming that Ethical Egoism is false. Surely an Ethical Egoist would just accept that it was right to hurt others for gain; as Hobbes put it, in a war or conflict the cardinal virtues are “force and fraud”: violence and trickery.

2) Ethical Egoism cannot provide answers for moral conflicts

Kurt Baier argues that the reason why we need morality is in order for it to settle conflicts of interest; however, Ethical Egoism does not provide a means to resolve these conflicts and actually exacerbates them. Thus, it is an insufficient moral theory. Imagine, for example, that Kate and Bruce are getting divorced and are arguing over who should have custody of their children. Surely moral rules should be in place to establish who is the best parent to care for the children, who is most deserving of the custody, and so on: morality is there to resolve the problem. However, under Ethical Egoism a judge has no reason to care who the children end up with because neither option is particularly in his interests, unless one side offers a bribe of course. Moreover, Ethical Egoism would actually exacerbate the problem by encouraging both Kate and Bruce to argue all the more in pursuit of their own desires: each ought to do whatever they can to get their own way, without any care or concern for the effects on the other party, or even their children. So we see that rather than resolving the conflict Ethical Egoism will actually make it worse. Baier states that the Ethical Egoist solution to the conflict is for each side to up their game in their efforts to win custody, for Kate to seek to liquidate Bruce (either kill him or somehow make him ineligible to win) and for Bruce to attempt the same with Kate. This escalates the conflict and so is the exact opposite of what morality is meant to do.

James Rachels argues that this attack is not completely successful against Ethical Egoism because it is based on the assumption that morality exists to resolve conflicts in order to create harmony, a view which an Egoist might not agree with. An Egoist might say that life is essentially a long series of conflicts in which each person is struggling to come out on top. For the Egoist morality is not about amicably resolving conflicts and compromising; the good man is the one who wins and gets what he wants.

3) Ethical Egoism is arbitrary, like racism

James Rachels rejects Ethical Egoism on the basis that it makes unjustifiable and arbitrary distinctions between people. There are numerous ethical perspectives which create distinctions between groups of people, for example, racism. Racism works by dividing the people of the world into two groups: those of my race and those not of my race. Next it asserts that one group (your own) is superior in some way to the other group. This is then used to justify unequal treatment of those who are not of your race. In the past white racists have asserted that non-whites are intellectually inferior, or morally inferior, and this meant that it was acceptable for whites to get better treatment than non-whites, and it was acceptable for non-whites to do the menial jobs, or to be slaves, or to have their countries invaded. In reality there are no important genetic or cultural differences between the races which would justify saying that one group was superior to the other in any way. We reject racism, xenophobia, and other prejudices such as sexism because we see them as groundless: there is no valid reason to make a division between one superior group and another inferior group. Rachels argues that if we look closely at Ethical Egoism it makes the same mistake:

“Ethical Egoism is a moral theory of the same type [as racism]. It advocates that each of us divide the world into two categories, ourselves and all the rest, and that we regard the interests of those in the first group as more important than the interests of those in the second group. But each of us can ask, what is the difference between me and everyone else that justifies placing myself in this special category? Am I more intelligent? Do I enjoy my life more? Are my accomplishments greater? Do I have needs or abilities that are so different from the needs or abilities of others? In short, what makes me so special? Failing an answer, it turns out that Ethical Egoism is an arbitrary doctrine, in the same way that racism is arbitrary. And this, in addition to explaining why Ethical Egoism is unacceptable, also sheds some light on the question of why we should care about others.”

James Rachels, Ethical Egoism

Rachels rejects Ethical Egoism because it takes the view that an individual is, from his own perspective, more important than others, even to the point where he might willingly sacrifice millions for his own needs, but there is no rational basis for an individual to think of himself as being any more important than any others. Thus, Ethical Egoism is baseless and we must recognise that others and their needs are just as important as ourselves and our own needs. Yes it is normal to seek your own happiness, but this cannot justify treating others like they have little or no value, because these other people are no different from ourselves.

Summary and Conclusion

Whether or not people have a duty to help others, or at least not to harm them, is a key question in Normative Ethics. Ethical Egoists argue that you should only care about yourself, and ignore the needs of others. This means that it would be acceptable to hurt other people for your own benefit, so long as you can get away with it. James Rachels argues that it is illogical to think of yourself as being more important than anyone else, indeed, that this is equivalent to racism. Is he correct, or is selfishness a good thing?

Dave’s Philosophy – Ethics: Ethical Egoism & Altruism

Incompatibilism – Wikipedia, the free encyclopedia

Posted: at 9:05 pm

Incompatibilism is the view that a deterministic universe is completely at odds with the notion that people have a free will; that there is a dichotomy between determinism and free will where philosophers must choose one or the other. This view is pursued in at least three ways: libertarians deny that the universe is deterministic, the hard determinists deny that any free will exists, and pessimistic incompatibilists (hard indeterminists) deny both that the universe is determined and that free will exists. Some of these incompatibilistic views have more trouble than the others in dealing with the standard argument against free will.

Incompatibilism is contrasted with compatibilism, which rejects the determinism/free will dichotomy. Compatibilists maintain free will by defining it as more of a ‘freedom to act’, a move that has been met with some criticism.

Metaphysical Libertarianism argues that free will is real and that determinism is false. Such dualism risks an infinite regress, however;[1] if any such mind is real, an objection can still be raised using the standard argument against free will that it is shaped by a higher power (a necessity or chance). Libertarian Robert Kane (among others) presented an alternative model:

Robert Kane (editor of the Oxford Handbook of Free Will) is a leading incompatibilist philosopher in favour of free will. Kane seeks to hold persons morally responsible for decisions that involved indeterminism in their process. Critics maintain that Kane fails to overcome the greatest challenge to such an endeavor: “the argument from luck”.[2] Namely, if a critical moral choice is a matter of luck (indeterminate quantum fluctuations), then on what grounds can we hold a person responsible for their final action? Moreover, even if we imagine that a person can make an act of will ahead of time, to make the moral action more probable in the upcoming critical moment, this act of ‘willing’ was itself a matter of luck.

Libertarianism in the philosophy of mind is unrelated to the like-named political philosophy. It suggests that we actually do have free will, that it is incompatible with determinism, and that therefore the future is not determined. For example, at this moment, one could either continue reading this article if one wanted, or cease. Under this assertion, being that one could do either, the fact of how the history of the world will continue to unfold is not currently determined one way or the other.

One famous proponent of this view was Lucretius, who asserted that free will arises out of the random, chaotic movements of atoms, called the “clinamen”. One major objection to this view is that science has gradually shown that more and more of the physical world obeys completely deterministic laws, and seems to suggest that our minds are just as much part of the physical world as anything else. If these assumptions are correct, incompatibilist libertarianism can only be maintained as the claim that free will is a supernatural phenomenon which does not obey the laws of nature (as, for instance, maintained by some religious traditions).

However, many libertarian viewpoints now rely upon an indeterministic view of the physical universe, under the assumption that the idea of a deterministic, “clockwork” universe has become outdated since the advent of quantum mechanics[citation needed]. By assuming an indeterministic universe, libertarian philosophical constructs can be proposed under the assumption of physicalism.

There are libertarian viewpoints based upon indeterminism and physicalism, a position closely related to naturalism.[3] A major problem for naturalistic libertarianism is to explain how indeterminism can be compatible with rationality and with appropriate connections between an individual’s beliefs, desires, general character, and actions. A variety of naturalistic libertarianism is promoted by Robert Kane,[4][5] who emphasizes that if our character is formed indeterministically (in “self-forming actions”), then our actions can still flow from our character and yet be incompatibilistically free.

Alternatively, libertarian viewpoints based upon indeterminism have been proposed without the assumption of naturalism. At the time C. S. Lewis wrote Miracles,[6] quantum mechanics (and physical indeterminism) was only in the initial stages of acceptance, but Lewis noted the logical possibility that, if the physical world were proved to be indeterministic, this would provide an entry (interaction) point into what is traditionally viewed as a closed system: an event scientifically described as physically probable or improbable could then be philosophically described as the action of a non-physical entity on physical reality (noting that, under a physicalist point of view, the non-physical entity must be independent of the self-identity or mental processing of the sentient being). Lewis mentions this only in passing, making clear that his thesis does not depend on it in any way.

Others may use some form of Donald Davidson’s anomalous monism to suggest that although the mind is in fact part of the physical world, it involves a different level of description of the same facts, so that although there are deterministic laws under the physical description, there are no such laws under the mental description, and thus our actions are free and not determined.[7]

Those who reject free will and accept determinism are variously known as “hard determinists”, hard incompatibilists, free will skeptics, illusionists, or impossibilists. They believe that there is no ‘free will’ and that any sense to the contrary is an illusion.[8] Hard determinists do not, of course, deny that one has desires, but they say that these desires are causally determined by an unbroken chain of prior occurrences. According to this philosophy, no wholly random, spontaneous, mysterious, or miraculous events occur. Determinists sometimes assert that it is stubborn to resist scientifically motivated determinism on purely intuitive grounds about one’s own sense of freedom. They reason that the history of science suggests that reality operates deterministically.

William James said that philosophers (and scientists) have an “antipathy to chance.”[9] Absolute chance, a possible implication of quantum mechanics and the indeterminacy principle, implies a lack of causality[citation needed]. This possibility often disturbs those who assume there must be a causal and lawful explanation for all events.

Since many believe that free will is necessary for moral responsibility, hard determinism may imply disastrous consequences for their theory of ethics.

As something of a solution to this predicament, it has been suggested that, for the sake of preserving moral responsibility and the concept of ethics, one might embrace the so-called “illusion” of free will even while believing, on determinist grounds, that free will does not exist. Critics argue that this move renders morality merely another “illusion”, or else that it is simply hypocritical.

The determinist will add that, even if denying free will does render morality incoherent, such an unfortunate result has no bearing on the truth. Note, however, that hard determinists often have some sort of ‘moral system’ that relies explicitly on determinism. A determinist’s moral system simply bears in mind that every agent’s actions in a given situation are, in theory, predictable from the interplay of environment and upbringing. For instance, the determinist may still punish undesirable behaviours for reasons of behaviour modification or deterrence.

While hard determinism clearly opposes the concept of free will, some have suggested that free will might also be incompatible with non-determinism (often on the basis of the lack of control associated with pure randomness).[10][11][12] This position is known as hard incompatibilism, and it has been used as an argument against libertarian incompatibilism.

Under the assumption of naturalism and indeterminism, where only the natural world exists and the natural world is indeterministic (events are not predetermined, e.g., for quantum mechanical reasons, and every event has a probability assigned to it), no event can be determined by a physical organism’s perceived free will, nor can any event be strictly determined by anything at all.

Hard incompatibilism differs from hard determinism in that it does not commit to the truth of determinism.[13] By and large, supporters of hard incompatibilism accept both libertarian critiques of compatibilism and compatibilist critiques of libertarianism.

In recent years, researchers in the field of experimental philosophy have been working to determine whether ordinary people, who are not experts in this field, naturally have compatibilist or incompatibilist intuitions about determinism and moral responsibility.[14] Some researchers have even conducted cross-cultural studies.[15] The debate about whether people naturally have compatibilist or incompatibilist intuitions has not come out overwhelmingly in favor of one view or the other; instead, there is some evidence that people can naturally hold both views. For instance, when people are presented with abstract cases asking whether a person could be morally responsible for an immoral act when they could not have done otherwise, they tend to say no, giving incompatibilist answers; but when presented with a specific immoral act committed by a specific person, they tend to say that the person is morally responsible for their actions even if those actions were determined (that is, they also give compatibilist answers).[16]

See original here:
Incompatibilism – Wikipedia, the free encyclopedia
