Tag Archives: post

Should privacy legislation influence how courts interpret the …

Posted: September 18, 2016 at 8:12 am

I recently posted a revised draft of my forthcoming article, The Effect of Legislation on Fourth Amendment Interpretation, and I thought I would blog a bit about it. The article considers a recurring question in Fourth Amendment law: When courts are called on to interpret the Fourth Amendment, and there is privacy legislation on the books that relates to the government's conduct, should the existence of legislation have any effect on how the Fourth Amendment is interpreted? And if it should have an effect, what effect should it have?

I was led to this question by reading a lot of cases in which the issue came up and was answered in very different ways by particularly prominent judges. When I assembled all the cases, I found that judges had articulated three different answers. None of the judges seemed aware that the question had come up in other cases and had been answered differently there. Each of the three answers seemed plausible, and each tapped into important traditions in constitutional interpretation. So you have a pretty interesting situation: Really smart judges were running into the same question and answering it in very different ways, each rooted in substantial traditions, with no one approach predominating and no conversation about which approach was best. It seemed like a fun issue to explore in an article.

In this post I'll summarize the three approaches courts have taken. I call the approaches "influence," "displacement," and "independence." For each approach, I'll give one illustrative case. But there's a lot more where that came from: For more details on the three approaches and the cases supporting them, please read the draft article.

1. Influence. In the influence cases, legislation is considered a possible standard for judicial adoption under the Fourth Amendment. The influence cases rest on a pragmatic judgment: If courts must make difficult judgment calls about how to balance privacy and security, and legislatures have done so already in enacting legislation, courts can draw lessons from the thoughtful judgment of a co-equal branch. Investigative legislation provides an important standard for courts to consider in interpreting the Fourth Amendment. It's not binding on courts, but it's a relevant consideration.

The Supreme Court's decision in United States v. Watson is an example of the influence approach. Watson considered whether it is constitutionally reasonable for a postal inspector to make a public arrest for a felony offense based on probable cause but without a warrant. A federal statute expressly authorized such warrantless arrests. The court ruled that the arrests were constitutional without a warrant and that the statute was constitutional. Justice White's majority opinion relied heavily on deference to Congress's legislative judgment. According to Justice White, the statute authorizing the arrests represents a judgment by Congress that it is not unreasonable under the Fourth Amendment for postal inspectors to arrest without a warrant provided they have probable cause to do so. That judgment was entitled to presumptive deference as the considered judgment of a co-equal branch. Because there is a strong presumption of constitutionality due to an Act of Congress, the court stated, especially when it turns on what is reasonable, then obviously the Court should be reluctant to decide that a search thus authorized by Congress was unreasonable and that the Act was therefore unconstitutional.

2. Displacement. In the displacement cases, the existence of legislation counsels against Fourth Amendment protection that might interrupt the statutory scheme. Because legislatures can often do a better job at balancing privacy and security in new technologies as compared to courts, courts should reject Fourth Amendment protection as long as legislatures are protecting privacy adequately to avoid interfering with the careful work of the legislative branch. The existence of investigative legislation effectively preempts the field and displaces Fourth Amendment protection that may otherwise exist.

Justice Alito's concurrence in Riley v. California is an example of the displacement approach. Riley held that the government must obtain a search warrant before searching a cellphone incident to a suspect's lawful arrest. Justice Alito concurred, agreeing with the majority only in the absence of adequate legislation regulating cellphone searches. "I would reconsider the question presented here," he wrote, "if either Congress or state legislatures, after assessing the legitimate needs of law enforcement and the privacy interests of cell phone owners, enact legislation that draws reasonable distinctions based on categories of information or perhaps other variables."

The enactment of investigative legislation should discourage judicial intervention, Justice Alito reasoned, because "[l]egislatures, elected by the people, are in a better position than we are to assess and respond to the changes that have already occurred and those that almost certainly will take place in the future." Although Fourth Amendment protection was necessary in the absence of legislation, the enactment of legislation might be reason to withdraw Fourth Amendment protection to avoid the very unfortunate result of federal courts using the blunt instrument of the Fourth Amendment to try to protect privacy in emerging technologies.

3. Independence. In the independence cases, courts treat legislation as irrelevant to the Fourth Amendment. Legislatures are free to supplement privacy protections by enacting statutes, of course. But from the independence perspective, legislation sheds no light on what the Fourth Amendment requires. Courts must independently interpret the Fourth Amendment, and what legislatures have done has no relevance.

An example of independence is Virginia v. Moore, where the Supreme Court decided whether the search incident to a lawful arrest exception incorporates the state law of arrest. Moore was arrested despite a state law saying his crime could not lead to arrest; the question was whether the state law violation rendered the arrest unconstitutional. According to the court, whether state law made the arrest lawful was irrelevant to the Fourth Amendment. It was the court's duty to interpret the Fourth Amendment, and what the legislature decided about when arrests could be made was a separate question. History suggested that the Fourth Amendment did not incorporate statutes. And the state's decision about when to make arrests was not based on the Fourth Amendment but on other considerations, such as the costs of arrests and whether the legislature valued privacy more than the Fourth Amendment required. Constitutionalizing the state standard would only frustrate the state's efforts to achieve those goals, as it would mean los[ing] control of the regulatory scheme and might lead the state to abandon restrictions on arrest altogether. For that reason, the statute regulating the police was independent of the Fourth Amendment standard.

Those are the three approaches. The next question is, which is best? I'll offer some thoughts on that in my next post.


DNA repair – Wikipedia, the free encyclopedia

Posted: September 8, 2016 at 6:32 am

DNA damage resulting in multiple broken chromosomes

DNA repair is a collection of processes by which a cell identifies and corrects damage to the DNA molecules that encode its genome. In human cells, both normal metabolic activities and environmental factors such as radiation can cause DNA damage, resulting in as many as 1 million individual molecular lesions per cell per day.[1] Many of these lesions cause structural damage to the DNA molecule and can alter or eliminate the cell’s ability to transcribe the gene that the affected DNA encodes. Other lesions induce potentially harmful mutations in the cell’s genome, which affect the survival of its daughter cells after it undergoes mitosis. As a consequence, the DNA repair process is constantly active as it responds to damage in the DNA structure. When normal repair processes fail, and when cellular apoptosis does not occur, irreparable DNA damage may occur, including double-strand breaks and DNA crosslinkages (interstrand crosslinks or ICLs).[2][3] This can eventually lead to malignant tumors, or cancer as per the two hit hypothesis.

The rate of DNA repair is dependent on many factors, including the cell type, the age of the cell, and the extracellular environment. A cell that has accumulated a large amount of DNA damage, or one that no longer effectively repairs damage incurred to its DNA, can enter one of three possible states: an irreversible state of dormancy, known as senescence; cell suicide, also known as apoptosis or programmed cell death; or unregulated cell division, which can lead to the formation of a cancerous tumor.

The DNA repair ability of a cell is vital to the integrity of its genome and thus to the normal functionality of that organism. Many genes that were initially shown to influence life span have turned out to be involved in DNA damage repair and protection.[4]

The 2015 Nobel Prize in Chemistry was awarded to Tomas Lindahl, Paul Modrich, and Aziz Sancar for their work on the molecular mechanisms of DNA repair processes.[5][6]

DNA damage, due to environmental factors and normal metabolic processes inside the cell, occurs at a rate of 10,000 to 1,000,000 molecular lesions per cell per day.[1] While this constitutes only 0.000165% of the human genome’s approximately 6 billion bases (3 billion base pairs), unrepaired lesions in critical genes (such as tumor suppressor genes) can impede a cell’s ability to carry out its function and appreciably increase the likelihood of tumor formation and contribute to tumor heterogeneity.
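As a rough check on the arithmetic behind that figure, dividing the lower bound of the lesion estimate by the roughly 6 billion bases of the genome reproduces a percentage of the same order as the one quoted. The short Python sketch below only illustrates that calculation; it is not part of the cited source.

# Illustrative arithmetic only: fraction of the genome hit by lesions per day.
lesions_per_cell_per_day = 10_000      # lower bound of the estimate cited above
genome_bases = 6_000_000_000           # ~3 billion base pairs = ~6 billion bases
fraction = lesions_per_cell_per_day / genome_bases
print(f"{fraction:.6%}")               # ~0.000167%, in line with the 0.000165% quoted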

The vast majority of DNA damage affects the primary structure of the double helix; that is, the bases themselves are chemically modified. These modifications can in turn disrupt the molecules’ regular helical structure by introducing non-native chemical bonds or bulky adducts that do not fit in the standard double helix. Unlike proteins and RNA, DNA usually lacks tertiary structure and therefore damage or disturbance does not occur at that level. DNA is, however, supercoiled and wound around “packaging” proteins called histones (in eukaryotes), and both superstructures are vulnerable to the effects of DNA damage.

DNA damage can be subdivided into two main types: endogenous damage, such as attack by reactive oxygen species produced from normal metabolic byproducts, and exogenous damage caused by external agents such as ultraviolet light, other radiation, and chemicals.

The replication of damaged DNA before cell division can lead to the incorporation of wrong bases opposite damaged ones. Daughter cells that inherit these wrong bases carry mutations from which the original DNA sequence is unrecoverable (except in the rare case of a back mutation, for example, through gene conversion).

There are several types of damage to DNA due to endogenous cellular processes: oxidation of bases by reactive oxygen species, alkylation of bases (usually methylation), hydrolysis of bases such as deamination, depurination, and depyrimidination, and mismatches of bases arising from errors in DNA replication.

Damage caused by exogenous agents comes in many forms. Some examples are ultraviolet light, which crosslinks adjacent pyrimidine bases into pyrimidine dimers; ionizing radiation, which causes breaks in DNA strands; thermal disruption, which increases the rate of depurination; and environmental chemicals such as polycyclic aromatic hydrocarbons, which form bulky DNA adducts.

UV damage, alkylation/methylation, X-ray damage and oxidative damage are examples of induced damage. Spontaneous damage can include the loss of a base, deamination, sugar ring puckering and tautomeric shift.

In human cells, and eukaryotic cells in general, DNA is found in two cellular locations: inside the nucleus and inside the mitochondria. Nuclear DNA (nDNA) exists as chromatin during non-replicative stages of the cell cycle and is condensed into aggregate structures known as chromosomes during cell division. In either state the DNA is highly compacted and wound up around bead-like proteins called histones. Whenever a cell needs to express the genetic information encoded in its nDNA the required chromosomal region is unravelled, genes located therein are expressed, and then the region is condensed back to its resting conformation. Mitochondrial DNA (mtDNA) is located inside mitochondria organelles, exists in multiple copies, and is also tightly associated with a number of proteins to form a complex known as the nucleoid. Inside mitochondria, reactive oxygen species (ROS), or free radicals, byproducts of the constant production of adenosine triphosphate (ATP) via oxidative phosphorylation, create a highly oxidative environment that is known to damage mtDNA. A critical enzyme in counteracting the toxicity of these species is superoxide dismutase, which is present in both the mitochondria and cytoplasm of eukaryotic cells.

Senescence, an irreversible process in which the cell no longer divides, is a protective response to the shortening of the chromosome ends. The telomeres are long regions of repetitive noncoding DNA that cap chromosomes and undergo partial degradation each time a cell undergoes division (see Hayflick limit).[10] In contrast, quiescence is a reversible state of cellular dormancy that is unrelated to genome damage (see cell cycle). Senescence in cells may serve as a functional alternative to apoptosis in cases where the physical presence of a cell for spatial reasons is required by the organism,[11] which serves as a “last resort” mechanism to prevent a cell with damaged DNA from replicating inappropriately in the absence of pro-growth cellular signaling. Unregulated cell division can lead to the formation of a tumor (see cancer), which is potentially lethal to an organism. Therefore, the induction of senescence and apoptosis is considered to be part of a strategy of protection against cancer.[12]

It is important to distinguish between DNA damage and mutation, the two major types of error in DNA. DNA damages and mutation are fundamentally different. Damages are physical abnormalities in the DNA, such as single- and double-strand breaks, 8-hydroxydeoxyguanosine residues, and polycyclic aromatic hydrocarbon adducts. DNA damages can be recognized by enzymes, and, thus, they can be correctly repaired if redundant information, such as the undamaged sequence in the complementary DNA strand or in a homologous chromosome, is available for copying. If a cell retains DNA damage, transcription of a gene can be prevented, and, thus, translation into a protein will also be blocked. Replication may also be blocked or the cell may die.

In contrast to DNA damage, a mutation is a change in the base sequence of the DNA. A mutation cannot be recognized by enzymes once the base change is present in both DNA strands, and, thus, a mutation cannot be repaired. At the cellular level, mutations can cause alterations in protein function and regulation. Mutations are replicated when the cell replicates. In a population of cells, mutant cells will increase or decrease in frequency according to the effects of the mutation on the ability of the cell to survive and reproduce. Although distinctly different from each other, DNA damages and mutations are related because DNA damages often cause errors of DNA synthesis during replication or repair; these errors are a major source of mutation.

Given these properties of DNA damage and mutation, it can be seen that DNA damages are a special problem in non-dividing or slowly dividing cells, where unrepaired damages will tend to accumulate over time. On the other hand, in rapidly dividing cells, unrepaired DNA damages that do not kill the cell by blocking replication will tend to cause replication errors and thus mutation. The great majority of mutations that are not neutral in their effect are deleterious to a cell’s survival. Thus, in a population of cells composing a tissue with replicating cells, mutant cells will tend to be lost. However, infrequent mutations that provide a survival advantage will tend to clonally expand at the expense of neighboring cells in the tissue. This advantage to the cell is disadvantageous to the whole organism, because such mutant cells can give rise to cancer. Thus, DNA damages in frequently dividing cells, because they give rise to mutations, are a prominent cause of cancer. In contrast, DNA damages in infrequently dividing cells are likely a prominent cause of aging.[13]

Single-strand and double-strand DNA damage

Cells cannot function if DNA damage corrupts the integrity and accessibility of essential information in the genome (but cells remain superficially functional when non-essential genes are missing or damaged). Depending on the type of damage inflicted on the DNA’s double helical structure, a variety of repair strategies have evolved to restore lost information. If possible, cells use the unmodified complementary strand of the DNA or the sister chromatid as a template to recover the original information. Without access to a template, cells use an error-prone recovery mechanism known as translesion synthesis as a last resort.

Damage to DNA alters the spatial configuration of the helix, and such alterations can be detected by the cell. Once damage is localized, specific DNA repair molecules bind at or near the site of damage, inducing other molecules to bind and form a complex that enables the actual repair to take place.

Cells are known to eliminate three types of damage to their DNA by chemically reversing it. These mechanisms do not require a template, since the types of damage they counteract can occur in only one of the four bases. Such direct reversal mechanisms are specific to the type of damage incurred and do not involve breakage of the phosphodiester backbone. The formation of pyrimidine dimers upon irradiation with UV light results in an abnormal covalent bond between adjacent pyrimidine bases. The photoreactivation process directly reverses this damage by the action of the enzyme photolyase, whose activation is obligately dependent on energy absorbed from blue/UV light (300-500 nm wavelength) to promote catalysis.[14] Photolyase, an old enzyme present in bacteria, fungi, and most animals, no longer functions in humans,[15] who instead use nucleotide excision repair to repair damage from UV irradiation. Another type of damage, methylation of guanine bases, is directly reversed by the protein methyl guanine methyl transferase (MGMT), the bacterial equivalent of which is called ogt. This is an expensive process because each MGMT molecule can be used only once; that is, the reaction is stoichiometric rather than catalytic.[16] A generalized response to methylating agents in bacteria is known as the adaptive response and confers a level of resistance to alkylating agents upon sustained exposure by upregulation of alkylation repair enzymes.[17] The third type of DNA damage reversed by cells is certain methylation of the bases cytosine and adenine.

When only one of the two strands of a double helix has a defect, the other strand can be used as a template to guide the correction of the damaged strand. In order to repair damage to one of the two paired molecules of DNA, there exist a number of excision repair mechanisms that remove the damaged nucleotide and replace it with an undamaged nucleotide complementary to that found in the undamaged DNA strand.[16]

Double-strand breaks, in which both strands in the double helix are severed, are particularly hazardous to the cell because they can lead to genome rearrangements. Three mechanisms exist to repair double-strand breaks (DSBs): non-homologous end joining (NHEJ), microhomology-mediated end joining (MMEJ), and homologous recombination.[16] PVN Acharya noted that double-strand breaks and a “cross-linkage joining both strands at the same point is irreparable because neither strand can then serve as a template for repair. The cell will die in the next mitosis or in some rare instances, mutate.”[2][3]

In NHEJ, DNA Ligase IV, a specialized DNA ligase that forms a complex with the cofactor XRCC4, directly joins the two ends.[21] To guide accurate repair, NHEJ relies on short homologous sequences called microhomologies present on the single-stranded tails of the DNA ends to be joined. If these overhangs are compatible, repair is usually accurate.[22][23][24][25] NHEJ can also introduce mutations during repair. Loss of damaged nucleotides at the break site can lead to deletions, and joining of nonmatching termini forms insertions or translocations. NHEJ is especially important before the cell has replicated its DNA, since there is no template available for repair by homologous recombination. There are “backup” NHEJ pathways in higher eukaryotes.[26] Besides its role as a genome caretaker, NHEJ is required for joining hairpin-capped double-strand breaks induced during V(D)J recombination, the process that generates diversity in B-cell and T-cell receptors in the vertebrate immune system.[27]

MMEJ starts with short-range end resection by the MRE11 nuclease on either side of a double-strand break to reveal microhomology regions.[28] In further steps,[29] PARP1 is required and may act at an early step in MMEJ. There is pairing of microhomology regions followed by recruitment of flap structure-specific endonuclease 1 (FEN1) to remove overhanging flaps. This is followed by recruitment of XRCC1-LIG3 to the site for ligating the DNA ends, yielding an intact DNA.

DNA double-strand breaks in mammalian cells are primarily repaired by homologous recombination (HR) and non-homologous end joining (NHEJ).[30] In an in vitro system, MMEJ occurred in mammalian cells at levels of 10-20% of HR when both HR and NHEJ mechanisms were also available.[28] MMEJ is always accompanied by a deletion, so that MMEJ is a mutagenic pathway for DNA repair.[31]
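To make the deletion-prone character of MMEJ concrete, here is a deliberately simplified Python sketch. It is a toy model of the sequence logic only, not of the enzymology: it joins two resected ends at the longest shared microhomology and reports the sequence that is lost, which is why every repair event in this model carries a deletion.

def mmej_join(left_end, right_end, min_homology=5):
    """Join two DNA ends at the longest shared microhomology (toy model).

    Returns (repaired_sequence, deleted_sequence): the longest suffix of
    left_end found inside right_end is used as the joint, and the flap
    up to and including that match in right_end is trimmed away.
    """
    for size in range(len(left_end), min_homology - 1, -1):
        micro = left_end[-size:]
        pos = right_end.find(micro)
        if pos != -1:
            deleted = right_end[:pos + size]          # trimmed flap plus one homology copy
            return left_end + right_end[pos + size:], deleted
    raise ValueError("no microhomology of sufficient length found")

left = "ATCGGATTACAGGCTA"    # end 1 after resection; terminates in the repeat GGCTA
right = "TTGGCTATTTCCGAAT"   # end 2 carries the same GGCTA repeat internally
repaired, lost = mmej_join(left, right)
print(repaired)              # ATCGGATTACAGGCTATTTCCGAAT
print(lost)                  # TTGGCTA: the repaired locus is shorter than the original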

Homologous recombination requires the presence of an identical or nearly identical sequence to be used as a template for repair of the break. The enzymatic machinery responsible for this repair process is nearly identical to the machinery responsible for chromosomal crossover during meiosis. This pathway allows a damaged chromosome to be repaired using a sister chromatid (available in G2 after DNA replication) or a homologous chromosome as a template. DSBs caused by the replication machinery attempting to synthesize across a single-strand break or unrepaired lesion cause collapse of the replication fork and are typically repaired by recombination.

Topoisomerases introduce both single- and double-strand breaks in the course of changing the DNA’s state of supercoiling, which is especially common in regions near an open replication fork. Such breaks are not considered DNA damage because they are a natural intermediate in the topoisomerase biochemical mechanism and are immediately repaired by the enzymes that created them.

A team of French researchers bombarded Deinococcus radiodurans with radiation to study the mechanism of double-strand break DNA repair in that bacterium. At least two copies of the genome, with random DNA breaks, can form DNA fragments through annealing. Partially overlapping fragments are then used for synthesis of homologous regions through a moving D-loop that can continue extension until they find complementary partner strands. In the final step there is crossover by means of RecA-dependent homologous recombination.[32]

Translesion synthesis (TLS) is a DNA damage tolerance process that allows the DNA replication machinery to replicate past DNA lesions such as thymine dimers or AP sites.[33] It involves switching out regular DNA polymerases for specialized translesion polymerases (i.e. DNA polymerase IV or V, from the Y polymerase family), often with larger active sites that can facilitate the insertion of bases opposite damaged nucleotides. The polymerase switching is thought to be mediated by, among other factors, the post-translational modification of the replication processivity factor PCNA. Translesion synthesis polymerases often have low fidelity (high propensity to insert wrong bases) on undamaged templates relative to regular polymerases. However, many are extremely efficient at inserting correct bases opposite specific types of damage. For example, Pol η mediates error-free bypass of lesions induced by UV irradiation, whereas Pol ι introduces mutations at these sites. Pol η is known to add the first adenine across the T^T photodimer using Watson-Crick base pairing and the second adenine will be added in its syn conformation using Hoogsteen base pairing. From a cellular perspective, risking the introduction of point mutations during translesion synthesis may be preferable to resorting to more drastic mechanisms of DNA repair, which may cause gross chromosomal aberrations or cell death. In short, the process involves specialized polymerases either bypassing or repairing lesions at locations of stalled DNA replication. For example, human DNA polymerase eta can bypass complex DNA lesions like the guanine-thymine intra-strand crosslink G[8,5-Me]T, although it can cause targeted and semi-targeted mutations.[34] Paromita Raychaudhury and Ashis Basu[35] studied the toxicity and mutagenesis of the same lesion in Escherichia coli by replicating a G[8,5-Me]T-modified plasmid in E. coli with specific DNA polymerase knockouts. Viability was very low in a strain lacking pol II, pol IV, and pol V, the three SOS-inducible DNA polymerases, indicating that translesion synthesis is conducted primarily by these specialized DNA polymerases. A bypass platform is provided to these polymerases by proliferating cell nuclear antigen (PCNA). Under normal circumstances, PCNA bound to polymerases replicates the DNA. At a site of lesion, PCNA is ubiquitinated, or modified, by the RAD6/RAD18 proteins to provide a platform for the specialized polymerases to bypass the lesion and resume DNA replication.[36][37] After translesion synthesis, extension is required. This extension can be carried out by a replicative polymerase if the TLS is error-free, as in the case of Pol η, yet if TLS results in a mismatch, a specialized polymerase is needed to extend it: Pol ζ. Pol ζ is unique in that it can extend terminal mismatches, whereas more processive polymerases cannot. So when a lesion is encountered, the replication fork will stall, PCNA will switch from a processive polymerase to a TLS polymerase such as Pol ι to fix the lesion, then PCNA may switch to Pol ζ to extend the mismatch, and last PCNA will switch to the processive polymerase to continue replication.
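The switching sequence described above (a replicative polymerase stalls at a lesion, a translesion polymerase inserts a base that may be wrong, and faithful copying then resumes) can be caricatured in a few lines of Python. This is a toy illustration under invented assumptions: the lesion marker, the error rate, and the preference for inserting adenine are placeholders, not measured properties of any particular polymerase.

import random

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def replicate_with_tls(template, tls_error_rate=0.3):
    """Copy a template strand in which 'X' marks a lesion (toy model).

    Undamaged bases are copied faithfully by the replicative polymerase;
    at a lesion the fork "stalls" and a translesion polymerase inserts a
    base that may be wrong, which is the source of TLS point mutations.
    """
    new_strand = []
    for base in template:
        if base == "X":   # lesion: hand off to the translesion polymerase
            inserted = random.choice("ACGT") if random.random() < tls_error_rate else "A"
            new_strand.append(inserted)
        else:             # undamaged base: faithful Watson-Crick pairing
            new_strand.append(COMPLEMENT[base])
    return "".join(new_strand)

print(replicate_with_tls("ACGTXXGGA"))   # e.g. TGCAAACCT, occasionally mutated at the lesion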

Cells exposed to ionizing radiation, ultraviolet light or chemicals are prone to acquire multiple sites of bulky DNA lesions and double-strand breaks. Moreover, DNA damaging agents can damage other biomolecules such as proteins, carbohydrates, lipids, and RNA. The accumulation of damage, specifically double-strand breaks or adducts stalling the replication forks, is among the known stimulation signals for a global response to DNA damage.[38] The global response to damage is an act directed toward the cells' own preservation and triggers multiple pathways of macromolecular repair, lesion bypass, tolerance, or apoptosis. The common features of global response are induction of multiple genes, cell cycle arrest, and inhibition of cell division.

After DNA damage, cell cycle checkpoints are activated. Checkpoint activation pauses the cell cycle and gives the cell time to repair the damage before continuing to divide. DNA damage checkpoints occur at the G1/S and G2/M boundaries. An intra-S checkpoint also exists. Checkpoint activation is controlled by two master kinases, ATM and ATR. ATM responds to DNA double-strand breaks and disruptions in chromatin structure,[39] whereas ATR primarily responds to stalled replication forks. These kinases phosphorylate downstream targets in a signal transduction cascade, eventually leading to cell cycle arrest. A class of checkpoint mediator proteins including BRCA1, MDC1, and 53BP1 has also been identified.[40] These proteins seem to be required for transmitting the checkpoint activation signal to downstream proteins.

The DNA damage checkpoint is a signal transduction pathway that blocks cell cycle progression in G1, G2 and metaphase and slows down the rate of S phase progression when DNA is damaged. It leads to a pause in the cell cycle, allowing the cell time to repair the damage before continuing to divide.

Checkpoint proteins can be separated into four groups: phosphatidylinositol 3-kinase (PI3K)-like protein kinases, the proliferating cell nuclear antigen (PCNA)-like group, two serine/threonine (S/T) kinases, and their adaptors. Central to all DNA damage-induced checkpoint responses is a pair of large protein kinases belonging to the first group, the PI3K-like protein kinases: the ATM (ataxia telangiectasia mutated) and ATR (ataxia telangiectasia and Rad3-related) kinases, whose sequence and functions have been well conserved in evolution. The entire DNA damage response requires either ATM or ATR because they have the ability to bind to the chromosomes at the site of DNA damage, together with accessory proteins that are platforms on which DNA damage response components and DNA repair complexes can be assembled.

An important downstream target of ATM and ATR is p53, as it is required for inducing apoptosis following DNA damage.[41] The cyclin-dependent kinase inhibitor p21 is induced by both p53-dependent and p53-independent mechanisms and can arrest the cell cycle at the G1/S and G2/M checkpoints by deactivating cyclin/cyclin-dependent kinase complexes.[42]
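As a rough schematic of the signaling logic described in the last few paragraphs, the Python sketch below wires double-strand breaks to ATM, stalled forks to ATR, and either kinase to p53/p21-mediated inactivation of cyclin/CDK complexes and hence arrest. It is a boolean caricature for orientation only, not a quantitative model of checkpoint signaling.

def checkpoint_arrest(double_strand_breaks, stalled_forks):
    """Toy boolean wiring of the DNA damage checkpoint described above."""
    atm_active = double_strand_breaks        # ATM responds to DSBs and chromatin disruption
    atr_active = stalled_forks               # ATR responds mainly to stalled replication forks
    p53_active = atm_active or atr_active    # ATM/ATR phosphorylate downstream targets incl. p53
    p21_induced = p53_active                 # p21 induction (p53-dependent branch only, here)
    cyclin_cdk_active = not p21_induced      # p21 deactivates cyclin/CDK complexes
    return not cyclin_cdk_active             # inactive cyclin/CDK -> arrest at G1/S or G2/M

print(checkpoint_arrest(double_strand_breaks=True, stalled_forks=False))   # True: cycle pauses
print(checkpoint_arrest(double_strand_breaks=False, stalled_forks=False))  # False: cycle continues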

The SOS response is the changes in gene expression in Escherichia coli and other bacteria in response to extensive DNA damage. The prokaryotic SOS system is regulated by two key proteins: LexA and RecA. The LexA homodimer is a transcriptional repressor that binds to operator sequences commonly referred to as SOS boxes. In Escherichia coli it is known that LexA regulates transcription of approximately 48 genes including the lexA and recA genes.[43] The SOS response is known to be widespread in the Bacteria domain, but it is mostly absent in some bacterial phyla, like the Spirochetes.[44] The most common cellular signals activating the SOS response are regions of single-stranded DNA (ssDNA), arising from stalled replication forks or double-strand breaks, which are processed by DNA helicase to separate the two DNA strands.[38] In the initiation step, RecA protein binds to ssDNA in an ATP hydrolysis driven reaction creating RecA-ssDNA filaments. RecA-ssDNA filaments activate LexA autoprotease activity, which ultimately leads to cleavage of the LexA dimer and subsequent LexA degradation. The loss of LexA repressor induces transcription of the SOS genes and allows for further signal induction, inhibition of cell division and an increase in levels of proteins responsible for damage processing.

In Escherichia coli, SOS boxes are 20-nucleotide-long sequences near promoters with palindromic structure and a high degree of sequence conservation. In other classes and phyla, the sequence of SOS boxes varies considerably, with different length and composition, but it is always highly conserved and one of the strongest short signals in the genome.[44] The high information content of SOS boxes permits differential binding of LexA to different promoters and allows for timing of the SOS response. The lesion repair genes are induced at the beginning of the SOS response. The error-prone translesion polymerases, for example UmuCD'2 (also called DNA polymerase V), are induced later on as a last resort.[45] Once the DNA damage is repaired or bypassed using polymerases or through recombination, the amount of single-stranded DNA in cells is decreased; lowering the amount of RecA filaments decreases the cleavage activity of the LexA homodimer, which then binds to the SOS boxes near promoters and restores normal gene expression.
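The timing claim above, that lesion-repair genes come on early while the error-prone polymerases are induced only as a last resort, can be illustrated with a small Python sketch. The thresholds and LexA levels here are invented for illustration; only the ordering (weakly repressed promoters derepress first as active LexA falls) reflects the text.

# Toy illustration of SOS timing: genes whose SOS boxes bind LexA weakly are
# derepressed before tightly bound ones as the pool of active LexA declines.
sos_genes = {
    "uvrA (lesion repair)": 0.8,    # derepressed as soon as LexA dips below 0.8
    "recA": 0.6,
    "umuDC (polymerase V)": 0.1,    # tightly repressed: induced only as a last resort
}

for lexa_level in (1.0, 0.7, 0.4, 0.05):   # LexA declining as RecA-ssDNA filaments accumulate
    induced = [gene for gene, threshold in sos_genes.items() if lexa_level < threshold]
    print(f"LexA = {lexa_level:.2f}: induced -> {induced or 'none'}")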

Eukaryotic cells exposed to DNA damaging agents also activate important defensive pathways by inducing multiple proteins involved in DNA repair, cell cycle checkpoint control, protein trafficking and degradation. Such a genome-wide transcriptional response is very complex and tightly regulated, thus allowing a coordinated global response to damage. Exposure of the yeast Saccharomyces cerevisiae to DNA damaging agents results in overlapping but distinct transcriptional profiles. Similarities to the environmental shock response indicate that a general global stress response pathway exists at the level of transcriptional activation. In contrast, different human cell types respond to damage differently, indicating an absence of a common global response. The probable explanation for this difference between yeast and human cells may be in the heterogeneity of mammalian cells. In an animal, different types of cells are distributed among different organs that have evolved different sensitivities to DNA damage.[46]

In general, the global response to DNA damage involves expression of multiple genes responsible for postreplication repair, homologous recombination, nucleotide excision repair, the DNA damage checkpoint, global transcriptional activation, genes controlling mRNA decay, and many others. A large amount of damage to a cell leaves it with an important decision: undergo apoptosis and die, or survive at the cost of living with a modified genome. An increase in tolerance to damage can lead to an increased rate of survival that will allow a greater accumulation of mutations. Yeast Rev1 and human polymerase η are members of the Y family of translesion DNA polymerases present during the global response to DNA damage and are responsible for enhanced mutagenesis during a global response to DNA damage in eukaryotes.[38]

DNA repair rate is an important determinant of cell pathology

Experimental animals with genetic deficiencies in DNA repair often show decreased life span and increased cancer incidence.[13] For example, mice deficient in the dominant NHEJ pathway and in telomere maintenance mechanisms get lymphoma and infections more often, and, as a consequence, have shorter lifespans than wild-type mice.[47] In similar manner, mice deficient in a key repair and transcription protein that unwinds DNA helices have premature onset of aging-related diseases and consequent shortening of lifespan.[48] However, not every DNA repair deficiency creates exactly the predicted effects; mice deficient in the NER pathway exhibited shortened life span without correspondingly higher rates of mutation.[49]

If the rate of DNA damage exceeds the capacity of the cell to repair it, the accumulation of errors can overwhelm the cell and result in early senescence, apoptosis, or cancer. Inherited diseases associated with faulty DNA repair functioning result in premature aging,[13] increased sensitivity to carcinogens, and correspondingly increased cancer risk (see below). On the other hand, organisms with enhanced DNA repair systems, such as Deinococcus radiodurans, the most radiation-resistant known organism, exhibit remarkable resistance to the double-strand break-inducing effects of radioactivity, likely due to enhanced efficiency of DNA repair and especially NHEJ.[50]

Most life span influencing genes affect the rate of DNA damage

A number of individual genes have been identified as influencing variations in life span within a population of organisms. The effects of these genes are strongly dependent on the environment, in particular, on the organism’s diet. Caloric restriction reproducibly results in extended lifespan in a variety of organisms, likely via nutrient sensing pathways and decreased metabolic rate. The molecular mechanisms by which such restriction results in lengthened lifespan are as yet unclear (see [51] for some discussion); however, the behavior of many genes known to be involved in DNA repair is altered under conditions of caloric restriction.

For example, increasing the gene dosage of the gene SIR-2, which regulates DNA packaging in the nematode worm Caenorhabditis elegans, can significantly extend lifespan.[52] The mammalian homolog of SIR-2 is known to induce downstream DNA repair factors involved in NHEJ, an activity that is especially promoted under conditions of caloric restriction.[53] Caloric restriction has been closely linked to the rate of base excision repair in the nuclear DNA of rodents,[54] although similar effects have not been observed in mitochondrial DNA.[55]

The C. elegans gene AGE-1, an upstream effector of DNA repair pathways, confers dramatically extended life span under free-feeding conditions but leads to a decrease in reproductive fitness under conditions of caloric restriction.[56] This observation supports the pleiotropy theory of the biological origins of aging, which suggests that genes conferring a large survival advantage early in life will be selected for even if they carry a corresponding disadvantage late in life.

Defects in the NER mechanism are responsible for several genetic disorders, including xeroderma pigmentosum, Cockayne syndrome, and trichothiodystrophy.

Mental retardation often accompanies the latter two disorders, suggesting increased vulnerability of developmental neurons.

Other DNA repair disorders include Werner's syndrome, Bloom's syndrome, and ataxia telangiectasia.

All of the above diseases are often called “segmental progerias” (“accelerated aging diseases”) because their victims appear elderly and suffer from aging-related diseases at an abnormally young age, while not manifesting all the symptoms of old age.

Other diseases associated with reduced DNA repair function include Fanconi anemia, hereditary breast cancer and hereditary colon cancer.

Because of inherent limitations in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer.[57][58] There are at least 34 inherited human DNA repair gene mutations that increase cancer risk. Many of these mutations cause DNA repair to be less effective than normal. In particular, hereditary nonpolyposis colorectal cancer (HNPCC) is strongly associated with specific mutations in the DNA mismatch repair pathway. BRCA1 and BRCA2, two famous genes whose mutations confer a hugely increased risk of breast cancer on carriers, are both associated with a large number of DNA repair pathways, especially NHEJ and homologous recombination.

Cancer therapy procedures such as chemotherapy and radiotherapy work by overwhelming the capacity of the cell to repair DNA damage, resulting in cell death. Cells that are most rapidly dividing, most typically cancer cells, are preferentially affected. The side-effect is that other non-cancerous but rapidly dividing cells, such as progenitor cells in the gut, skin, and hematopoietic system, are also affected. Modern cancer treatments attempt to localize the DNA damage to cells and tissues only associated with cancer, either by physical means (concentrating the therapeutic agent in the region of the tumor) or by biochemical means (exploiting a feature unique to cancer cells in the body).

Classically, cancer has been viewed as a set of diseases that are driven by progressive genetic abnormalities that include mutations in tumour-suppressor genes and oncogenes, and chromosomal aberrations. However, it has become apparent that cancer is also driven by epigenetic alterations.[59]

Epigenetic alterations refer to functionally relevant modifications to the genome that do not involve a change in the nucleotide sequence. Examples of such modifications are changes in DNA methylation (hypermethylation and hypomethylation) and histone modification,[60] changes in chromosomal architecture (caused by inappropriate expression of proteins such as HMGA2 or HMGA1)[61] and changes caused by microRNAs. Each of these epigenetic alterations serves to regulate gene expression without altering the underlying DNA sequence. These changes usually remain through cell divisions, last for multiple cell generations, and can be considered to be epimutations (equivalent to mutations).

While large numbers of epigenetic alterations are found in cancers, the epigenetic alterations in DNA repair genes, causing reduced expression of DNA repair proteins, appear to be particularly important. Such alterations are thought to occur early in progression to cancer and to be a likely cause of the genetic instability characteristic of cancers.[62][63][64][65]

Reduced expression of DNA repair genes causes deficient DNA repair. When DNA repair is deficient, DNA damages remain in cells at a higher than usual level, and these excess damages cause increased frequencies of mutation or epimutation. Mutation rates increase substantially in cells defective in DNA mismatch repair[66][67] or in homologous recombinational repair (HRR).[68] Chromosomal rearrangements and aneuploidy also increase in HRR-defective cells.[69]

Higher levels of DNA damage not only cause increased mutation, but also cause increased epimutation. During repair of DNA double strand breaks, or repair of other DNA damages, incompletely cleared sites of repair can cause epigenetic gene silencing.[70][71]

Deficient expression of DNA repair proteins due to an inherited mutation can cause increased risk of cancer. Individuals with an inherited impairment in any of 34 DNA repair genes (see article DNA repair-deficiency disorder) have an increased risk of cancer, with some defects causing up to a 100% lifetime chance of cancer (e.g. p53 mutations).[72] However, such germline mutations (which cause highly penetrant cancer syndromes) are the cause of only about 1 percent of cancers.[73]

Deficiencies in DNA repair enzymes are occasionally caused by a newly arising somatic mutation in a DNA repair gene, but are much more frequently caused by epigenetic alterations that reduce or silence expression of DNA repair genes. For example, when 113 colorectal cancers were examined in sequence, only four had a missense mutation in the DNA repair gene MGMT, while the majority had reduced MGMT expression due to methylation of the MGMT promoter region (an epigenetic alteration).[74] Five different studies found that between 40% and 90% of colorectal cancers have reduced MGMT expression due to methylation of the MGMT promoter region.[75][76][77][78][79]

Similarly, out of 119 cases of mismatch repair-deficient colorectal cancers that lacked DNA repair gene PMS2 expression, PMS2 was deficient in 6 due to mutations in the PMS2 gene, while in 103 cases PMS2 expression was deficient because its pairing partner MLH1 was repressed due to promoter methylation (PMS2 protein is unstable in the absence of MLH1).[80] In the other 10 cases, loss of PMS2 expression was likely due to epigenetic overexpression of the microRNA, miR-155, which down-regulates MLH1.[81]

In further examples (tabulated in Table 4 of this reference[82]), epigenetic defects were found at frequencies of between 13% and 100% for the DNA repair genes BRCA1, WRN, FANCB, FANCF, MGMT, MLH1, MSH2, MSH4, ERCC1, XPF, NEIL1 and ATM. These epigenetic defects occurred in various cancers (e.g. breast, ovarian, colorectal and head and neck). Two or three deficiencies in the expression of ERCC1, XPF or PMS2 occur simultaneously in the majority of the 49 colon cancers evaluated by Facista et al.[83]

The chart in this section shows some frequent DNA damaging agents, examples of DNA lesions they cause, and the pathways that deal with these DNA damages. At least 169 enzymes are either directly employed in DNA repair or influence DNA repair processes.[84] Of these, 83 are directly employed in repairing the 5 types of DNA damages illustrated in the chart.

Some of the more well studied genes central to these repair processes are shown in the chart. The gene designations shown in red, gray or cyan indicate genes frequently epigenetically altered in various types of cancers. Wikipedia articles on each of the genes highlighted in red, gray or cyan describe the epigenetic alteration(s) and the cancer(s) in which these epimutations are found. Two review articles,[82][85] and two broad experimental survey articles[86][87] also document most of these epigenetic DNA repair deficiencies in cancers.

Red-highlighted genes are frequently reduced or silenced by epigenetic mechanisms in various cancers. When these genes have low or absent expression, DNA damages can accumulate. Replication errors past these damages (see translesion synthesis) can lead to increased mutations and, ultimately, cancer. Epigenetic repression of DNA repair genes in accurate DNA repair pathways appears to be central to carcinogenesis.

The two gray-highlighted genes, RAD51 and BRCA2, are required for homologous recombinational repair. They are sometimes epigenetically over-expressed and sometimes under-expressed in certain cancers. As indicated in the Wikipedia articles on RAD51 and BRCA2, such cancers ordinarily have epigenetic deficiencies in other DNA repair genes. These repair deficiencies would likely cause increased unrepaired DNA damages. The over-expression of RAD51 and BRCA2 seen in these cancers may reflect selective pressures for compensatory RAD51 or BRCA2 over-expression and increased homologous recombinational repair to at least partially deal with such excess DNA damages. In those cases where RAD51 or BRCA2 are under-expressed, this would itself lead to increased unrepaired DNA damages. Replication errors past these damages (see translesion synthesis) could cause increased mutations and cancer, so that under-expression of RAD51 or BRCA2 would be carcinogenic in itself.

Cyan-highlighted genes are in the microhomology-mediated end joining (MMEJ) pathway and are up-regulated in cancer. MMEJ is an additional error-prone, inaccurate repair pathway for double-strand breaks. In MMEJ repair of a double-strand break, a homology of 5-25 complementary base pairs between both paired strands is sufficient to align the strands, but mismatched ends (flaps) are usually present. MMEJ removes the extra nucleotides (flaps) where strands are joined, and then ligates the strands to create an intact DNA double helix. MMEJ almost always involves at least a small deletion, so that it is a mutagenic pathway.[88] FEN1, the flap endonuclease in MMEJ, is epigenetically increased by promoter hypomethylation and is over-expressed in the majority of cancers of the breast,[89] prostate,[90] stomach,[91][92] neuroblastomas,[93] pancreas,[94] and lung.[95] PARP1 is also over-expressed when its promoter region ETS site is epigenetically hypomethylated, and this contributes to progression to endometrial cancer,[96] BRCA-mutated ovarian cancer,[97] and BRCA-mutated serous ovarian cancer.[98] Other genes in the MMEJ pathway are also over-expressed in a number of cancers (see MMEJ for summary), and are also shown in cyan.

The basic processes of DNA repair are highly conserved among both prokaryotes and eukaryotes and even among bacteriophages (viruses that infect bacteria); however, more complex organisms with more complex genomes have correspondingly more complex repair mechanisms.[99] The ability of a large number of protein structural motifs to catalyze relevant chemical reactions has played a significant role in the elaboration of repair mechanisms during evolution. For an extremely detailed review of hypotheses relating to the evolution of DNA repair, see [100].

The fossil record indicates that single-cell life began to proliferate on the planet at some point during the Precambrian period, although exactly when recognizably modern life first emerged is unclear. Nucleic acids became the sole and universal means of encoding genetic information, requiring DNA repair mechanisms that in their basic form have been inherited by all extant life forms from their common ancestor. The emergence of Earth’s oxygen-rich atmosphere (known as the “oxygen catastrophe”) due to photosynthetic organisms, as well as the presence of potentially damaging free radicals in the cell due to oxidative phosphorylation, necessitated the evolution of DNA repair mechanisms that act specifically to counter the types of damage induced by oxidative stress.

On some occasions, DNA damage is not repaired, or is repaired by an error-prone mechanism that results in a change from the original sequence. When this occurs, mutations may propagate into the genomes of the cell’s progeny. Should such an event occur in a germ line cell that will eventually produce a gamete, the mutation has the potential to be passed on to the organism’s offspring. The rate of evolution in a particular species (or, in a particular gene) is a function of the rate of mutation. As a consequence, the rate and accuracy of DNA repair mechanisms have an influence over the process of evolutionary change.[101] Since the normal adaptation of populations of organisms to changing circumstances (for instance the adaptation of the beaks of a population of finches to the changing presence of hard seeds or insects) proceeds by gene regulation and the recombination and selection of gene variations (alleles), and not by passing on irreparable DNA damages to the offspring,[102] DNA damage protection and repair does not influence the rate of adaptation by gene regulation and by recombination and selection of alleles. On the other hand, DNA damage repair and protection does influence the rate of accumulation of irreparable, advantageous, code-expanding, inheritable mutations, and slows down the evolutionary mechanism for expansion of the genome of organisms with new functionalities. The tension between evolvability and mutation repair and protection needs further investigation.

A technology named clustered regularly interspaced short palindromic repeats, shortened to CRISPR-Cas9, was discovered in 2012. The new technology allows anyone with molecular biology training to alter the genes of any species with precision.[103]


Religion and Nihilism – The African Perspective Magazine

Posted: August 29, 2016 at 7:34 am

I was going through some of my school notes today and I came across the following lecture notes I'd taken from a class on religion and illusions when I was still a student. Hence, I figured I'd introduce you guys to this very interesting topic, as most of what we are taught regarding religion in the mainstream media is usually all but the same. Hope you enjoy it and find it interesting. Don't hesitate to leave your opinion at the end.

Nihilism as a philosophy seemed passé by the 1980s. Few talked about it in the literature except to declare it a dead issue. Literally, in the materialist sense, nihilism refers to a truism: from nothing, nothing comes. However, from a philosophical viewpoint, moral nihilism took on a similar connotation. One literally believed in nothing, which is somewhat of an oxymoron, since to believe in nothing is to believe in something. A corner was turned in the history of nihilism once 9/11 became a reality. After this major event, religious and social science scholars began to ask whether violence could be attributed to nihilistic thinking; in other words, whether we had lost our way morally by believing in nothing, by rejecting traditional moral foundations. It was feared that an "anything goes" mentality and a lack of absolute moral foundations could lead to further acts of violence, as the goals forwarded by life-affirmation were being thwarted by the destructive ends of so-called violent nihilists. This position is, however, debatable.

Extreme beliefs in values such as nationalism, patriotism, statism, secularism, or religion can also lead to violence, as one becomes unsettled by beliefs contrary to the reigning orthodoxy and strikes out violently to protect communal values. Therefore, believing in something can also lead to violence and suffering. To put the argument to rest, it's not about whether one believes in something or nothing but how absolutist the position is; it's the rigidity of values that causes pain and suffering, what Nobel prize winner Amartya Sen calls the illusion of singularity. Since 9/11, nihilism has become a favourite target to criticize and marginalize, yet its history and complexity actually lead to a more nuanced argument. Perhaps we should be looking at ways nihilism complements Western belief systems, even Christian doctrine, rather than fear its implementation in ethical and moral discussions.

Brief History of Nihilism

To understand why some forms of nihilism are still problematic, it is important to ask how it was used historically and for what motive. Nihilism was first thought synonymous with having no authentic values, no real ends, that one's whole existence is pure nothingness. In its earliest European roots, nihilism was initially used to label groups or ideas as inferior, especially if they were deemed threatening to established communal ideals. Nihilism as a label was its first function.

Nihilism initially functioned as a pejorative label and a term of abuse against modern trends that threatened to destroy either Christian hegemonic principles or tradition in general. During the seventeenth and eighteenth centuries, modernization in France meant that power shifted from the traditional feudal nobility to a central government filled with well-trained bourgeois professionals. Fearing a loss of influence, the nobility made a claim: if power shifted to responsible government, such centralization would lead to death and destruction; in other words, anarchy and nothingness. Those upsetting the status quo were deemed nihilistic, a derogatory label requiring no serious burden of proof. Such labelling, however, worked both ways. The old world of tradition was deemed valueless by advocates of modernization and change, who viewed the status quo as worthless; whereas traditionalists pictured a new world, or new life form, as destructive and meaningless in its pursuit of a flawed transformation. Potential changes in power or ideology created a climate of fear, so the importance of defining one's opponent as nihilistic, as nothing of value, was as politically astute as it was reactionary. Those embracing the function of nihilism as a label are attempting to avoid scrutiny of their own values while the values of the opposition are literally annihilated.

Since those advocating communal values may feel threatened by new ideologies, it becomes imperative for the dominant power to present its political, metaphysical, or religious beliefs as eternal, universal, and objective. Typically, traditionalists have a stake in their own normative positions. This is because [t]he absoluteness of [one's] form of life makes [one] feel safe and at home. This means that [perfectionists] have a great interest in the maintenance of their form of life and its absoluteness. The existence of alternative beliefs and values, as well as a demand for intersubjective dialogue, is both a challenge and a threat to the traditionalist because [i]t shows people that their own form of life is not as absolute as they thought it was, and this makes them feel uncertain. . . . However, if one labels the Other as nihilistic without ever entering into a dialogue, one may become myopic, dismissing the relative value of other life forms one chooses not to see. This means that one can't see what they [other cultural groups] are doing, and why they are doing it, why they may be successful . . . Therefore, one misses the dynamics of cultural change.

Through the effect of labelling, the religious-oriented could claim that nihilists, and thus atheists by affiliation, would not feel bound by moral norms, and as a result would lose the sense that life has meaning and therefore tend toward despair and suicide, especially in a world after the death of God. Christians argued that if there is no divine lawmaker, moral law would become interpretative, contested, and situational. The end result: [E]ach man will tend to become a law unto himself. If God does not exist to choose for the individual, the individual will assume the former prerogative of God and choose for himself. It was this kind of thinking that led perfectionists to assume that any challenge to the Absolute automatically meant moral indifference, moral relativism, and moral chaos. Put simply, nihilists were the enemy.

Nihilists were accused of rejecting ultimate values, embracing instead an "all values are equal" mentality; basically, anything goes. And like Islam today, nihilists would become easy scapegoats.

Late 19th to 20th Century: Nietzsche and the Death of God

Friedrich Nietzsche is still the most prestigious theorist of nihilism. Influenced by Christianity's dominant orthodoxy in the nineteenth century, Nietzsche believed that the Christian religion was nihilism incarnate. Since Christian theology involved a metaphysical reversal of temporal reality and a belief in God that came from nothing, the Christian God became the deification of nothingness, the will to nothingness pronounced holy. Nietzsche claimed that Christian metaphysics became an impediment to life-affirmation. Nietzsche explains: "If one shifts the centre of gravity of life out of life into the 'Beyond', into nothingness, one has deprived life of its centre of gravity . . . So to live that there is no longer any meaning in living: that now becomes the meaning of life." What Nietzsche rejected even more was the belief that one could create a totalizing system to explain all truths. In other words, he repudiated any religion or dogma that attempted to show how the entire body of knowledge [could] be derived from a small set of fundamental, self-evident propositions (i.e., stewardship). Nietzsche felt that we do not have the slightest right to posit a beyond or an it-self of things that is divine or the embodiment of morality.

Without God as a foundation for absolute values, all absolute values are deemed suspect (hence the birth of postmodernism). For Nietzsche, this literally meant that the belief in the Christian god ha[d] become unworthy of belief. This transition from the highest values to the death of God was not going to be a quick one; in fact, the comfort provided by an absolute divinity could potentially sustain its existence for millennia. Nietzsche elaborates: God is dead; but given the way of men, there may still be caves for thousands of years in which his shadow will be shown. And we, we still have to vanquish his shadow too.

We are left then with a dilemma: Either we abandon our reverences for the highest values and subsist, or we maintain our dependency on absolutes at the cost of our own non-absolutist reality. For Nietzsche, the second option was pure nothingness: So we can abolish either our reverences or ourselves. The latter constitutes nihilism. All one is left with are contested, situational value judgements, and these are resolved in the human arena.

One can still embrace pessimism, believing that without some form of an absolute, our existence in this world will take a turn for the worse. To avoid the trappings of pessimism and passivity, Nietzsche sought a solution to such nihilistic despair through the re-evaluation of the dominant, life-negating values. This makes Nietzsche's perspectivism a philosophy of resolution in the form of life-affirmation. It moves past despair toward a transformative stage in which new values are posited to replace the old table of values. As Reginster acknowledges, one should regard the affirmation of life as Nietzsche's defining philosophical achievement. What this implies is a substantive demand to live according to a constant re-evaluation of values. By taking full responsibility for this task, humankind engages in the eternal recurrence, a recurrence of life-affirming values based on acceptance of becoming and the impermanence of values. Value formation is both fluid and cyclical.

Late 20th to 21st Century: The Pessimism of the Post-9/11 Era

Since the events of September 11, 2001, nihilism has returned with a vengeance to scholarly literature; however, it is being discussed in almost exclusively negative terms. The labelling origin of nihilism has taken on new life in a context of suicide bombings, Islamophobia, and neoconservative rhetoric. For instance, Canadian Liberal leader Michael Ignatieff described different shades of negative nihilism (tragic, cynical, and fanatical) in his book The Lesser Evil. Tragic nihilism begins from a foundation of noble, political intentions, but eventually this ethic of restraint spirals toward violence as the only end (i.e., Vietnam). Two sides of an armed struggle may begin with high ideals and place limitations on their means to achieve viable political goals, but such noble ends eventually become lost in all the carnage. Agents of a democratic state may find themselves driven by the horror of terror to torture, to assassinate, to kill innocent civilians, all in the name of rights and democracy. As Ignatieff states, they slip from the lesser evil [legitimate use of force] to the greater [violence as an end in itself].

However, cynical nihilism is even more narcissistic. In this case, violence does not begin as a means to noble goals. Instead, [i]t is used, from the beginning, in the service of cynical or self-serving [ends]. The term denotes narcissistic prejudice because it justifies the commission of violence for the sake of personal aggrandizement, immortality, fame, or power rather than as a means to a genuinely political end, like revolution [for social justice] or the liberation of a people. Cynical nihilists were never threatened in any legitimate way. Their own vanity, ego, greed, or need to control others drove them to commit violence against innocent civilians (e.g., Saddam Hussein in Kuwait or Bush in Iraq).

Finally, fanatical nihilism does not suffer from a belief in nothing. In actuality, this type of nihilism is dangerous because one believes in too much. What fanatical nihilism does involve is a form of conviction so intense, a devotion so blind, that it becomes impossible to see that violence necessarily betrays the ends that conviction seeks to achieve. The fanatical use of ideology to justify atrocity negates any consideration of the human cost of such fundamentalism. As a result, nihilism becomes willed indifference to the human agents sacrificed on the altar of principle. . . . Here nihilism is not a belief in nothing at all; it is, rather, the belief that nothing about particular groups of human beings matters enough to require minimizing harm to them. Fanatical nihilism is also important to understand because many of the justifications are religious. States Ignatieff:

From a human rights standpoint, the claim that such inhumanity can be divinely inspired is a piece of nihilism, an inhuman devaluation of the respect owed to all persons, and moreover a piece of hubris, since, by definition, human beings have no access to divine intentions, whatever they may be.

Positive Nihilism

In the twenty-first century, humankind is searching for a philosophy to counter destructive, non-pragmatic forms of nihilism. As a middle path, positive nihilism accentuates life-affirmation through a widening of dialogue. Positively stated: [The Philosopher] . . ., having rejected the currently dominant values, must raise other values, by virtue of which life and the universe can not only be justified but also become endearing and valuable. Rejecting any unworkable table of values, humankind now erects another table with a new ranking of values and new ideals of humanity, society, and state. Positive nihilism, in both its rejection of absolute truths and its acceptance of contextual truths, is life-affirming, since small-t truths are the best mere mortals can hope to accomplish. Human beings can reach for higher truths; they just do not have the totalizing knowledge required for Absolute Truth. In other words, we are not God, but we are still attempting to be God on a good day. We still need values (in other words, we are not moral nihilists or absolutists), but we realize that the human condition is malleable. Values come and go, and we have to be ready to bend them in the right direction the moment moral courage requires it.

Nihilism does not have to be a dangerous or negative philosophy; it can be a philosophy of freedom. Basically, the entire purpose of positive nihilism is to transform values that no longer work and replace them with values that do. By aiding in a process that finds meaningful values through negotiation, positive nihilism prevents the exclusionary effect of perfectionism, the deceit of nihilistic labelling, as well as the senseless violence of fanatical nihilism. It is at this point that nihilism can enter its life-affirming stage and become a complement to pluralism, multiculturalism, and the roots of religion, those being love, charity, and compassion.

Source: Professor Stuart Chambers.



What Explains the Collapse of the USSR?

Posted: August 23, 2016 at 9:34 am

A Critical Analysis into the Different Approaches Explaining the Collapse of the Soviet Union: Was the Nature of the Regime's Collapse Ontological, Conjunctural or Decisional?

Abstract

This investigation seeks to explore the different approaches behind the demise of the Soviet Union. It will draw from Richard Sakwa's three approaches to the collapse of the Soviet Union, namely the ontological, decisional and conjunctural varieties. This dissertation will ultimately demonstrate the necessity of each of these if a complete understanding of the demise is to be acquired.

This dissertation will be split into three different areas of scrutiny with each analysing a different approach. The first chapter will question what elements of the collapse were ontological and will consist of delving into long-term socio-economic and political factors in order to grasp what structural flaws hindered the Soviet Union from its inception. Following this will be an analysis of the decisional approach, this time focusing on short-term factors and how the decisions of Gorbachev contributed to the fall. Finally, this investigation will examine the conjunctural approach, which will provide valuable insight as to how short-term political contingent factors played a leading role in the eventual ruin of the Soviet Union.

Introduction

On December 26th, 1991, the Soviet Union was officially dissolved into fifteen independent republics after six years of political-economic crises. This unanticipated collapse of a super-power that had once shaped the foreign policies of East and West took the international community off-guard. Since the collapse, scholars have attempted to provide insight into the reasons behind the demise of the Soviet state. In 1998 Richard Sakwa published Soviet Politics in Perspective, which categorised the three main approaches adopted by scholars in the study of the collapse of the Union of Soviet Socialist Republics (USSR). These were the ontological, decisional and conjunctural approaches and will be the foci of this investigation. Ultimately, my aim is to prove that none of these approaches can thoroughly explain the collapse when viewed individually.

Instead, I will advance that all three are vital in order to acquire a thorough understanding of the Soviet collapse. To prove this, I will be analysing how each approach covers different angles of the fall, but before being able to answer this question of validity, I must begin by arranging each scholar I scrutinize into Sakwa's three approaches. In my research I have discovered that the vast majority of scholars have no notion of such schools of thought, which increases the possibility of bias in secondary sources and makes my investigation all the more challenging. Once a solid theoretical basis is set, I will then move on to investigating the legitimacy of each approach when considering historical events.

Research Questions

To provide the basis for my hypothesis, my analysis will be subdivided into three research questions.

The first one will address what ontological traits existed in the collapse of the Soviet Union. Following this, the second question will mirror the first by attempting to make sense of the decisional aspects of the fall. Finally, my attention will turn to answering in what way the collapse was conjunctural in nature. Although the characteristics of these questions may seem basic, it is important not to fall prey to appearances and to bear in mind the complexity of each approach. Moreover, the arrangement and formulation of the research questions was carried out in this manner to provide an unbiased evaluation of each approach, eventually displaying the necessity of each in the explanation of the fall.

Methodology

The fall of the Soviet Union is a subject that has attracted vast amounts of literature from scholars all over the world. Although this presents a challenge when it comes to working through such a large topic, it also helps the researcher elaborate solid explanations behind historical events. Consequently, I will mainly be employing qualitative data, supplemented by quantitative evidence, which will consist of both primary and secondary sources. The quantitative information will draw from various economists such as Lane, Shaffer and Dyker; these will mainly be used to ensure that qualitative explanations are properly backed by statistical data regarding socio-economic factors.

The majority of the qualitative data will be drawn from secondary sources written by contemporary scholars. A few primary sources, such as official documents, will also be analysed to provide further depth to the analysis. Due to the vast amount of information concerning my topic, it is important to focus on literature aiding the question, as one can easily deviate from the question regarding the three approaches. The other main challenge will consist in avoiding being drawn into deep analysis of the separate independence movements of the Soviet republics.

Theoretical Framework

Before being able to embark on a complete literature review, it is important to understand the theoretical framework that accompanies the analysis, namely Sakwa's three approaches. Subsequently, I will be able to show that all three of these approaches are necessary in explaining the downfall of the Soviet Union.

When looking at the different approaches elaborated by Sakwa, each advances a unique hypothesis as to why the Soviet Union collapsed. Although all three approaches are different in nature, some overlap or inter-connect at times. To begin with, the ontological approach argues that the Soviet Union dissolved because of certain inherent shortcomings of the system [...] including [...] structural flaws.[1] This approach advances the premise that the collapse of the Soviet Union lies in long-term systemic factors that were present since the conception of the system. This view is countered by the conjunctural approach, which suggests

that the system did have an evolutionary potential that might have allowed it in time to adapt to changing economic and political circumstances. [...] The collapse of the system [is] ascribed to contingent factors, including the strength of internal party opposition [and] the alleged opportunism of the Russian leadership under Boris Yeltsin.[2]

The final approach theorised by Sakwa is the decisional one, and advances the belief that

particular decisions at particular times precipitated the collapse, but that these political choices were made in the context of a system that could only be made viable through transformation of social, economic and political relations. This transformation could have been a long-term gradual process, but required a genuine understanding of the needs of the country.[3]

Although the decisional and conjunctural approaches are different in scope, they nevertheless both focus on the short-term factors of collapse, which at times may cause confusion. As both approaches analyse the same time frame, certain factors behind the collapse may logically be attributed to both. A relevant example may be seen when a contingent factor (factions within the Communist Party) affects the decisions of a leader (Gorbachev). This leads to ambiguities, as it is impossible to know whether certain outcomes should be explained in a conjunctural or decisional light. This type of ambiguity can also cast doubt on certain conjunctural phenomena with historical antecedents. In these cases it becomes unclear whether these phenomena are ontological (structural), as they have existed since the system's conception, or conjunctural, as they present contingent obstacles to progress.

In most cases, when ambiguities arise, scholars may adopt a rhetoric that is inherently ontological, decisional or conjunctural and then base most of their judgements and analysis around it. Kalashnikov supplements this, stating that studies tend to opt for one factor as being most important in bringing about collapse [...] [and] do not engage other standpoints.[4] This is a trait I have noticed in certain works written by scholars more inclined to analyse events through a certain approach, such as Kotkin with the ontological approach, Goldman with the decisional one, or Steele regarding the conjunctural approach. In my analysis, I will scrutinise the fall through the theoretical lens of each approach, and from this will prove the indispensability of each in the explanation of the downfall. The fact that certain approaches overlap is testament to the necessity of this theoretical categorisation.

Literature Review

The first approach to be investigated will be the ontological one: a school of thought espoused by scholars who focus on systemic long-term factors of collapse. Kotkin is one such author, providing valuable insight into the ontological dissolution of Soviet ideology and society, which will figure as the first element of analysis in that chapter. He advances the theory that the Soviet Union was condemned from an early age due to its ideological duty of providing a better alternative to capitalism. From its inception, the Soviet Union had claimed to be an experiment in socialism [...]. If socialism was not superior to capitalism, its existence could not be justified.[5] Kotkin elaborates that ideological credibility crumbled from the beginning as the USSR failed to fulfil expectations during Stalin's post-war leadership. Kotkin goes on to couple ideological deterioration with an emphasis on the societal non-reforming tendency that flourished after the 1921 ban on factions, setting a precedent where reform was ironically seen as a form of anti-revolutionary dissidence.

Kenez and Sakwa also supplement the above argument with insight on the suppression of critical political thinking, notably in Soviet satellite states, showing that any possibility of reforming towards a more viable Communist rhetoric was stifled early on and continuously suppressed throughout the 1950s and 60s. This characteristic of non-reform can be seen as an ontological centre-point, as after the brutal repression seen in Hungary (1956) and Czechoslovakia (1968), no feedback mechanism existed wherein the leadership could comprehend the social, political and economic problems that were gradually amassing. The invasion of 1968 represented the destruction of the sources of renewal within the Soviet system itself.[6] Consequently, this left the Kremlin in a state of relative ignorance vis-à-vis the reality of life in the Soviet Union. Adding to the explanation of the Soviet Union's ontological demise, Sakwa links the tendency of non-reform to the overlapping of party and polity that occurred in the leadership structure of the USSR. The CPSU was in effect a parallel administration, shadowing the official departments of state: a party-state emerged, undermining the functional adaptability of both.[7] Sakwa then develops that this led to the mis-modernisation of the command structure of the country and, coupled with non-reform, contributed to its demise. Furthermore, ontologically tending scholars also view the republican independence movements of the USSR as a factor destined to occur since the conception of the union.

The second section concerning the ontological approach analyses the economic factors of collapse. Here, Derbyshire, Kotkin and Remnick provide a quantitative and qualitative explanation of the failure of centralisation in the agricultural and industrial sectors. Derbyshire and Remnick also provide conclusive insight into ontological reasons for the failure of industrial and agricultural collectivization, which played a leading role in the overall demise of the Soviet Union.

Finally, in my third area of investigation, Remnick and Sakwa claim that the dissolution came about due to widespread discontent in individual republics regarding the exploitation of their natural resources, as well as Stalin's detrimental policy of pitting different republics against each other.

Moscow had turned all of Central Asia into a vast cotton plantation [...] [and in] the Baltic States, the official discovery of the secret protocols to the Nazi-Soviet pact was the key moment.[8]

Although I will explore how independence movements played a role in the dissolution, I will ensure the focus remains on the USSR as a whole, as it is easy to digress due to the sheer amount of information on independence movements. Beyond this, although evidence proves that certain factors of collapse were long-term ontological ones, other scholars, namely Goldman and Galeotti, go in another direction and stress that the key to understanding the downfall of the USSR lies in the analysis of short-term factors, such as those of the decisional approach.

Dissimilar to the ontological approach, within the decisional realm scholars more frequently ascribe the factors of the collapse to certain events or movements, which allows them minute precision in their explanations of the fall. Goldman is a full-fledged decisional scholar with the conviction that Gorbachev orchestrated the collapse through his lack of a comprehensive approach,[9] a view espousing Sakwa's definition of the decisional approach. In order to allow for a comprehensive analysis, this chapter will start off with an examination of Gorbachev's economic reforms in chronological order, allowing the reader to be guided through the decisions that affected the collapse. Goldman will be the main literary pillar of this section, supplemented by Sakwa and Galeotti. Having accomplished this, it will be possible to investigate how economic failure, inter-linked with political decisions (Glasnost and Perestroika) outside of the Party, created an aura of social turmoil. Here, Galeotti and Goldman will look into the events and, more importantly, the decisions that discredited Gorbachev's rule and created disillusion in Soviet society. The final section of the chapter will scrutinize the effects of Glasnost and Perestroika within the Communist Party, which stands as a primordial step in light of the independence movements, seen as a by-product of Gorbachev's policies. Due to the inter-linked nature of the political, social and economic spheres, it will be possible to see how policy sectors affected each other in the collapse of the Soviet Union.

Overall, this chapter will end with an analysis of how Gorbachev's incoherence pushed certain republics onto the path of independence, which Goldman perceives as a major factor behind the fall.

In the chapter regarding the conjunctural approach, I will be looking into the key contingent factors that scholars believe are behind the fall of the Soviet Union. The first will be the conservatives of the Communist Party, who had obstructed the reform process since Brezhnev's rule, meaning that up until the collapse, reform efforts had run headlong into the opposition of entrenched bureaucratic interests who resisted any threat to their power.[10] Due to the broadness of this topic I will draw from two scholars, namely Kelley and Remnick, for supplementary insight. Moving on, I will also investigate the inception of the reformist left, a term encapsulating those within and outside the party striving to bring democratic reform to the USSR. Here the main conjunctural scholar used will be Steele, who explains that Gorbachev's hopes for this reformist left to support him against the Communist conservatives evaporated once Yeltsin took the lead and crossed the boundaries of socialist pluralism set by Gorbachev, a concept coined by the leader himself which implied that there should be a wide exchange of views and organizations, provided they all accepted socialism.[11] This brought about enormous pressure and sapped social support from Gorbachev at a time when he needed political backing. Once the political scene is evaluated through conjunctural evidence, I will divide my chapter chronologically, first exploring the 1989 radicalisation of the political movements with the significant arrival of Yeltsin as the major obstacle to Gorbachev's reforms to the left. In this section I will mainly be citing Remnick due to his detailed accounts of events. Ultimately I will attempt to vary my analysis between approach-specific scholars and more neutral ones who provide thorough accounts, such as Remnick and Sakwa. The analysis will continue with insight into the 1990-1991 period of political turmoil and the effects it had on Gorbachev's reforms; here I will be citing Galeotti, Remnick and Tedstrom, as these provide varying viewpoints regarding the political changes of the time. My chapter will then end with a scrutiny of Yeltsin's Democratic Russia and the August 1991 Coup, and how both of these independent action groups operated as mutual contingent factors in the dissolution of the Soviet Union.

Chapter One: Was the Collapse of the USSR Ontological in Nature?

When analysing the collapse of the USSR, it is undeniable that vital ontological problems took form during the early days of its foundation. Here I will analyse these flaws and demonstrate how the collapse occurred due to ontological reasons, hence proving the necessity of this approach. In order to provide a concrete answer I will begin by scrutinizing how the erosion of the Communist ideology acted as a systemic flaw that put the Soviet Union's legitimacy into question. I will then analyse how a non-reformist tendency was created in society and also acted as an ontological flaw that would play a part in the fall. From there I will explore how ontological defects plagued the economic sector in the industrial and agricultural areas, leading the country to the brink of economic collapse. Finally I will analyse the independence movements, as certain scholars, especially Remnick and Kotkin, argue that these movements pushed towards ontological dissolution. It is imperative to recall that this chapter will analyse symptoms of the collapse that are of an ontological nature, namely long-term issues that weighed negatively on the longevity of the Soviet Union. As a result it is vital to bear in mind that the ontological factors to be analysed are usually seen as having progressively converged over the decades, provoking the cataclysmic collapse.

The Untimely Death of an Ideology

Since its early days, the Soviet Union was a political-economic experiment built to prove that the Communist-Socialist ideology could rival and even overtake Capitalism. It promoted itself as a superior model, and thus was condemned to surpass capitalism if it did not want to lose its legitimacy. However, during Stalin's tenure, the ideological legitimacy of the Soviet Union crumbled for two reasons: the first being the aforementioned premier's rule and the other being Capitalism's success, both of which ultimately played a part in its demise.

The early leaders of the Communist Party of the Soviet Union (CPSU) such as Lenin, Trotsky, Kamenev, Bukharin, Zinoviev and Stalin all had different views regarding how to attain socio-economic prosperity, but Stalin would silence these after the 1921 to 1924 power struggle. Following this period, which saw the death of Lenin, Stalin emerged as the supreme leader of the Soviet Union. With the exile of Trotsky and the isolation of Zinoviev, Kamenev and Bukharin from the party, no effective opposition was left to obstruct the arrival of Stalin's fledgling dictatorship. Subsequently, Stalin was able to go about effectively appropriating the Communist ideology for himself; with his personality cult he became the sole curator of what was Communist or reactionary (anti-Communist). To protect his hold on power, he then turned the Soviet Union away from Marxist Communist internationalism by introducing his doctrine of Socialism in One Country after Lenin's death in 1924.

Insisting that Soviet Russia could [...] begin the building of socialism [...] by its own efforts. [...] [Thus treading on] Marx's view that socialism was an international socialist movement or nothing.[12]

As a result, the USSR under Stalin alienated the possibilities of ideological renewal with other Communist states and even went as far as to claim that the interests of the Soviet Union were the interests of socialism.[13] Sakwa sees these actions as ones that locked the Soviet Union into a Stalinist mind-set early on and thus built the wrong ideological mechanisms, halting the advent of Communist ideology according to Marx. As a result, it is fair to acknowledge that one ontological reason for collapse was that the Soviet Union was built upon an ambiguous ideological platform wherein it espoused elements of Communism but was severely tainted and handicapped by Stalinist rhetoric.

In addition to the debilitating effects Stalin's political manipulations had on the ideological foundations of the USSR, capitalism's successful reform dealt a supplementary blow to Soviet ideological credibility.

Instead of a final economic crisis anticipated by Stalin and others, Capitalism experienced an unprecedented boom [...] all leading capitalist countries embraced the welfare state [...] stabilising their social orders and challenging Socialism on its own turf.[14]

Adding to the changing nature of capitalism was the onset of de-colonisation during the 1960s, which took away more legitimacy with every new independence agreement. By the end of the 1960s, the metamorphosis of capitalism had very much undermined the Soviet Union's ideological raison d'être, as the differences between capitalism in the Great Depression [which the USSR had moulded itself against] and capitalism in the post-war world were nothing short of earth-shattering.[15] Here the ontological approach generally elaborates that Capitalism's transformation and the USSR's incoherent ideological foundations disproved the very political foundations the Soviet state rested upon; thus any social unrest leading to the collapse during Gorbachev's rule can be interpreted as a logical by-product of this. From this, it is possible to better understand how the crumbling of the legitimacy of the Communist ideology was a fundamental ontological factor behind the collapse of the USSR. Building on this, I will now look into how the establishment of society during Stalin's rule also played a role in the collapse through the shaping of a non-reforming society.

The Foundations of a Non-Reforming Society

One defect that would remain etched in the Soviet political-economic mind-set was the ontological tendency for non-reform. This trait would plague the very infrastructure of the Soviet Union until its dying days. Such a debilitating characteristic emerged during the very inception of the Soviet Union with the Kronstadt Sailors' Uprising. This uprising occurred during the Tenth Party Congress in 1921 and would have severe repercussions for the Soviet Union's future, as Congress delegates [...] accepted a resolution that outlawed factions within the Party.[16] By stifling critical thinking and opposing views, this effectively cancelled out a major source of reform and acted as an ontological shortcoming for future Soviet political-economic progress. This non-reformist trait was reinforced during Stalin's rule with the constant pressure the Communist Party exerted on agricultural and industrial planners. Here, the party demanded not careful planning [...] but enthusiasm; the leaders considered it treason when economists pointed out irrationalities in their plans.[17] Subsequently, planners were forced into a habit of drawing up unmanageable targets that remained within the party's political dictate. This meant that central planners established planning targets that could only be achieved at enormous human cost and sacrifice [...] [and lacked an] effective feedback mechanism[18] that would provide insight into the flaws that existed in their plans. In the short run this would only hinder the economy, but in the long term it would lock the Soviet Union onto a trajectory where it could not reform itself in accordance with existing problems,[19] leaving it practically technologically obsolete, with a backwards economy, by the time it collapsed.

Nevertheless, repression of critical thinking did not limit itself to the economic realm; it also occurred in the social sector, where calls for the reform of the Socialist ideology were mercilessly crushed in Hungary in 1956 and in Czechoslovakia in 1968. A link can be seen here with the previous section of this chapter with regard to Stalin's hijacking of the Communist ideology. Both of the social movements cited pushed for a shift away from Stalinist rhetoric towards an actual adoption of Marxist Socialism. In Czechoslovakia this social push came under the name of Socialism with a Human Face and wanted to permit the dynamic development of socialist social relations, combine broad democracy with a scientific, highly qualified management, [and] strengthen the social order.[20] Although these were only Soviet satellite states, the fact that they were repressed showed that by the 1960s the Soviet Union's non-reforming characteristic had consolidated itself to the point that any divergence from the official party line in the economic or social sectors was seen as high treason. This leads us to the ambiguous area of Soviet polity and how it jeopardised the existence of the USSR when merged with ontological non-reform.

Polity is the term I use here because it remains remarkably unclear who essentially governed the USSR during its sixty-nine years of existence. It seems that both the CPSU and the Soviet government occupied the same position of authority, thus creating

a permanent crisis of governance. [Wherein] the party itself was never designed as an instrument of government and the formulation that the party rules but the government governs allowed endless overlapping jurisdictions.[21]

Adding to the confusion was the CPSU's role in society, defined by Article Six of the USSR's 1977 Constitution: The leading and guiding force of the Soviet society and the nucleus of its political system, of all state organisations and public organisations, is the Communist Party of the Soviet Union.[22] From here a profound ambiguity can be seen surrounding the role of politics in the social realm. Accordingly, these two traits would create a profound ontological factor for collapse when merged with the non-reforming tendency of society, because when a more efficient leadership mechanism was sought, it was impossible to identify which elements of the polity had to be changed and how.

It is here that an inter-linkage of approaches can be identified, as the polity's ontological inability to reform according to Gorbachev's decisional re-shaping of society contributed to the demise of the USSR.

The one-party regime ultimately fell owing to its inability to respond to immense social changes that had taken place in Soviet society, ironically, social changes that the Party itself had set in motion.[23]

Because the Soviet polity was ontologically ill-defined, when the time came to reform it, the notion of what was to be changed obstructed the reform process. From this analysis, it is possible to see how ontological weaknesses in the overlapping areas of politics and the social sector seriously hindered the Soviet Union. In the following section I will explore how ontological defects were of similar importance in the economic realm and were also interwoven with the previously explained shortcomings.

An Economy in Perpetual Crisis

When looking at the economic realm, there are a number of weaknesses that took root in the early days of the Soviet Union; the first aspect of scrutiny will be the ontological failure of economic centralisation and its contribution to the fall. In both the agricultural and industrial sectors, the USSR was unable to progress towards economic prosperity due to its flawed centralised economy. Agriculturally, centralisation meant that peasants were compelled to fulfil farming quotas set by the ministry in Moscow on land that belonged solely to the state. Consequently this generated two problems: first, a lack of incentive on the part of the farmers, and second, the inability of the central authorities to cope with the myriad of different orders that had to be issued.

Central planners in Moscow seldom know in advance what needs to be done in the different regions of the country. Because of this [...] sometimes as much as 40 to 50 per cent of some crops rot in the field or in the distribution process.[24]

Worsening this was the party's non-reforming tendency, which meant that the Soviet Union protected its misconceived collective and state farming network and made up for its agricultural ineptness by importing up to 20 per cent of the grain it needed.[25] This patching-up of ontological agricultural problems resulted in an unpredictable and inconsistent agricultural sector as the decades passed, rendering it unreliable. This can be seen in the post-war agricultural growth rates, which fluctuated continuously from 13.8 per cent in 1955 to -1.5 per cent in 1959 and finally -12.8 per cent in 1963![26] Such a notoriously unpredictable agricultural sector [...] consistently failed to meet planned targets[27] and would remain an unresolved problem until the fall of the regime.

As for the industrial sector, the situation was difficult: with the disappearance of a demand and supply mechanism, the central authorities were unable to properly satisfy the material demands of society. Moreover, because of centralisation, most factories were the sole manufacturers of certain products in the whole of the USSR, meaning that an enormous amount of time and money was wasted on transport and logistics costs. Without a demand and supply mechanism, the whole economy had to be planned by the central authorities, which proved to be excruciatingly difficult.

Prices of inputs and outputs, the sources of supply, and markets for sale were strictly stipulated by the central ministries [...] [and] detailed regulation of factory-level activities by remote ministries [...] led to a dangerously narrow view of priorities at factory level.[28]

Consequently, central ministries frequently misallocated resources, and factories took advantage of this by hoarding larger quantities of raw materials than they needed. Although the ontological failure of centralisation did not have effects as immediate as those of certain short-term conjunctural or decisional factors, its contribution to the fall can be seen in how, combined with the economic shortcomings highlighted hereafter, it gradually deteriorated the economy of the country.

In addition to the failure of centralisation was the failure of agricultural collectivization, which would have an even greater negative effect on the Soviet Union. When looking at collectivization we can see how its effects were multi-layered, as it was a politically motivated campaign that would socially harm society and destroy the economy. Agriculturally, Stalin hindered the Soviet farming complex from its very beginnings by forcing collectivisation on farmers and publicly antagonising those who resisted as anti-revolutionary kulaks. After the winter of 1929, Stalin defined the meaning of kulak as anyone refusing to enter collectives. Kulaks were subsequently persecuted and sent to Siberian gulags; the attack on the kulaks was an essential element in coercing the peasants to give up their farms.[29] These repeated attacks stemmed from a Bolshevik perception that peasants were regarded with suspicion as prone to petty-bourgeois individualist leanings.[30] Due to these traumatic acts of violence, the peasantry was entirely driven into collectivisation by 1937; however, this only bolstered peasant hatred of the government and can be seen as the basis for the agricultural problem of rural depopulation that gradually encroached on the countryside. By the 1980s,

The legacy of collectivization was everywhere in the Soviet Union. In the Vologda region alone, there were more than seven thousand ruined villages [...] For decades, the young had been abandoning the wasted villages in droves.[31]

This agricultural depopulation can be seen in how the number of collective farms gradually shrank from 235,500 in 1940 to merely 25,900 in 1981,[32] causing severe labour scarcity for the agricultural sector.

Industrially, collectivisation was not widespread, although in the few cases where it appeared, it brought about much suffering in order to yield positive results. The mining city of Magnitogorsk is a prime example, where Stalinist planners

built an autonomous company town [...] that pushed away every cultural, economic, and political development in the civilized world [and where] 90 per cent of the children [...] suffered from pollution-related illnesses.[33]

While the West followed the spectacular expansion of Soviet industry from 1920 to 1975, this came at the cost of immense social sacrifice in the industrial and agricultural sectors, which were entirely geared towards aiding the industrial complex. In addition, much of Soviet industrial growth after Khrushchev's rule was fuelled by oil profits emanating from Siberia, peaking from 1973 to 1985 when energy exports accounted for 80% of the USSR's expanding hard currency earnings.[34]

Overall, ontological non-reform, inter-linked with the failure of collectivisation and a deficient command structure, would gradually weaken the economy to the brink of collapse in the 1980s. This was made clear in the 1983 Novosibirsk Report, which

argued that the system of management created for the old-style command economy of fifty years ago remained in operation in very different circumstances. It now held back the further development of the country's economy.[35]

Nevertheless, ontological problems behind the fall did not restrict themselves only to the economic, political or social realms; they also existed with regard to the nationalities question.

A Defective Union

When looking at the fifteen different republics that comprised the USSR, one may ask how it was possible to unite such diverse nationalities without the emergence of complications. The truth is that many problems arose from this union, even though the CPSU maintained until the very end the conviction that all republics and peoples were acquiescent to it. Gorbachev's statement in 1987 that

the nationalities issue has been resolved for our country [...] reflected the party's most suicidal illusion, that it had truly created [...] a multinational state in which dozens of nationalisms had been dissolved.[36]

Today certain scholars see the independence movements of the early 1990s as a result of the ontological malformation of the Soviet Union's identity. The most common argument expounds that the independence movements fuelling dissolution occurred for two ontological reasons. The first can be seen as a consequence of Stalin's rule and his policy of divide and rule, where the borders between ethno-federal units were often demarcated precisely to cause maximum aggravation between peoples.[37] This contributed to the Soviet Union's inability to construct a worthwhile federal polity and an actual Soviet nation-state. In addition to this was the ontological exploitation of central Soviet republics and the prioritisation of the Russian state. This created long-term republican discontent that laid the foundations of independence movements: Everything that went wrong with the Soviet system over the decades was magnified in Central Asia;[38] Moscow had turned all of Central Asia into a vast cotton plantation [...] destroying the Aral Sea and nearly every other area of the economy.[39]

Overall, it is possible to argue that the collapse occurred due to inherent flaws in the foundations of the Soviet Union. Ontological factors behind the collapse were an admixture of socio-political and economic weaknesses that gradually wore at the foundations of the USSR. The first area analysed was the demise of the Marxist ideology that upheld the legitimacy of the Soviet Union. I then scrutinized the non-reforming tendency that settled into Soviet society very early on. This eventually brought me to inspect the ontological flaws in the Soviet economy, which had close links with the previous section. Finally, I examined inherent flaws in the USSR's union and how these also played a role in the demise. While the ontological factors represent a substantial part of the explanation of the downfall, decisional and conjunctural factors must also be examined to fully grasp the collapse.

Chapter Two: Was the Collapse of the USSR Decisional in Nature?

Whilst long-term flaws in the foundations of the Soviet Union played a major role in its demise, it is important to acknowledge that most of Gorbachev's reforms also had drastic effects on the survival of the union. From here on, I will explore how the decisional approach explains vital short-term factors behind the collapse and cannot be forgone when pondering this dissertation's thesis question. To begin with, I will analyse the failure of Gorbachev's two major economic initiatives, known as Uskoreniye (acceleration of economic reforms) and Perestroika. This will inevitably lead me to the scrutiny of his socio-political reforms under Glasnost and how imprudent decisions in this sector led to widespread unrest in the USSR. Finally I will look into how Gorbachev's decisional errors led most republics to opt out of the Soviet Union. Before I start, it is important to understand that although I will be separating the economic reforms (Uskoreniye and Perestroika) from the socio-political ones (Glasnost), these were very much intertwined, as Gorbachev saw them as mutually complementary.

A Botched Uskoreniye and an Ineffective Perestroika

By the time Gorbachev rose to power in March 1985, ontological economic problems had ballooned to disproportionate levels. His initial approach to change was different from his predecessor's: he took advice from field experts and immediately set into motion economic Uskoreniye (acceleration). At this point, economic reform was indispensable, as the collective agricultural sector lay in ruins with a lethargic 1.1 per cent output growth between 1981 and 1985, whilst industrial output growth fell from 8.5 per cent in 1966 to 3.7 per cent in 1985.[40] Although Gorbachev could not permit himself mistakes, it is with Uskoreniye that the first decisional errors regarding the economy were committed, and they cost him much of his credibility. Under Abel Aganbegyan's advice, Gorbachev diverted Soviet funds to retool and refurbish the machinery industry, which it was believed would accelerate scientific and technological progress. He supplemented this effort by reinforcing the centralisation of the Soviet economy through the creation of super-ministries, so that planners could eliminate intermediate bureaucracies and concentrate on overall strategic planning.[41] Whereas these reforms did have some positive impacts, they were not far-reaching enough to bring profound positive change to Soviet industrial production. Moreover, in the agricultural sector, Gorbachev initiated a crackdown on owners of private property in 1986, which led farmers to fear the government and would disturb the success of future agricultural reforms. His error with Uskoreniye lay in the fact that he had aroused the population with his call for a complete overhaul of Soviet society, but in the economic realm at least, complete overhaul turned out for the most part to be not much more than a minor lubrication job.[42] Realising his mistake, Gorbachev came to believe that it was the economic system itself he had to change, and he set out to do just that with his move towards Perestroika (restructuring).

Gorbachev had at first tried simply to use the old machinery of government to reform. [...] the main reason why this failed was that the old machinery [...] were a very large part of the problem.[43]

Although the term Perestroika did exist prior to Gorbachev's tenure in office, it was he who remoulded it into a reform process that would attempt to totally restructure the archaic economic system. Unlike the first batch of economic reforms [...] the second set seemed to reflect a turning away from the Stalinist economic system,[44] a move that startled the agricultural sector, which had been subjected to repression the prior year. In 1987, Gorbachev legalised individual farming and the leasing of state land to farmers in an effort to enhance agronomic production. However, this reform was flawed due to the half-hearted nature of the endeavour, wherein farmers were allowed to lease land but it would remain state-owned. Therefore, due to Gorbachev's reluctance to fully privatise land, many prospective free farmers could see little point in developing farms that the state could snatch back at any time.[45] Adding to this social setback was a purely economic problem, since

without a large number of participants the private [...] movements could never attain credibility. A large number of new sellers would produce a competitive environment that could hold prices down.[46]

Thus, due to Gorbachev's abrupt and contradictory shift from agricultural repression to reluctant land leasing, his second agrarian reform failed.

Industrially, Gorbachev went even further in his decisional miscalculations: without reversing his earlier move towards the ultra-centralisation of the super-ministries, he embarked on a paradoxical semi-privatisation of markets. Gorbachev's 1987 Enterprise Law illustrates this, as he attempted to transfer decision-making power from the centre to the enterprises themselves[47] through the election of factory managers by workers, who would then decide what to produce and work autonomously. Adding to this, the 1988 Law on Cooperatives, which legalized a wide range of small businesses,[48] supplemented this move towards de-centralisation. Combined, it was anticipated that these reforms

would have introduced more motivation and market responsiveness [...] in practice, it did nothing of the sort [...] workers not surprisingly elected managers who offered an easy life and large bonuses.[49]

Moreover, the Enterprise Law contributed to the magnitude of the macro and monetary problems [...] [as] managers invariably opted to increase the share of expensive goods they produced,[50] which led to shortages of cheaper goods. Whilst the law had the reverse of its intended effects on workers, the blame lies with Gorbachev, as no effort was put into the creation of a viable market infrastructure.

Without private banks from which to acquire investment capital, without a free market, [...] without profit motive and the threat of closure or sacking, managers rarely had the incentive [...] to change their ways.[51]

By going only halfway in his efforts to create a market-oriented economy, Gorbachev destroyed his chances of success. The existing command-administrative economic system was weakened enough to be even less efficient, but not enough that market economics could begin to operate;[52] in effect, he had placed the economy in a nonsensical twilight zone. Consequently, the economy was plunged into a supply-side depression by 1991, since the availability of private and cooperative shops, which could charge higher prices, served to suck goods out of the state shops, which in turn caused labor unrest[53] and steady inflation. Here, Gorbachev began to feel the negative effects of his reforms, as mass disillusionment in his capability to lead the economy towards a superior model, coupled with his emphasis on the abolition of repression and greater social freedom (Glasnost), tipped the USSR into a state of profound crisis.

The Success of Glasnost

Having understood Gorbachev's economic decisional errors with Perestroika, I will now set out to demonstrate how his simultaneous introduction of Glasnost in the social sector proved to be a fatal blow for the Soviet Union. Originally, Gorbachev set out to promote democratisation in 1987 as a complementary reform that would aid his economic ones; he saw Glasnost as a way to create a nation of whistle-blowers who would work with him[54] against corruption. To the surprise of the Soviet population, Gorbachev even encouraged socio-economic debates and allowed the formation of Neformaly, which were leisure organizations [and] up to a quarter were either lobby groups or were involved in issues [...] which gave them an implicitly political function.[55] Gorbachev initiated this move at a time when the USSR was still searching for the correct reform process. Thus, the Neformaly movement was a way for him to strengthen the reform process without weakening the party by including the involvement of the public. But as Perestroika led to continuous setbacks, Gorbachev began to opt for more drastic measures with Glasnost, upholding his belief that the key lay in further democratisation. In November 1987, on the 70th anniversary of the October revolution, Gorbachev gave a speech pertaining to Stalin's crimes, which was followed by the resurgence of freedom of speech and the gradual withdrawal of repression. Intellectually, politically and morally the speech would play a critical role in undermining the Stalinist system of coercion and empire.[56] At Gorbachev's behest, censorship was decreased and citizens could finally obtain truthful accounts regarding Soviet history and the outside world. However, this reform proved to be fairly detrimental, as Soviet citizens were dismayed to find that their country actually lagged far behind the civilized countries. They were also taken aback by the flood of revelations about Soviet history.[57] While this did not trigger outbursts of unrest amongst the population, it did have the cumulative impact of delegitimizing the Soviet regime in the eyes of many Russians.[58] After his speech, Gorbachev continued his frenetic march towards democratisation with the astounding creation of a Congress of People's Deputies in 1989. Yet again, Gorbachev had found that the reform process necessitated CPSU support; however, conservatives at the heart of the party were continuously moving at cross-purposes to his reform efforts. Hence, by giving power to the people to elect deputies who would draft legislation, Gorbachev believed that he would be strengthening the government, [and] by creating this new Congress, he could gradually diminish the role of the Party regulars [conservatives].[59]

Instead of strengthening the government, Gorbachev's Glasnost of society pushed the USSR further along the path of social turmoil. In hindsight, it is possible to see that

the democracy Gorbachev had in mind was narrow in scope. [...] Criticism [...] would be disciplined [...] and would serve to help, not hurt, the reform process. [...] His problems began when [...] disappointment with his reforms led [...] critics to disregard his notion of discipline.[60]

As soon as economic Perestroika failed to deliver on its promises, the proletariat began to speak out en masse, and instead of constructive openness, Gorbachev had created a Glasnost of criticism and disillusion. This was seen following the 1989 Congress, as social upheavals erupted when miners saw the politicians complain openly about grievances never aired before[61] and decided to do the same. In 1989, almost half the country's coal miners struck,[62] followed by other episodes in 1991, when over 300,000 miners had gone out on strike.[63] Very quickly, Gorbachev also came to sorely regret his Neformaly initiative, as workers, peasants, managers and even the military organized themselves into lobby groups, some of them asking the Kremlin to press forth with reforms and others asking it to reverse the whole reform process. Gorbachev's decisional error lay in his simultaneous initiation of Perestroika and Glasnost; as the latter met quick success whilst the economy remained in free-fall, society was plunged into a state of profound crisis.

Party Politics

Alongside his catastrophic reform of society and the economy, Gorbachev launched a restructuring of the CPSU, which he deemed essential to complement his economic reforms. In 1985, Gorbachev purged (discharged) elements of the CPSU nomenklatura, a term designating the key administrative government and party leaders.

Within a year, more than 20 to 30% of the ranks of the Central Committee [...] had been purged. Gorbachev expected that these purges would rouse the remaining members of the nomenklatura to support perestroika.[64]

This attack on the party served as an ultimatum to higher government and party officials who were less inclined to follow Gorbachev's path of reform. Nevertheless, as economic and social turmoil ensued, Gorbachev went too far in his denunciation of the party, angering party members and amplifying disillusionment within the proletariat. Examples of this are rife: behind the closed doors of the January 1987 Plenum of the Central Committee, Gorbachev [...] accused the Party of resisting reform.[65] In 1988, Gorbachev also fashioned himself a scapegoat for economic failures: the Ligachev-led conservatives were strangling the reforms.[66] Up until 1988, this attack on the party nomenklatura did not have far-reaching repercussions, but as Gorbachev nurtured and strengthened the reformist faction of the CPSU, infighting between the conservatives and reformists began to have two negative effects. The first was widespread public loss of support for the party; this can be seen in the drop in Communist Party membership applications and the rise in resignations. By 1988 the rate of membership growth had fallen to a minuscule 0.1 per cent, and then in 1989 membership actually fell, for the first time since 1954.[67] The other negative repercussion lay in how party infighting led to the inability of the CPSU to draft sensible legislation. This was due to Gorbachev continuously altering the faction he supported in order to prevent either one from seizing power. Such a characteristic can be spotted in his legislative actions regarding the economy and the social sector, which mirrored his incessant political shifts from the reformist faction to the conservative one. In 1990, Gorbachev opted for more de-centralisation and even greater autonomy in the Soviet republics by creating the Presidential Council, where the heads of each republic were able to have a say in his decisions. However, he reversed course in 1991 with the creation of the Security Council, where heads of republics now had to report to him directly, thus reasserting party control. Concerning the economy, Gorbachev acted similarly: as explained earlier, his first batch of reforms in 1986 stressed the need for centralisation with super-ministries, but he changed his mind the year after with his Cooperatives and Enterprise Laws and agricultural reforms. Gorbachev constantly

switched course […] [his] indecisiveness on the economy and the Soviet political system has generated more confusion than meaningful action. […] After a time, no one seemed to be complying with orders from the centre.[68]

In effect, an overlapping of approaches is visible here, since the way party infighting affected Gorbachev's reforms can be read either as a contingent factor that obstructed reform or as a decisional error on Gorbachev's part for having reformed the party in such a manner.

Overall, this incoherence in his reform process can be seen as the result of his own decisional mistakes. Having succeeded in his Glasnost of society and the party, Gorbachev had allowed high expectations to flourish regarding his economic reforms, expectations that were gradually disappointed. Amidst this social turmoil, economic downturn, party infighting and widespread disillusionment, the Soviet republics began to move towards independence as the central command of the Kremlin progressively lost control and became ever more incoherent in its reforms.

The Death of the Union

As the Soviet Union descended into a state of socio-economic chaos, individual republics began to voice their desire to leave the union. This can be seen as having been triggered by the combination of three decisional errors on Gorbachev's part. The first was his miscalculation of the outcome of Glasnost, as by 1990

all 15 republics began to issue calls for either economic sovereignty or political independence. […] Gorbachev's efforts to induce local groups to take initiative on their own were being implemented, but not always in the way he had anticipated.[69]

Originally, initiative had never been conceived of as something that could lead to independence movements; Gorbachev had introduced this drive to stimulate workers and managers to find solutions suited to the problems felt in their own factory or region. Adding to this mistake were Gorbachev's failed economic reforms under Perestroika, and as the Union's economic state degenerated, individual republics began to feel that independence was the key to their salvation. Gorbachev's


Libertarian candidate Gary Johnson hires GOP operative to …

Posted: August 12, 2016 at 2:48 pm

The head of Hispanic Outreach for the Libertarian Party, who is a Republican, says he joined up with the third party because he believes GOP presidential nominee Donald Trump is the worst of the worst.

Speaking to The Hill, Juan Hernández, who took the post with the Libertarian Party last week, said that he is not leaving the Republican Party, but is backing Libertarian Gary Johnson's bid for the White House because he believes the former New Mexico governor "comes with a message that brings both of my worlds together."

Johnson's message of small government and letting states decide on social issues resonated with Hernández because it "fits Hispanics so well."

"We came here, we're religious, we don't want to get into the debate over gay marriage," Hernández said of Hispanics. "Let states decide."

As for Trump, Hernández said there are just so many reasons why he can't support the boisterous billionaire.

While he says that Trump's call to build a massive wall along the United States' southern border with Mexico and his proposal to deport the 11 million undocumented immigrants living in the country would be an insult to Hispanics, Hernández said his opposition to Trump goes even further.

Trump would “not only be a disaster for Hispanics, for Republicans, for Americans, for the world. I really fear a Trump president. The way he speaks of bombing other nations, the Muslims?”

Hernández, however, said he never had any plans to support Democratic presidential nominee Hillary Clinton.

"It's not a matter of I'll go with the lesser of two evils; I think we have to vote on principle," said Hernández.

"Since she was first lady of Arkansas, she and her husband were always en la orillita of what's appropriate," Hernández said, using the Mexican Spanish phrase that roughly translates to "in gray space."

Hernández has previously worked as an advisor to presidential candidates in the U.S., Mexico and Guatemala, including Arizona Sen. John McCain's failed 2008 bid and the campaigns of former Mexican Presidents Vicente Fox and Felipe Calderón.

Besides Hernández, the Johnson campaign nabbed another high-profile Republican boost on Wednesday when Virginia Rep. Scott Rigell said he thinks Johnson can win the presidency.

“This may surprise you to hear, but I’m ready to defend the proposition that Gov. Johnson can win,” Rigell said.



Midnight Eye feature: Post-Human Nightmares The World of …

Posted: August 10, 2016 at 9:05 pm

A man wakes up one morning to find himself slowly transforming into a living hybrid of meat and scrap metal; he dreams of being sodomised by a woman with a snakelike, strap-on phallus. Clandestine experiments in sensory deprivation and mental torture unleash psychic powers in test subjects, prompting them to explode into showers of black pus or tear the flesh off each other's bodies in a sexual frenzy. Meanwhile, a hysterical cyborg sex-slave runs amok through busy streets whilst electrically charged demi-gods battle for supremacy on the rooftops above. This is cyberpunk, Japanese style: a brief filmmaking movement that erupted from the Japanese underground to garner international attention in the late 1980s.

The world of live-action Japanese cyberpunk is a twisted and strange one indeed; a far cry from the established notions of computer hackers, ubiquitous technologies and domineering conglomerates found in the pages of William Gibson's Neuromancer (1984) – a pivotal cyberpunk text during the sub-genre's formation and recognition in the early eighties. From a cinematic standpoint, it perhaps owes more to the industrial gothic of David Lynch's Eraserhead (1977) and the psycho-sexual body horror of early David Cronenberg than the rain-soaked metropolis of Ridley Scott's Blade Runner (1982), although Scott's neon-infused tech-noir has been a major aesthetic touchstone for cyberpunk manga and anime institutions such as Katsuhiro Otomo's Akira (1982-90) and Masamune Shirow's Ghost in the Shell (1989- ).

In the Western world, cyberpunk was born out of the new wave science fiction literature of the sixties and seventies; authors such as Harlan Ellison, J.G. Ballard and Philip K. Dick – whose novel Do Androids Dream of Electric Sheep? (1968) was the basis for Blade Runner – were key proponents in its inception, creating worlds that featured artificial life, social decay and technological dependency. The hard-boiled detective novels of Dashiell Hammett also proved influential with regard to the sub-genre's overall pessimistic stance. What came to be known as cyberpunk by the mid 1980s was thematically characterised by its exploration of the impact of high technology on low-lives – people living in squalor, stacked on top of one another within an oppressive metropolis dominated by advanced technologies.

Live-action Japanese cyberpunk, on the other hand, is raw and primal by nature, characterised by attitude rather than high concept. A collision between flesh and metal, the sub-genre is an explosion of sex, violence, concrete and machinery; a small collection of pocket-sized universes that revel in post-human nightmares and teratological fetishes, powered by a boundaryless sense of invasiveness and violation. Imagery is abject, perverse and unpredictable and, like Cronenberg's work, bodily mutation through technological intervention is a major theme, as are dehumanisation, repression and sexuality. During the late eighties and early nineties, it was a sub-strain characterised largely by the early work of two directors: Shinya Tsukamoto and Shozin Fukui.

These directors made films that were short, sharp, bludgeoning and centred on corporeal horrors that saw the body invaded, infected and infused with technology. Tsukamoto's contributions are perhaps the most famous: Tetsuo: The Iron Man (1989) and Tetsuo II: The Body Hammer (1992). Both films present the nightmarish situation of their protagonist (played by actor Tomorowo Taguchi in both) undergoing a bizarre metamorphosis that sees a humble salaryman turn from human into a hybrid of flesh and scrap metal.

Although not as well known to western audiences, Fukui's work is also important. Stylistically similar to Tsukamoto but sufficiently divergent so as not to be a mere copy, Fukui opened up the sub-genre's palette by incorporating Cronenberg-like scientific experiments that impact the body through technological augmentation, as evidenced in his contributions Pinocchio 964 (1991) and Rubber's Lover (1996). These films focus on the vulnerability of the human mind and how such alteration can do more than change physical appearance, creating completely new mental states and thought processes that are beyond human.

Tsukamoto and Fukui eschewed many of the conventions crystallised by Gibson's archetypal Neuromancer. There are no mega-conglomerates or incidences of virtual reality, and the power struggle of high technology versus low quality of life is replaced by low technology versus low-life. The technology in their vision of cyberpunk consists of industrial scrap – Tetsuo – and makeshift laboratories built from crude and dated equipment – Rubber's Lover – lending a DIY aesthetic to their overall ethos. These were, after all, films made with little or no money and, as a result, were not set in gargantuan, near-future metropolises but in the present-day, real-life cyberpunk city of Tokyo, suggesting that anxieties over rapid modernity are not some far-off concern but something to be worried about now. Both filmmakers also had a fixation with post-industrial landscapes, using scrap yards, boiler rooms, abandoned warehouses, compounds and factories as decaying playgrounds for their ideas.

However, this new and defiant take on the sub-genre did not come about overnight. There are many precursors to both Tsukamoto and Fukui’s work that also need to be addressed. Some are quite well known to western audiences whilst others have yet to get the recognition that they deserve in helping to create one of the most fascinating and philosophical phases in contemporary Japanese cinema.

Whilst the ideas of cyberpunk in the West were born out of literature, Japanese cyberpunk, it could be argued, was born out of music. During the late seventies and early eighties, Tokyo was enjoying an incredibly vibrant underground punk music scene, an ethos that later branched out into art and cinema thanks largely to one individual: Sogo Ishii.

Born in 1957, Ishii quickly built a reputation as something of a maverick and grew to be a prominent figure in the Tokyo underground filmmaking scene. Operating within the gathering rubble of a collapsing studio system, Ishii turned out a variety of zero-budget 8mm film projects at a time when former international filmmaking heavyweights such as Akira Kurosawa were struggling to find financial investment.

Early feature film efforts such as Panic High School (1978) and Crazy Thunder Road (1980) encapsulated the rebellion and anarchy associated with punk and went on to become highly influential in underground film circles. Crazy Thunder Road in particular pointed the way forward with its biker-gang punk aesthetic; a style that would be explored later in Otomo’s highly influential Akira. Originally made as a university graduation project, it was picked up for distribution by major studio Toei, making Ishii the first of his generation to move from amateur filmmaking into the professional industry while still a university student [ 1 ].

After Crazy Thunder Road, Ishii made the frenetic short film Shuffle (1981) – interestingly, an unofficial adaptation of a Katsuhiro Otomo comic strip – as well as a slew of music and concert videos for a variety of Japanese punk bands. However, Toei soon returned, offering Ishii studio backing for his next feature film project. This new financial investment resulted in Ishii’s most influential work to date; Burst City (1982), a film that encapsulated and epitomised his favourite subject matter: the punk movement.

No other film captured the intensity, pessimism, delinquency and do-it-yourself bravado of Japan's punk movement like Ishii's Burst City; a bold, brash and anarchic time-capsule of the early-eighties zeitgeist. However, despite its overwhelming influence – it shaped not only the conventions of Japanese cyberpunk but the future of contemporary Japanese cinema as a whole – Burst City remains largely unappreciated. It is frequently overshadowed by its higher-profile, more internationally renowned followers: Tsukamoto, Takashi Miike and Takeshi Kitano among others, all of whom are indebted to Ishii's work in some shape or form.

However, Ishii has always played the rebel: attending his filmmaking class at Nihon University only when he needed to borrow more equipment; dropping off the filmmaking radar for long stretches of time; making films of a commercially unviable length, such as the 55-minute Electric Dragon 80,000V (2001); and challenging conventional moviegoers with his early punk films only to then defy the fans of that work with calm, hypnotic efforts such as August in the Water (1995) and Labyrinth of Dreams (1997). It is this ethos that drives Burst City, steering it through the deserted Tokyo highways and barren industrial wastelands that make up its initial exposition and into the anarchic meltdown of its closing act.

The visual aesthetic of Burst City is an eclectic mix of punk, industrialisation and post-apocalyptic wasteland imagery reminiscent of the first two Mad Max films (1979 & 1981), with some science fiction trimmings; the futuristic cannons used by the Battle Police to disperse riots, for instance. However, Burst City acts beyond the usual genre trappings. It has the immediacy and atmosphere of a documentary, chronicling both the people and the music, whilst using the surrounding dystopian backdrop as a metaphor for the anxiety, haplessness and alienation experienced by Japan's youth at the time. This documentary feel is further enhanced by Ishii's groundbreaking use of the camera. His highly dynamic, handheld, almost stream-of-consciousness shots, interwoven with equally aggressive, machine-gun editing, not only capture the energy and restlessness of the music – which is very prominent here – but would heavily influence Tsukamoto and the execution of his work.

The film's industrialised environments – the abandoned warehouses and run-down boiler rooms where the biker gangs and punk bands reside – would become a key aspect of the Japanese cyberpunk look, as well as depicting Tokyo as little more than a concrete slum. The notion of the metropolis as an oppressive entity starts to become apparent here, and it's interesting to note that this film was made in the same year as Blade Runner, which displays similar connotations [ 2 ].

Ishii's prior involvement with the punk movement allowed him to gather an impressive ensemble of real-life Japanese punk bands – The Rockers, The Roosters and The Stalin among others – as part of the cast, as well as 1970s folk singer/songwriter Shigeru Izumiya. Interestingly, Izumiya was also credited as a Planner and the film's Art Director, suggesting that he had a strong hand in shaping Burst City's influential aesthetic. This serves as a vital link, as Izumiya would go on to write and direct his own film; a film that would crystallise many of the conventions and ideas of Japanese cyberpunk later explored by Tsukamoto and Fukui.

Shigeru Izumiya's Death Powder (1986) introduces the unorthodox visuals and abstract delivery that would prove instrumental in future Japanese cyberpunk execution. As in Burst City, sound plays a vital part here, further laying the foundations for the sensory assault aspect of the movement that would later be championed and refined by Tsukamoto. Izumiya, like Ishii, is from a musical background; a popular folk singer/songwriter as well as a film composer – he wrote the music for Ishii's breakthrough feature Crazy Thunder Road.

Lost in public domain purgatory for decades, Death Powder barely exists, available on bootleg DVD and only recently as video segments on the internet [ 3 ]. Western understanding of the film has been largely incoherent and underwhelming due to poor and partial translation into English, and as a result Death Powder is frequently overlooked. However, its influence is unmistakably clear and it's arguably the first film of Japan's extreme cyberpunk movement, exemplifying the invasive, corporeal surrealism that would follow over the next ten years.

Set in present-day or near-future Tokyo, the film follows a group of researchers who have in their possession Guernica, a feminine, cybernetic android capable of spewing poisonous dust from its mouth. Karima (played by Izumiya) is left to guard the android but appears to lose his mind, attacking the other two – Noris and Kiyoshi – when they return. Kiyoshi inhales some of Guernica's powder and starts to mutate as a result. He also starts hallucinating as their subconscious minds begin to merge. One sequence entitled "Dr. Loo Made Me" – which suggests that the android is trying to communicate with Kiyoshi – sees the Guernica project in its early stages, featuring the three researchers as well as the eccentric Dr. Loo, the guitar-wielding head of the operation. The hallucinations provide Kiyoshi with further omniscience, detailing Karima's apparent love for Guernica as well as the research group's ongoing struggle with the 'scar people', men disfigured as their flesh deteriorates uncontrollably.

The subject of flesh, the boundary between life and death and the notion of what it means to be human come into play regularly as the film drifts from one surrealist situation to another. Death Powder poses the question: if you cease to have flesh, do you cease to be human? This is an idea that is routinely explored in cyberpunk, but while western examples such as Blade Runner and Neuromancer focus on larger-scale implications, Death Powder – and most of Japan's subsequent cyberpunk output, for that matter – looks at the changes within the individual. In the former, invasive technologies are not only fully realised but have been successfully integrated into society, becoming common practice. The technologies explored in the latter, however, are still in their primordial stages; they are works in progress and extremely esoteric, and as a result extremely volatile and unpredictable.

Death Powder also establishes Japanese cyberpunk's tendency to place imagery ahead of narrative, a fundamental aspect of the no-holds-barred sensory assault style these films exhibit. As a result, story and purpose are gleaned from what is seen rather than what is told, lending subsequent films a tonal and philosophical quality. Like many similarly spirited films that would follow, Death Powder highlights the destructive and dehumanising nature of technology. A big clue comes in the form of the android Guernica sharing its name with Pablo Picasso's famous 1937 painting depicting the bombing of Guernica by Nazi warplanes (in support of Franco) during the Spanish Civil War. Picasso's mural shows an orgy of twisted bodies, animals and buildings, deformed by war or, more broadly, the deviant technologies that power it. The film's end sees the cast fused and writhing in an ocean of monstrous flesh; the human form consumed and destroyed at the hands of intervening science.

Despite Death Powder's aesthetic and thematic influence, it passed with little fanfare and was not seen outside of Japan until years later. The subsequent, similarly minded Android of Notre Dame (Kuramoto; 1988) fared slightly better, partly due to the infamy surrounding the film series it was part of, a seven-film collection known as the Guinea Pig series: short exploitation features that focused on torture, murder and other destructive processes, designed to appear realistic and snuff-like [ 4 ]. Android of Notre Dame failed to strike a chord with wider audiences and has since wallowed in cult obscurity along with its filmic brethren. However, this all changed as Japanese cyberpunk began to creep into the international spotlight with the anime feature film adaptation of Katsuhiro Otomo's popular manga series, Akira (1988).

Although this writing focuses mainly on live-action cyberpunk output, Akira's arrival was so important and influential to the sub-genre that it needs to be acknowledged. Akira achieved two things: first, it opened up and, almost single-handedly, popularised anime and manga for global audiences (especially in the UK and US); second, it perpetuated the cyberpunk ethos on perhaps the largest scale to date, combining the neon-lit, high-technology/low-living metropolis of Blade Runner and Neuromancer with body horror overtones. The film condensed the vast narrative of Otomo's gargantuan, six-part magnum opus into a streamlined, two-hour feature directed by Otomo himself. It is a milestone within Japanese cyberpunk as it was the first of the sub-genre not only to have commercial success domestically but also to find an audience overseas.

Set within the destitute overcrowding of futuristic Neo Tokyo, the story revolves around juvenile biker thugs and best friends Kaneda and Tetsuo. During a turf spat with a rival gang, Tetsuo crashes but is mysteriously taken away by military and scientific officials. They experiment on him with chemically altering drugs, turning Tetsuo into a psycho-kinetic demigod with uncontrollable power. He goes on a destructive rampage through the city to seek an audience with Akira, a highly powerful entity that destroyed the old Tokyo decades before.

Part of Akira's success inevitably lies in its attention to detail and vaulting ambition. The budget was astronomical for an anime feature at the time – around ¥1,100,000,000 [ 5 ] – acquired through the partnership of several major Japanese media companies including Toho and Bandai. It avoided the corner-cutting of past anime projects, producing hundreds of thousands of animation cels to create fluid motion – particularly in its many action set-pieces – and capture nuances that would otherwise not have existed. Otomo also went to the trouble of doing lip-synched sound recording, a first for anime, resulting in extremely high and rich production values. The film set box office records for an anime in Japan during its summer 1988 release, grossing over ¥6,300,000,000 [ 6 ]. Internationally, it got a limited theatrical run in America and the United Kingdom soon after – sowing the seeds for the immense western cult fanbase that it enjoys to this day – but failed to get home video distribution until the early nineties.

Themes of mutation, modernity and social unrest are rife. Kaneda and Tetsuo's biker gang are like a revved-up version of the delinquents seen in Ishii's Crazy Thunder Road and Burst City, while Tetsuo's ESP and subsequent transformation set the film firmly in Cronenberg's body horror territory. His eventual fusion with metal – resulting in a horrific man-machine hybrid that sees Tetsuo become the master of a newly formed universe – is not only evocative of the cyberpunk notion of technology corrupting the human form (in this case literally) but also serves as an important visual precursor to the movement's next breakthrough, live-action work.

Often revered as the definitive example of extreme Japanese cyberpunk and a vital cornerstone in the rebuilding of contemporary Japanese cinema, Tetsuo: The Iron Man was a baffling international success story, prompting many sceptics of Japanese cinema's future to turn their attention eastward. Barely over an hour in length, Tetsuo was a breath of fresh air; a no-holds-barred sensory assault that gave Japanese cinema a major image renovation and launched the career of its director, Shinya Tsukamoto, who has gone on to become one of the country's most respected and treasured auteurs.

During its unprecedented and lengthy tour of international film festivals, Tetsuo not only pointed towards exciting new possibilities for contemporary Japanese cinema but was able to fit ‘snugly into a pantheon of genre works that included Ridley Scott’s Blade Runner, James Cameron’s The Terminator, David Lynch’s Eraserhead and the work of David Cronenberg, Sam Raimi and Clive Barker'[ 7 ], which no doubt broadened its appeal. Its use of kinetic cinematography, rapid-fire editing and DIY, zero-budget special effects served as an invitation; a call to arms if you will, for independent filmmakers everywhere to produce unique and challenging cinema.

However, the majority of the film's innovative style is, for the most part, lifted from elsewhere, amounting to a fusion of a variety of influences: the hyperactive camerawork of Ishii's Burst City; the body horror of Cronenberg's Videodrome (1983) and The Fly (1986); the biomechanical perversions of artist H.R. Giger; the literature of J.G. Ballard – particularly Crash (1973) – and the stop-motion animation of Jan Švankmajer. There is also a sense of strange nostalgia for the old kaiju (monster) movies and television serials that Tsukamoto watched growing up in a Tokyo undergoing post-war reconstruction as well as major expansion and modernisation in preparation for Japan's hosting of the 1964 Olympic Games.

Like Ishii, Tsukamoto's early development stemmed from making 8mm films as a teenager during the 1970s, using his younger brother and friends as cast and crew members. As he reached adulthood, Tsukamoto abandoned filmmaking and turned his attention increasingly towards the stage, forming a theatre troupe with like-minded university students and directing plays [ 8 ]. One of the plays that Tsukamoto wrote would subsequently be adapted into a film, The Adventure of Denchu Kozo (1987), with the assistance of his theatre cohorts – christened 'Kaiju Theatre'. It was this same group that also made Tetsuo, along with a revolving-door line-up of other helpers, most notably fellow filmmaker Shozin Fukui, who would go on to make his own cyberpunk features during the nineties.

Tetsuo's chief concern is the impact of technology on society and, more specifically, the human form. Tsukamoto suggests that technology is a disease, bursting forth unannounced and unexplained, as evidenced in the salaryman's transformation – simultaneously reminiscent of Cronenberg's The Fly and Otomo's Akira – where a shard of metal lodged in the protagonist's cheek is the starting point for further mutation. Like Seth Brundle of The Fly, the salaryman is both repulsed and intrigued by what he is turning into and, coincidentally, his evolution shares the name of Akira's transforming character: Tetsuo, meaning 'iron man' or 'clear-thinking/philosophical man'. Tsukamoto embraces both interpretations of his film's title. On one hand is the literal transformation of flesh to iron; on the other, a philosophical enquiry into technology's consuming nature and the symbiosis between city and citizen.

However, closer inspection reveals further concerns, as evidenced by Steven T. Brown, author of the groundbreaking Tokyo Cyberpunk: Posthumanism in Japanese Visual Culture, in which he says: ‘the mixing of flesh and metal in Tetsuo is not only intensely violent but also darkly erotomechanical and techno-fetishistic, evoking sadomasochistic sexual practices and pleasures, as well as fears of both male and female sexuality out of control'[ 9 ].

In this regard, Tsukamoto gives horror and eroticism equal attention: the salaryman has a nightmare involving his girlfriend (played by Kei Fujiwara) sodomising him with a mechanical, snakelike appendage strapped to her crotch. This gender-reversal is not only representative of one of David Cronenberg’s favourite thematic stomping grounds, but also shares the Canadian director’s Ballardian [ 10 ] allusions, hyper-masculinity and homoerotic undertones. When the film’s antagonist, Yatsu (meaning ‘Guy’) – a metal fetishist (played by Tsukamoto himself) suffering from the same man-machine affliction – arrives at the apartment, he turns up ‘presenting flowers to the salaryman in a parody of courtship'[ 11 ] that ends with physical assimilation.

This mechanical eros continues when, in an early stage of his transformation, the salaryman’s penis turns into a rapidly oscillating drill which he then uses on his girlfriend with graphic results. By the film’s end, he does battle and fuses together with the metal fetishist; the result is a large tank-like monstrosity with the suggested goal of world domination. His newfound unrepressed nature effectively destroys his heterosexual relationship, only to start a new one with someone – another male – experiencing similar changes to their body.

The film's metaphorical capacity is achieved primarily through its abstract and surrealist execution, which bears similarities to Luis Buñuel's Un Chien Andalou (1929) – as noted by Brown in Tokyo Cyberpunk (pp. 60-64) – and David Lynch's Eraserhead. The latter is a popular comparison, prompting many to refer to Tetsuo as a "Japanese Eraserhead". Whilst both films share an allegiance to post-humanism and industrialised iconography, Eraserhead takes a slower-burning, atmospheric approach. Tetsuo, on the other hand, takes a startlingly aggressive stance from the outset, combining hand-held camerawork, rapid-fire editing and a pummelling, industrial music score by composer Chu Ishikawa – who would serve as composer for future Tsukamoto projects – to create a battering and invasive sensory assault. It was an ethos that would carry over into the next decade of underground filmmaking.

After completing his second feature, the manga adaptation Hiruko the Goblin (1990), Tsukamoto returned to the world of mutated scrap with a second Tetsuo film. Tetsuo II: The Body Hammer (1992) serves more as a companion piece than as a straightforward sequel or remake. It is a new interpretation of the same basic premise – man-machine transformation – but played out on a larger scale. Tomorowo Taguchi reprises his role as a (different) salaryman. This time, he lives in a sterile, high-rise apartment with his wife and young son. His metamorphosis is triggered when his son is kidnapped by an underground faction of skinheads who want to harness the salaryman’s cyber-kinetic powers so that they can augment their bodies into organic weaponry in order to bring about mass destruction.

If the ethos of the first Tetsuo was related to The Fly, the second film perhaps bears more similarity to Cronenberg's Scanners (1981), as the salaryman comes to blows with his mutated brother (played by Tsukamoto), the leader of the skinhead group. In doing so, Body Hammer moves away from the surreal, macabre horror of its predecessor and towards an action/science fiction movie template, although plenty of avant-garde trimmings remain to bridge, connect and embellish ideas. As a result, Tsukamoto operates within a somewhat more conventional and, ultimately, more accessible narrative structure, and a larger budget means that he is able to fully realise the end-of-the-world scenario suggested in the closing moments of the first film. As per Tsukamoto's wish, Tokyo is razed to the ground.

Like the first film, Body Hammer blurs the distinction between form and content. It also re-imagines concepts that were given little attention the first time around; the metal fetishist's obsession with physical perfection, as suggested by the photos of successful athletes that adorn his shack-like abode, is 'brought very much to the foreground in the shape of the skinhead cult, which consists of athletes, bodybuilders and boxers who push their training regimen to the extreme' [ 12 ] – a topic that would dominate Tsukamoto's subsequent film project. It's a possible indictment of the obsessive body-culture phenomenon of the 1980s, which saw more and more people going to the gym and taking advantage of artificial enhancements such as plastic surgery; a time when there was a strong emphasis on physical perfection and beauty.

The film also hints at the direction Tsukamoto would start to take with future productions: the environmental focus has shifted ever so slightly from the decaying urban sprawl to the sterile functionality of the metropolis centre, and more of an emphasis has been placed on the relationship between the salaryman and his wife; a marriage torn apart by invasive elements. The catalyst for transformation this time is not from infection or a curse as suggested in the original, but from demonstrative rage. The prospect of the salaryman’s son being killed by the skinheads provokes the first instance of transformation, which occurs again when his wife is kidnapped, causing multiple gun-barrels to erupt from his chest and limbs. Rage would go on to transform Tsukamoto’s protagonists in future films Tokyo Fist (1995) and Bullet Ballet (1998), albeit figuratively instead of literally.

In the wake of Tetsuo's startling domestic and international success, one might have expected it to act as a catalyst for a wave of similarly styled films. In retrospect, this wasn't the case, as very few filmmakers chose to follow the path forged by Tsukamoto's breakthrough work. However, former colleague Shozin Fukui was one of the few to accept the challenge.

Like Tsukamoto and Izumiya before him, Fukui is a disciple of Sogo Ishii's breakthrough independent filmmaking of the late seventies as well as the music that inspired it. Born in 1961, Fukui moved to Tokyo in the early eighties, quickly became infatuated with the burgeoning underground punk music scene and set about forming his own band with friends. These same friends would serve as Fukui's cast and crew on early forays into filmmaking such as Metal Days (1986) and the short films Gerorisuto (1986) and Caterpillar (1988) [ 13 ].

After serving as assistant director to both Tsukamoto and Ishii – on Tetsuo: The Iron Man and the short film The Master of Shiatsu (Shiatsu Oja, 1989) respectively – Fukui started to write and direct his own feature films. His first was Pinocchio 964 (1991), and while it did not share the same philosophical leanings that Tetsuo had shown two years before, it was nonetheless an effective manifesto for Fukui's thematic preoccupations: how technological augmentation impacts the fragile and potentially volatile nature of the human mind. The story focuses on the titular protagonist, a brainwashed individual who has been scientifically modified to operate as a sex slave. Upon being thrown away by his sexually demanding female owners, Pinocchio wanders the streets of present-day Tokyo, where he meets Himiko, a fellow destitute. She takes Pinocchio under her wing and he begins to fall in love with her, prompting the return of previously erased memories. When Pinocchio realises what has happened to him and who is responsible, he plans revenge. Meanwhile, the corporation in question organises a search party to reclaim its missing product.

Pinocchio 964 is frequently compared to Tetsuo by cyberpunk enthusiasts and academics alike. The films represent the feature-length debuts of Fukui and Tsukamoto respectively, and both exhibit a similarly energetic and manic execution. It can be argued that Fukui's style is indebted to Tsukamoto, given his stint as assistant director during part of Tetsuo's filming. Fukui's previous short, Caterpillar – made at around the same time as Tetsuo – features similar techniques, including hyperactive, hand-held camerawork and stop-motion animation, as well as similar imagery: mounds of scrap, ubiquitous urban living and flesh merged with machinery.

However, there are some major differences. The most apparent is inherent in the film's mise en scène: Pinocchio 964 is in colour (except for its opening sequence) whereas Tetsuo is black and white – though its sequel was in colour. Thematically, unlike Tsukamoto's notion of technology as an organic, mutating disease, Fukui's film depicts the body transformed as the direct result of man-made augmentation, similar to early Cronenberg – Shivers (1975) and Rabid (1977), for example – as well as Mary Shelley's Frankenstein (1818). Like the monster in Shelley's seminal work, Pinocchio is at first oblivious to his condition, but time spent in the real world causes him to realise his artificial existence and he seeks revenge against his creator. However, unlike Frankenstein's monster, Pinocchio was not constructed from scratch; he is his namesake in reverse – a human turned product through neural tampering and memory wiping. Fukui seems to suggest that modernity is programming the populace to concern themselves with nothing but sex, a sentiment that's readily apparent in the media and advertising industries.

It could be argued, then, that Pinocchio 964 is the more precise cyberpunk text, offering a speculative stance on potential future technologies, i.e. altered living through cybernetic assistance. As suggested in Tetsuo, these technological changes have a perverse impact on sex; Pinocchio is compelled to suckle on Himiko's breasts in a brain-damaged, baby-like stupor – not knowing any better – whereas the salaryman's girlfriend is enticed and drawn to ride her lover's newly developed drill-penis.

The conclusion of Pinocchio 964 sees further transformation beyond the esoteric boundaries previously established. Like the salaryman and the metal fetishist, Pinocchio and Himiko – both victims of the corporation's scientific dalliances – merge together in a manner and style reminiscent of Peter Jackson's first lo-fi feature Bad Taste (1987), suggesting the start of a new, technologically altered meta-race in keeping with Cronenberg's corporeal philosophy of the "New Flesh" [ 14 ].

Thanks to Tetsuo's worldwide success – along with other newly emerging work like Takeshi Kitano's gritty police caper Violent Cop (1989) – Pinocchio 964 enjoyed a modicum of cult success as international demand for strange and ultra-violent Japanese cinema began to increase. Film companies such as Toho started to cater to this newfound interest by introducing direct-to-video distribution lines that specialised in low-budget, sensationalist material. One such entry was Tomoo Haraguchi's descriptively titled Mikadroid: Robokill Beneath Disco Club Layla (1991), a cyber/steampunk horror about a buried, technologically augmented super-soldier – built by Japanese scientists during the Second World War – being reactivated and going on a murderous rampage. Largely unheard of, the film is perhaps most notable for featuring a (brief) acting turn from a then little-known Kiyoshi Kurosawa, who would later go on to direct internationally renowned works such as Cure (1997), Pulse (2001) and Tokyo Sonata (2008).

Both Pinocchio 964 and Mikadroid would be overshadowed by Tsukamoto’s higher budget and higher profile Tetsuo sequel, which arrived the following year. In the meantime, Fukui was already planning the next project; one that would take almost five years to gestate and execute.

The result was Rubber's Lover (1996), Fukui's second and, at present, last feature: a subterranean post-industrial nightmare of human experimentation and bodily destruction. A clandestine group of scientists experiment on human guinea pigs pinched from the street in order to unlock psychic powers. This is achieved through a combination of computer interfaces, sensory deprivation and regular injections of ether, usually resulting in the subject dying a gruesome and explosive death.

Often interpreted as a loose prequel to Pinocchio 964, Rubber's Lover, despite similarities to its predecessor, also represents a distinct contrast. The most readily apparent differences are the film's use of monochrome photography – a decision made by Fukui when he disliked the look of the S&M-flavoured costumes when filmed in colour – and its comparatively subdued pace, favouring atmosphere over propulsion. However, his pre-established tropes still remain: invasive technologies; bizarre sexual practices as a by-product of such technologies; retrograde, outdated equipment; mutation; and a fetish for bodily fluids – pus, blood, vomit and so on.

Like Tetsuo, Rubber's Lover depicts the establishment of a new world order through corporeal and technologically informed symbiosis: the biological co-existence of flesh and metal and the destruction of mental and physical barriers respectively. Rubber's Lover also takes great pleasure in distorting the boundaries and exploring the grey area between sex and violence, much more so than Pinocchio 964. One scene sees a frenzied character tearing the flesh off another, mid-coitus, on a hospital bed whilst a corporate scumbag laughs in the corner of the room. The researchers' successful test subject, Motomiya – a former member of the team who has since become addicted to ether – is made to wear a strange rubber S&M bodysuit, further augmented with makeshift technological add-ons of monitors, wires and outdated gizmos. The nurse's rotating ether injector is especially phallic and is used on the subjects rectally for "immediate effect", suggesting a notion of perversion that transcends sex and violence and extends into the realms of science and technology.

Rubber's Lover's perverted view of science echoes not only some of the imagery and themes of Izumiya's Death Powder (and, to a lesser extent, Haraguchi's Mikadroid) but also the real-life, deranged human experiments carried out by the Japanese military's infamous Unit 731 on Chinese prisoners of war during the 1930s and 40s [ 15 ], depicting a doomsday scenario that sees the human race tear itself apart in the pursuit of scientific understanding and technological superiority. Motomiya's ether addiction is caused by one of his research colleagues. The same colleague later kidnaps and rapes a representative of the project's benefactor sent in to oversee its shutdown. She is also subjected to D.D.D. (Direct Digital Drive), the apparatus used in the project's testing.

Fukui's fascination with the frailty and destructibility of the human mind comes to fruition as Motomiya quickly goes mad, burdened with newly unlocked psychic powers that he can't control. Like Pinocchio 964, Rubber's Lover examines the mental transformation that invasive technologies inflict on the human condition. This is in stark contrast to Tsukamoto's Tetsuo films, which focus primarily on the physical transformation caused by the same factors – perhaps the key difference between their otherwise similar contributions to the sub-genre.

By the mid-to-late 1990s, Japanese cyberpunk cinema was starting to wane, having been overtaken in international prominence by the blood-stained yakuza films of Kitano and Miike, which would in turn be overshadowed by the new wave of supernatural J-Horror films that emerged at the turn of the century, including Hideo Nakata's The Ring (1998) and Ring 2 (1999).

Fukui's Rubber's Lover was the last underground cyberpunk film of the nineties and arguably the last ever. Upon its completion, and after a limited video release, Fukui put filmmaking on hold to join a video production company, where he worked for the best part of ten years. Tsukamoto had moved on also, continuing his exploration of the symbiosis between city and citizen with a matured palette. His films Tokyo Fist (1995) and Bullet Ballet (1998) eschew virtually all of the science fiction and horror imagery that had characterised his previous work.

Cyberpunk was kept alive within Japan's anime and manga industries, but it wasn't until the turn of the millennium that it returned to cinema. The year 2001 saw the release of two films that would give the genre a new lease of life. Mamoru Oshii made Avalon, a live-action Japanese/Polish co-production about an addictive virtual simulation game. It was Oshii's first film since his internationally successful anime feature film adaptation of Ghost in the Shell (1995) – he would go on to direct the sequel, Ghost in the Shell 2: Innocence (2004).

Shot in Poland with Polish actors and a Japanese crew, Avalon's themes of virtual reality place it in the same territory as much of the American-produced cyberpunk that surfaced during the nineties: The Lawnmower Man (1992), Strange Days (1995), The Thirteenth Floor (1999), The Matrix (1999) and Cronenberg's similarly concerned eXistenZ (1999), for example. It was also redolent of many similarly themed anime releases – both theatrical and televised – that emerged during the same decade, as the real-life phenomenon of the internet started to make the world seem even smaller; Oshii's own adaptation of Ghost in the Shell and Ryutaro Nakamura's Serial Experiments: Lain (1998) series were particularly indicative of these technological and cultural changes. Another notable example, and a precursor to much of the VR-centric work that would appear in the 1990s, is the four-part anime series Megazone 23 (1985-1989), which explores the idea of a post-apocalyptic Tokyo existing as a futuristic virtual simulation.

The second film of 2001 was Sogo Ishii's Electric Dragon 80,000V, which not only served as Ishii's return to punk cinema after a decade of more meditative output but, like Burst City, spearheaded a new generation of like-minded filmmaking that has evolved Japanese cyberpunk into a new and strange beast. As with the sensory assault cinema favoured by Tsukamoto and Fukui, Electric Dragon is a film that is experienced rather than watched, stimulating the most primitive parts of the brain in a tsunami of sound and image.

The premise is simple enough: a young boy acquires the ability to channel and wield electricity after a childhood accident while climbing power lines – an ability further enhanced by multiple jolts of electro-shock therapy administered for violent behaviour. Now an adult with megawatts of power coursing through him, Dragon Eye Morrison is a professional reptile investigator, searching alleyways for lost lizards. Equilibrium is disturbed by the arrival of Thunderbolt Buddha, a TV repairman turned vigilante whose electro-conductive talents are the result of mechanical wizardry. The two meet and battle for supremacy on Tokyo's rooftops.

As was the case with Burst City, Electric Dragon leans less towards the cyber and more towards the punk aspect of the sub-genre, with Ishii following the train of thought he employed with his music videos and concert films during the 1980s. The film’s title also makes reference to the old days, partly derived from ‘Live Spot 20,000V’, the concert venue that plays a pivotal role in Burst City and one of Ishii’s early shorts, The Solitude of One Divided by 880,000 (1978). Electric Dragon is less about the nightmare and more about anarchic expression at odds with the post-modern universe.

However, some cyber signifiers do remain: the oppressive Tokyo setting realised in stark monochrome; the fetishist attitude towards power lines, aerials, ventilation ducts and other ubiquitous technological appliances; the hyperactive and frequently expressionist delivery; the low-budget, guerrilla-like execution; and, like Tetsuo, the concept of two characters augmented through technology, wielding powers they can't fully control, coming to blows. Dragon Eye Morrison has to clamp himself to a metal bed frame at night, whilst Thunderbolt Buddha's penchant for electronic devices to assist in his nocturnal excursions sometimes gets the better of him as he fights for control of his own body.

The psycho-sexual themes that dominated past Japanese cyberpunk have been replaced with an equally primal notion of animal magnetism. Morrison’s electric power is derived from the ‘Dragon’ that’s embedded in all living things. His rage unlocks the strength of the dragon, meaning that he can harness more energy by sucking it out of household appliances or by creating a non-melodic racket on his electric guitar; a high-voltage cacophony of noise and expression announcing that Ishii’s punk spirit is still alive and well. Indeed, lead actor Tadanobu Asano occasionally guests in Ishii’s industrial noise-punk ensemble Mach 1.67, which provided the film’s propulsive soundtrack. The film would later be used to accompany the group’s live shows, a strategy Ishii pioneered back in 1983 when he made the short film Asia Strikes Back – a little-known cyberpunk piece that provided the template for Shozin Fukui’s preferred set-up of underground experiments gone haywire – to back up the album and tour of the short-lived punk supergroup The Bacillus Army.

Similar to Tsukamoto's Tetsuo, dialogue in Electric Dragon 80,000V is minimal, so the narrative is powered mainly by image and follows a similar template: the protagonist is seen acquiring his power, the antagonist then challenges him to combat, and the final act sees them clash. All of this is wrapped up in a high-energy, fatless sixty-minute package. Ishii's film is not only a throwback to the eighties cyberpunk manifesto but a reminder that, rather than being characterised by heavy science fiction concepts as in the West, Japanese cyberpunk was defined by its independence, attitude and the will to create something out of nothing.

In the years following Electric Dragon 80,000V, a new wave of low-budget horror/science fiction began to surface, largely thanks to increased DVD distribution channels, cheaper production techniques and the ever-increasing reach of the internet. Films like Hellevator: The Bottled Fools (Hiroki Yamaguchi, 2004), Meatball Machine (Yudai Yamaguchi & Junichi Yamamoto, 2005), The Machine Girl (Noboru Iguchi, 2008) and Tokyo Gore Police (Yoshihiro Nishimura, 2008) have ushered in a new era of cyberpunk-informed, gore-centric movies that have since been termed 'splatter-punk'.

These splatter-punk movies share the same independent spirit as their precursors, swapping 8mm and 16mm film for cheap DV technology and retaining as much budget as possible for make-up, costume and practical effects. Many of the effects in these films depict mutation and body alteration; splatter re-imaginings of the flesh-metal fusions of Tetsuo and the perverse, organic weaponry of Tetsuo II. Similar to the "splatstick" horror of early Sam Raimi and Peter Jackson, the effects and transformations lean towards the ridiculous for comedic effect. One mutated character in Tokyo Gore Police wields an oversized cannon made of contorted flesh, protruding from his crotch much like an erect penis, suggesting – in a very tongue-in-cheek manner – the blur between sex and violence posited by Tsukamoto and Fukui. Yamaguchi and Yamamoto's Meatball Machine is perhaps the closest to the Japanese cyberpunk of old: parasitic aliens infect unsuspecting people, promptly turning them into macabre man-machine teratoids that fight it out.

In many ways, this 'splatter-punk' phase is also reminiscent of the special-effects race that occurred in American horror movies during the 1980s, Cronenberg included. As practical effects became more advanced, a seemingly never-ending slew of films was produced, each trying to out-shock the other with escalating exercises in gore. The same can be said here; the ante seems to be continually raised as each new release contorts and morphs the body in increasingly elaborate and grotesque ways.

A reason for this is that many of these films' directors came from special effects backgrounds: Tokyo Gore Police director Yoshihiro Nishimura, for instance, has supervised the special effects for many modern gore productions, including Noboru Iguchi's The Machine Girl and Robo-Geisha (2009). In fact, many of these films are made through Fundoshi Corps, a production company founded by Nishimura, Iguchi and film producer Yukihiko Yamaguchi that specialises in cheaply produced, over-the-top movies of this ilk. It has proven a successful business model, as their output continues to build a strong international fanbase looking for perverse and outlandish content.

The recurring touchstones of eroticism and perversion are also present. However, these films for the most part forgo subverted techno-fetishism in favour of contemporary V-Cinema and Pink Film preoccupations. The Machine Girl, for instance, takes typical imagery such as the Japanese schoolgirl – a popular conceit in much of the nation's anime, manga and pornography industries – to new abject levels, connecting bullet-spewing hardware to her severed limbs and even granting her the ability to grow weaponry out of the small of her back; skirt raised, of course.

Unfortunately, it would appear that live-action Japanese cyberpunk cinema has moved on from the daring, experimental underground from whence it came. The remnants of its ideas are now utilised in violent gore shockers bereft of the immediacy and philosophical potential of their progenitors. The movement, once an expression of attitude, concerns and frustration with the world, the way it is structured and the technology it uses – not just an exploration of the grey area between science fiction and horror – seems to have disappeared.

However, in 2009 Shinya Tsukamoto announced his return to the world of cyberpunk with a third Tetsuo project. Tetsuo: The Bullet Man is not only a return but a new beginning for Tsukamoto, as it is his first English-language film; an attempt to expose the demented world of Tetsuo to a wider audience. It premiered at the 2009 Venice Film Festival to mixed fanfare, prompting Tsukamoto to continue working on it. Subsequent showings – at the 2010 Tribeca Film Festival, for instance – have found greater critical favour, but a vital caveat still remains…

Like the punk scene it emulated, Japanese cyberpunk was pertinent and inextricably linked to a specific time and place. More than a sub-genre, it tackled the anxieties of the period in ways that conventional expression could not. But now that we're in the technologically dependent twenty-first century – the post-human nightmare now a grim reality – can it still be relevant?


Socio-Economic Collapse in the Congo: Causes and Solutions

Posted: July 25, 2016 at 4:00 pm

by Marie Rose Mukeni Beya

The history of the Congo is long. Some historians hold that early Congo history began with waves of Bantu migrations moving into the Congo River basin between 2000 B.C. and 500 A.D., then gradually expanding southward. The modern history of the Congo may be divided into four periods, starting in 1885, after the Conference of Berlin divided Africa into separate states which were then ruled by European imperial powers.

Colonization. King Leopold II of Belgium acquired control over the Congo territory in 1885. He named it the Congo Free State and ruled it as his private property from 1885 to 1908. The Belgian parliament took over the colony from the king in 1908. The Belgian Congo achieved independence on June 30, 1960 under a new leadership of representatives of various political parties. Mr. Joseph Kasavubu of the Alliance des Bakongo (ABAKO) party was elected President; Patrice-Emery Lumumba, the leader of the National Movement of the Congo (MNC), became prime minister; and Lieutenant Colonel Joseph Mobutu (Mobutu Sese Seko) was appointed chief-of-staff of the new army, the National Army of the Congo (ANC), and also became Secretary of State. The new nation was given the name Republic of Congo.

Adjustment and Crisis. The Congo spent the first half of the 1960s adjusting to its independence. In 1961, the Democratic Republic of Congo [DRC] was destabilized by army mutinies, unrest, riots, rebellions and the secession of the country's richest region, Katanga, soon followed by a similar move in the southeastern Kasai Province, which declared itself the Independent Mining State of South Kasai. The United Nations played a critical role in managing this crisis, which was further compounded by the trial of strength at the center between President Kasavubu and Prime Minister Lumumba, culminating in Lumumba's assassination at the hands of the Katangan secessionists in January 1961.

Dictatorship. In 1965 Mobutu, by then commander-in-chief of the army, seized control of the country and declared himself its president and head of the sole political party. In 1971 he renamed the country the Republic of Zaire. Once prosperous, the country markedly declined. Rampant corruption and abuse of the civilian population ensued. The need for change was widely understood; various political parties were organized, presidential elections were held and social justice programs were initiated. The Sovereign National Conference in 1992 brought together more than two thousand representatives from various political parties and NGOs.

The Congo is Rich in Human and Natural Resources. It has the third largest population in Sub-Saharan Africa: 65.8 million. It has the second largest rain forest in the world. Precipitation is ample; it rains six to eight months of the year. Agriculture was profitable before the economy failed, accounting for 56.3% of GDP. Main cash crops include coffee, palm oil, rubber, cotton, sugar, tea and cocoa. But the revenue collected from agriculture and farming has greatly diminished in the past decade and is now only 15% of GDP. The DRC is rich in a variety of minerals: copper, cobalt, diamond, gold, zinc, oil, uranium, columbite/tantalite (coltan, an essential material for cell phones and other electronics) and other rare metals. Traditionally, one mining company in upper Katanga, Gecamines, has dominated mining. Copper and cobalt accounted for 75% of total export revenues and about 25% of the country's GDP. The DRC was the world's fourth-largest producer of industrial diamonds during the 1980s. Despite the abundance of resources, the DRC is one of the poorest countries in the world. The country's official economy has collapsed in the last few decades due to hyperinflation, mismanagement and corruption, war, conflict and general instability, political crisis and economic dislocation. Moreover, the spread of HIV/AIDS has contributed to an overall deterioration. As the DRC is hit by the global economic downturn, exports (lumber, oil, diamonds and other ores in particular) have declined, whereas the high costs for imports of most basic needs remain unchanged. The consequence is an acute deterioration of the balance of trade and the collapse of foreign investment. The DRC's foreign debt stands at over $10 billion. M. R. M. B.

Decade of Conflict. In May 1997, Laurent-Désiré Kabila, leader of a rebel movement supported by neighboring countries, challenged Mobutu and forced him to leave the country. Kabila seized control, declared himself president and renamed the country the Democratic Republic of Congo. After Kabila was assassinated in January 2001, power was transferred by appointment to his son, Joseph Kabila. In 2006, for only the second time in 46 years, the Congolese voted in a presidential election. Kabila won the election against his opponent Bemba, a result that sparked riots and renewed civil conflict.

From independence in 1960 to the present, instability has prevailed in the DRC. Although significant attempts have been made to stabilize the political and military establishments, the Congolese people still live in an all-pervasive state of insecurity. This has made a shambles of the economy and of social conditions. The poorest, as always, are the most affected.

Since 1998, an estimated 3.3 million people, mostly women, children and the elderly, have been killed as a result of armed conflicts. Another 2.3 million, according to NGO reports in 2003, are homeless. The wars caused a drastic increase in the number of orphans, helping to create the gruesome phenomenon of child soldiers.

The wars also exacerbated ethnic tensions over land and territory in Eastern Congo, posing a long-term challenge for the transition to peace. Because of domestic conflicts in the neighboring countries – Rwanda, Burundi, Uganda, Sudan, the Central African Republic and Angola – many civilian refugees and displaced soldiers fled to and infiltrated the DRC. Some insurgent groups attacking contiguous countries use the DRC as their base. This created regional tensions and deteriorated the DRC's relationships with its neighbors. In the Eastern DRC, violence erupted between Congolese and the newcomers, a conflict further inflamed by local ethnic tensions; in the Kivu region, Congolese militias (Mai-Mai) still fight to protect their land. During the wars, the spread of HIV/AIDS drastically increased, affecting all aspects of social, economic and political life. Many factors have contributed to the rapid spread of HIV/AIDS in the DRC, including poverty, lack of education, cultural norms, and war. Women and girls are raped and sexually exploited by the military in their own homes. Poverty drives some girls into prostitution, which increases their risk of becoming infected. Although some NGOs are focusing on the situation of women and girls, especially in the post-conflict period, little has been done; women and girls remain defenseless. Recently, international resources have become available to fight HIV/AIDS, but the funds are not being used properly.

It is crucial to establish a new order: a new, uncorrupted and disciplined government, capable of improving the living conditions of the average Congolese. As a precondition, the DRC must hold fair democratic elections. The future government must focus on education; child education should become the number one priority. Be educated or perish. It is mandatory to shift priorities from military security to people's social welfare and development. Political corruption must be rooted out and human rights violations must be dealt with, but everything depends on the eradication of poverty.

Commitment of all parties is needed: The DRC government, leaders of political movements and civil society, administrators, professionals, workers, in brief the Congolese citizenry on all levels. Men and women, adults as well as youth must be involved in the process of change. Local services, churches, NGOs, and international organizations must cooperate in support of political change.

The fight against poverty starts with properly managing available financial resources and discouraging corruption. The annual budget must be voted on, the budget plan respected, and expenditures disciplined and limited. Regular auditing of all economic activity should be mandatory.

Corruption occurs because individuals cannot satisfy their basic needs (food, health care, clothing, education, employment, wages, etc.). In order to prevent corruption, the government should proceed with the following steps:

The private sector and the national organizations must be encouraged to create more jobs.

Workers in both private and public sectors should get paid on a regular basis. The wage rates should be based on the work experience and educational background of the worker. The minimum wage must cover expenditures for basic needs.

Salaries must be readjusted and periodically augmented, regardless of boom-bust cycles.

Taxes must be used to rebuild infrastructure. People need to be educated to pay their taxes, which should be understood as constructive contributions to social welfare.

Taxes should be increased on natural resources and unearned incomes, and decreased on earned incomes from production.

Finally, the government should address the tragic violation of human rights. People must be taught their human rights and trained to apply these rights in the appropriate situations. For example, people need to report human rights violations, discrimination and injustice, and to defend themselves against sexual harassment. A strong, functional judicial system must be established. People must understand and believe that human rights abuses will not be tolerated in the Democratic Republic of the Congo.

Marie Rose Mukeni Beya, Ph.D. is a psychologist specializing in child development. Prior to coming to the US, she was head of the Psychology Dept. at the University of Kinshasa. She currently teaches Georgist economics at the Henry George School in New York. She is fluent in French, English, Swahili, Lingala, and Tshiluba.

Read more:

Socio-Economic Collapse in the Congo: Causes and Solutions

Posted in Socio-economic Collapse | Comments Off on Socio-Economic Collapse in the Congo: Causes and Solutions

Wirehead hedonism versus paradise-engineering

Posted: at 3:47 pm

“The mind is its own place, and in itself Can make a Heav’n of Hell, a Hell of Heaven” – Satan, in Milton’s Paradise Lost

Far-fetched? Right now, the abolitionist project sounds fanciful. The task of redesigning our legacy-wetware still seems daunting. Rewriting the vertebrate genome, and re-engineering the global ecosystem, certainly pose immense scientific challenges even to a technologically advanced civilisation.

The ideological obstacles to a happy world, however, are more formidable still. For we’ve learned how to rationalise the need for mental pain – even though its nastier varieties blight innumerable lives, and even though its very existence will soon become optional.

Today, any scientific blueprint for getting rid of suffering via biotechnology is likely to be reduced to one of two negative stereotypes. Both stereotypes are disturbing, pervasive, and profoundly ill-conceived. Together, they impoverish our notion of what a Post-Darwinian regime of life-long happiness might be like; and delay its prospect.

Rats, of course, have a very poor image in our culture. Our mammalian cousins are still widely perceived as “vermin”. Thus the sight of a blissed-out, manically self-stimulating rat does not inspire a sense of vicarious happiness in the rest of us. On the contrary, if achieving invincible well-being entails launching a program of world-wide wireheading – or its pharmacological and/or genetic counterparts – then most of us will recoil in distaste.

Yet the Olds’ rat, and the image of electronically-triggered bliss, embody a morally catastrophic misconception of the landscape of options for paradise-engineering in the aeons ahead. For the varieties of genetically-coded well-being on offer to our successors needn’t be squalid or self-centred. Nor need they be insipid, empty and amoral à la Huxley’s Brave New World. Our future modes of well-being can be sublime, cerebral and empathetic – or take forms hitherto unknown.

Instead of being toxic, such exotically enriched states of consciousness can be transformed into the everyday norm of mental health. When it’s precision-engineered, hedonic enrichment needn’t lead to unbridled orgasmic frenzy. Nor need hedonic enrichment entail getting stuck in a wirehead rut. This is partly because in a naturalistic setting, even the crudest dopaminergic drugs tend to increase exploratory behaviour, will-power and the range of stimuli an organism finds rewarding. Novelty-seeking is normally heightened. Dopaminergics aren’t just euphoriants: they also enhance “incentive-motivation”. On this basis, our future is likely to be more diverse, not less.

Perhaps surprisingly too, controlled euphoria needn’t be inherently “selfish” – i.e. hedonistic in the baser, egoistic sense. Non-neurotoxic and sustainable analogues of empathogen hug-drugs like MDMA (“Ecstasy”) – which releases a lot of extra serotonin, dopamine and pro-social oxytocin – may potentially induce extraordinary serenity, empathy and love for others. An arsenal of cognitive enhancers will allow us to be smarter too. For feeling blissful isn’t the same as being “blissed-out”.

Ultimately, however, using drugs or electrodes for psychological superhealth is arguably no better than taking medicines to promote physical superhealth. Such interventions can serve only as dirty and inelegant stopgaps. In an ideal world, our emotional, intellectual and physical well-being would be genetically predestined. A capacity for sustained bliss may be a design-feature of any Post-Darwinian mind. Indeed some futurists predict we will one day live in a paradise where suffering is physiologically inconceivable – a world where we can no more imagine what it is like to suffer than we can presently imagine what it is like to be a bat.

Technofantasy? Quite possibly. Today it is sublime bliss that is effectively inconceivable to most of us.

Olds mapped the whole brain. Stimulation of some areas – the periaqueductal grey matter, for instance – proved aversive: an animal will work hard to avoid it. “Aversive” is probably a euphemism: electrical pulses to certain neural pathways may be terrifying or excruciating. Euphemisms aside, our victims are being tortured.

Happily, more regions in the brain are rewarding to stimulate than are unpleasant. Yet electrical stimulation of most areas, including the great bulk of the neocortex, is motivationally neutral.

One brain region in particular does seem especially enjoyable to stimulate: the medial forebrain bundle. The key neurons in this bundle originate in the ventral tegmental area (VTA) of the midbrain. VTA neurons manufacture the catecholamine neurotransmitter dopamine. Dopamine is transported down the length of the neuron, packaged in synaptic vesicles, and released into the synapse. Crucially, VTA neuronal pathways project to the nucleus accumbens. VTA dopaminergic neurons are under continuous inhibition by the gamma-aminobutyric acid (GABA) system.

In recent years, a convergence of neuropharmacological evidence, clinical research, and electrical stimulation experiments has led many researchers to endorse some version of the “final common pathway” hypothesis of reward. There are anomalies and complications which the final-common-pathway hypothesis still has to account for. Any story which omits the role and complex interplay of, say, “the love hormone”, oxytocin; the “chocolate amphetamine”, phenylethylamine; the glutamate system; the multiple receptor sub-types of serotonin, noradrenaline and the opioid families; and most crucially of all, the intra-cellular post-synaptic cascade within individual neurons, is going to be simplistic. Yet there is accumulating evidence that recreational euphoriants, clinically useful mood-brighteners, and perhaps all rewarding experiences critically depend on the mesolimbic dopamine pathway.

One complication is that pleasure and desire circuitry have intimately connected but distinguishable neural substrates. Some investigators believe that the role of the mesolimbic dopamine system is not primarily to encode pleasure, but “wanting” i.e. incentive-motivation. On this analysis, endomorphins and enkephalins – which activate mu and delta opioid receptors most especially in the ventral pallidum – are most directly implicated in pleasure itself. Mesolimbic dopamine, signalling to the ventral pallidum, mediates desire. Thus “dopamine overdrive”, whether natural or drug-induced, promotes a sense of urgency and a motivation to engage with the world, whereas direct activation of mu opioid receptors in the ventral pallidum induces emotionally self-sufficient bliss.

Certainly, the dopamine neurotransmitter is not itself the brain’s magic pleasure chemical. Only the intra-cellular cascades triggered by neurotransmitter binding to the post-synaptic receptor presumably hold the elusive, tantalising key to everlasting happiness; and they are not yet fully understood. But it’s notable that dopamine D2 receptor-blocking phenothiazines, for example, and other aversive drugs such as kappa opioid agonists, tend to inhibit activity, or increase the threshold of stimulation, in the mesolimbic dopamine system. Conversely, heroin and cocaine both mimic the effects of direct electrical stimulation of the reward-pathways.

Comparing the respective behavioural effects of heroin and cocaine is instructive. If rats or monkeys are hooked up to an intravenous source of heroin (or other potent mu opioid agonist such as fentanyl), the animals will happily self-administer the drug indefinitely; but they still find time to sleep and eat. If rats or monkeys have the opportunity to self-administer cocaine without limit, however, they will do virtually nothing else. They continue to push a drug-delivery lever for as long as they are physically capable of doing so. Within weeks, if not days, they will lose a substantial portion of their body weight – up to 40%. Within a month, they will be dead.

Humans don’t have this problem. So what keeps our mesolimbic dopamine and opioidergic systems so indolent? Why does a “hedonic treadmill” stop us escaping from a genetically-predisposed “set-point” of emotional ill-being? Why can’t social engineering, politico-economic reform or psychotherapy – as distinct from germ-line genetic re-writes – make us durably happy?

Evolutionary biology provides some plausible answers. A capacity to experience many different flavours of unhappiness – and short-lived joys too – was adaptive in the ancestral environment. Anger, fear, disgust, sadness, anxiety and other core emotions each played a distinctive information-theoretic role, enhancing the reproductive success of our forebears. Thus at least a partial explanation of endemic human misery today lies in ancient selection pressure and the state of the unreconstructed vertebrate genome. Selfish DNA makes its throwaway survival-machines feel discontented a lot of the time. A restless discontent is typically good for promoting its “inclusive fitness”, even if it’s bad news for us. Nature simply doesn’t care; and God has gone missing, presumed dead.

On the African savannah, naturally happy and un-anxious creatures typically got outbred or eaten or both. Rank theory suggests that the far greater incidence of the internalised correlate of the yielding sub-routine, depression, reflects how low spirits were frequently more adaptive among group-living organisms than manic self-assertion. Group living can be genetically adaptive for the individual members of the tribe in a predator-infested environment, but we’ve paid a very high psychological price.

Whatever the origins of malaise, a web of negative feedback mechanisms in the CNS conspires to prevent well-being – and (usually) extreme ill-being – from persisting for very long.

Life-enriching emotional superhealth will depend on subverting these homeostatic mechanisms. The hedonic set-point around which our lives fluctuate can be genetically switched to a far higher altitude plateau of well-being.
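
To make the set-point idea concrete, here is a minimal illustrative sketch in Python of a first-order homeostatic mood model. It is mine, not the article's, and every name and number in it (simulate_mood, set_point, gain, shock) is a hypothetical stand-in: negative feedback pulls mood back toward a baseline after any perturbation, and raising the baseline parameter shifts the whole trajectory to a higher plateau without removing the feedback.

    # Illustrative toy model only: a first-order homeostatic "hedonic set-point".
    # set_point stands in for the hypothetical genetic baseline; gain is the
    # strength of the negative feedback; shock is a one-off windfall at step 3.
    def simulate_mood(set_point, steps=10, gain=0.5, shock=3.0):
        mood = set_point
        trajectory = []
        for t in range(steps):
            if t == 3:
                mood += shock                      # lottery win, new romance...
            mood += gain * (set_point - mood)      # feedback pulls mood back
            trajectory.append(round(mood, 2))
        return trajectory

    print(simulate_mood(set_point=0.0))   # ordinary baseline: the shock decays away
    print(simulate_mood(set_point=5.0))   # raised baseline: same dynamics, higher plateau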

At the most immediate level, firing in the neurons of the ventral tegmental area is held in check mainly by gamma-aminobutyric acid (GABA), the major inhibitory neurotransmitter in the vertebrate central nervous system. Opioids act to diminish the braking action of GABA on the dopaminergic neurons of the VTA. In consequence, VTA neurons release more dopamine in the nucleus accumbens. The reuptake of dopamine in the nucleus accumbens is performed by the dopamine transporter. The transporter is blocked by cocaine. Dopamine reuptake inhibition induces euphoria, augmented by activation of the sigma1 receptors. [Why? We don’t know. Science has no understanding of why sentience – or insentience for that matter – exists at all.] Amphetamines block the dopamine transporter too; but they also act directly on the dopaminergic neurons and promote neurotransmitter release.

The mesolimbic dopamine pathway passes from the VTA to the nucleus accumbens and ascends to the frontal cortex where it innervates the higher brain. This architecture is explicable in the light of evolution. Raw limbic emotional highs and lows – in the absence of represented objects, events or properties to be (dis)satisfied about – would be genetically useless to the organism. To help self-replicating DNA differentially leave more copies of itself, the textures of subjective niceness and nastiness must infuse our representations of the world, and – by our lights – the world itself. Hedonic tone must be functionally coupled to motor-responses initiated on the basis of the perceived significance of the stimulus to the organism, and of the anticipated consequences – adaptively nice or nasty – of simulations of alternative courses of action that the agent can perform. Natural selection has engineered the “encephalisation of emotion”. We often get happy, sad or worried “about” the most obscure notions. One form this encephalisation takes is our revulsion at the prospect of turning ourselves into undignified wirehead rats – or soma-pacified dupes of a ruling elite. Both scenarios strike us as too distasteful to contemplate.

In any case, wouldn’t we get bored of life-long bliss?

Apparently not. That’s what’s so revealing about wireheading. Unlike food, drink or sex, the experience of pleasure itself exhibits no tolerance, even though our innumerable objects of desire certainly do so. Thus we can eventually get bored of anything – with a single exception. Stimulation of the pleasure-centres of the brain never palls. Fire them in the right way, and boredom is neurochemically impossible. Its substrates are missing. Electrical stimulation of the mesolimbic dopamine system is more intensely rewarding than eating, drinking, and love-making; and it never gets the slightest bit tedious. It stays exhilarating. The unlimited raw pleasure conjured up by wirehead bliss certainly inspires images of monotony in the electrode-naïve outsider; but that’s a different story altogether.

Yet are wireheading or supersoma really the only ways to ubiquitous ecstasy? Or does posing the very question reflect our stunted conception of the diverse family of paradise-engineering options in prospect?

This question isn’t an exercise in idle philosophising. As molecular neuroscience advances, not just boredom, but pain, terror, disgust, jealousy, anxiety, depression, malaise and any form of unpleasantness are destined to become truly optional. Their shifting gradients played a distinct information-theoretic role in the lives of our ancestors in the ancestral environment of adaptation. But their individual textures (i.e. “what it feels like”, “qualia”) can shortly be either abolished or genetically shifted to a more exalted plane of well-being instead. Our complicity in their awful persistence, and ultimately our responsibility for sustaining and creating them in the living world, is destined to increase as the new reproductive technologies mature and the revolution in post-genomic medicine unfolds. The biggest obstacles to a cruelty-free world – a world cured of any obligate suffering – are ideological, not technical. Yet whatever the exact time-scale of its replacement, in evolutionary terms we are on the brink of a Post-Darwinian Transition.

Natural selection has previously been “blind”. Complications aside, genetic mutations and meiotic shufflings are quasi-random i.e. random with respect to what is favoured by natural selection. Nature has no capacity for foresight or contingency-planning. During the primordial Darwinian Era of life on Earth, selfishness in the technical genetic sense has closely overlapped with selfishness in the popular sense: they are easily confused, and indeed selfishness in the technical sense is unavoidable. But in the new reproductive era – where (suites of) alleles will be societally chosen and actively designed by quasi-rational agents in anticipation of their likely behavioural effects – the character of fitness-enhancing traits will be radically different.

For a start, the elimination of such evolutionary relics as the ageing process will make any form of (post-)human reproduction on earth – whether sexual or clonal – a relatively rare and momentous event. It’s likely that designer post-human babies will be meticulously pre-planned. The notion that all reproductive decisions will be socially regulated in a post-ageing world is abhorrent to one’s libertarian instincts; but if they weren’t regulated, then the Earth would soon simply exceed its carrying capacity – whether it is 15 billion people or even 150 billion. If reproduction on earth does cease to be a personal affair and becomes a (democratically accountable?) state-sanctioned choice, then a major shift in the character of typically adaptive behavioural traits will inevitably occur. Taking a crude genes’ eye-view, a variant allele coding for, say, enhanced oxytocin expression, or a sub-type of serotonin receptor predisposing to unselfishness in the popular sense, will actually carry a higher payoff in the technical selfish sense – hugely increasing the likelihood that such alleles and their customised successors will be differentially pre-selected in preference to alleles promoting, say, anti-social behaviour.

Told like this, of course, the neurochemical story is a simplistic parody. It barely even hints at the complex biological, socio-economic and political issues at stake. Just who will take the decisions, and how? What will be the role in shaping post-human value systems, not just of exotic new technologies, but of alien forms of emotion whose metabolic pathways and substrates haven’t yet been disclosed to us? What kinds, if any, of inorganic organisms or non-DNA-driven states of consciousness will we want to design and implement? What will be the nature of the transitional era – when our genetic mastery of emotional mind-making is still incomplete? How can we be sure that unknown unknowns won’t make things go wrong? True, Darwinian life may often be dreadful, but couldn’t botched paradise-engineering make it even worse? And even if it couldn’t, might not there be some metaphysical sense in which life in a blissful biosphere could still be morally “wrong” – even if it strikes its inhabitants as self-evidently right?

Unfortunately, we will only begin to glimpse the implications of Post-Darwinism when paradise-engineering becomes a mature scientific discipline and mainstream research tradition. Yet as the vertebrate genome is rewritten, the two senses of “selfish” will foreseeably diverge. Today they are easily conflated. A tendency to quasi-psychopathic callousness to other sentient beings did indeed enhance the inclusive fitness of our DNA in the evolutionary past. In the new reproductive era, such traits are potentially maladaptive. They may even disappear as the Reproductive Revolution matures.

The possibility that we will become not just exceedingly happier, but nicer, may sound too good to be true. Perhaps we’ll just become happier egotists – in every sense. But if a genetic predisposition to niceness becomes systematically fitness-enhancing, then genetic selfishness – in the technical sense of “selfish” – ensures that benevolence will not just triumph; it will also be evolutionarily stable, in the games-theory sense, against “defectors”.

Needless to say, subtleties and technical complexities abound here. The very meaning of being “nice” to anyone or anything, for instance, is changed if well-being becomes a generic property of mental life. Either way, once suffering becomes biologically optional, then only sustained and systematic malice towards others could allow us to perpetuate it for ever. And although today we may sometimes be spiteful, there is no evidence that institutionalised malevolence will prevail.

From an ethical perspective, the task of hastening the Post-Darwinian Transition has a desperate moral urgency – brought home by studying just how nasty “natural” pain can be. Those who would resist the demise of unpleasantness may be asked: is it really permissible to compel others to suffer when any form of distress becomes purely optional? Should the metabolic pathways of our evolutionary past be forced on anyone who prefers an odyssey of life-long happiness instead? If so, what means of coercion should be employed, and by whom?

Or is paradise-engineering the only morally serious option? And much more fun.


Continue reading here:

Wirehead hedonism versus paradise-engineering

Posted in Hedonism | Comments Off on Wirehead hedonism versus paradise-engineering

Articles about Space Exploration – latimes

Posted: July 21, 2016 at 2:17 am

SCIENCE

July 18, 2013 | By Louis Sahagun

More than a hundred explorers, scientists and government officials will gather at Long Beach’s Aquarium of the Pacific on Friday to draft a blueprint to solve a deep blue problem: About 95% of the world’s oceans remains unexplored. The invitation-only forum, hosted by the aquarium and the National Oceanic and Atmospheric Administration, aims to identify priorities, technologies and collaborative strategies that could advance understanding of the uncharted mega-wilderness that humans rely on for oxygen, food, medicines, commerce and recreation.

SCIENCE

June 12, 2013 | By Brad Balukjian

Dancer, rapper, and, oh yeah, Man on the Moon: Buzz Aldrin is talking, but are the right people listening? One of the original moonwalkers (“Michael Jackson always did it backwards!” Aldrin complained) challenged the United States to pick up the space slack Tuesday evening, mere hours after China sent three astronauts into orbit. Speaking in front of a friendly crowd of 880 at the Richard Nixon Presidential Library and Museum in Yorba Linda, Aldrin criticized the U.S. for not adequately leading the international community in space exploration, and suggested that we bump up our federal investment in space while still encouraging the private sector’s efforts.

ENTERTAINMENT

February 2, 2013 | By Holly Myers

It will come as news to many, no doubt, that there is a Warhol on the moon. And a Rauschenberg and an Oldenburg – a whole “Moon Museum,” in fact, containing the work of six artists in all, in the form of drawings inscribed on the surface of a ceramic chip roughly the size of a thumbprint. Conceived by the artist Forrest Myers in 1969, the chip was fabricated in collaboration with scientists at Bell Laboratories and illicitly slipped by a willing engineer between some sheets of insulation on the Apollo 12 lander module.

WORLD

January 29, 2013 | By Patrick J. McDonnell and Ramin Mostaghim

BEIRUT – U.S. officials are not exactly welcoming Iran’s revelation this week that the Islamic Republic has sent a monkey into space and brought the creature back to Earth safely. The report by Iranian media recalled for many the early days of space flight, when both the United States and the Soviet Union launched animal-bearing spacecraft as a prelude to human space travel. But State Department spokeswoman Victoria Nuland told reporters in Washington on Monday that the reported mission raises concerns about possible Iranian violations of a United Nations ban on development of ballistic missiles capable of delivering nuclear weapons.

CALIFORNIA | LOCAL

December 22, 2012 | By Scott Gold, Los Angeles Times

WATERTON CANYON, Colo. – The concrete-floored room looks, at first glance, like little more than a garage. There is a red tool chest, its drawers labeled: “Hacksaws.” “Allen wrenches.” There are stepladders and vise grips. There is also, at one end of the room, a half-built spaceship, and everyone is wearing toe-to-fingertip protective suits. “Don’t. Touch. Anything.” Bruce Jakosky says the words politely but tautly, like a protective father – which, effectively, he is. Jakosky is the principal investigator behind NASA’s next mission to Mars, putting him in the vanguard of an arcane niche of science: planetary protection – the science of exploring space without messing it up. As NASA pursues the search for life in the solar system, the cleanliness of robotic explorers is crucial to avoid contaminating other worlds.

SCIENCE

December 6, 2012 | By Amina Khan and Rosie Mestel, Los Angeles Times

Years of trying to do too many things with too little money have put NASA at risk of ceding leadership in space exploration to other nations, according to a new report that calls on the space agency to make wrenching decisions about its long-term strategy and future scope. As other countries – including some potential adversaries – are investing heavily in space, federal funding for NASA is essentially flat and under constant threat of being cut. Without a clear vision, that fiscal uncertainty makes it all the more difficult for the agency to make progress on ambitious goals like sending astronauts to an asteroid or Mars while executing big-ticket science missions, such as the $8.8-billion James Webb Space Telescope, says the analysis released Wednesday by the National Research Council.

Read more:

Articles about Space Exploration – latimes

Posted in Space Exploration | Comments Off on Articles about Space Exploration – latimes

Singularity – RationalWiki

Posted: July 18, 2016 at 3:37 pm

There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles–all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.

A singularity is a sign that your model doesn’t apply past a certain point, not infinity arriving in real life.

A singularity, as most commonly used, is a point at which expected rules break down. The term comes from mathematics, where a point at which a function or its slope becomes undefined or blows up to infinity – a sudden break in an otherwise well-behaved curve – is known as a singularity.
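
As a toy illustration of that mathematical sense (my example, not RationalWiki's), take f(x) = 1/x: the closer x gets to 0, the larger the value grows, and at x = 0 the expression is simply undefined, so the rule "breaks" there rather than delivering a real-world infinity.

    # Toy illustration: f(x) = 1/x has a singularity at x = 0.
    for x in (1.0, 0.1, 0.01, 0.001):
        print(f"f({x}) = {1 / x}")
    # At x = 0 the expression is undefined; the rule "divide 1 by x" stops applying,
    # which is the sense of "singularity" the futurists borrow.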

The term has extended into other fields; the most notable use is in astrophysics, where a singularity is a point (usually, but perhaps not exclusively, at the center of a black hole) where the curvature of spacetime approaches infinity.

This article, however, is not about the mathematical or physics uses of the term, but rather the borrowing of it by various futurists. They define a technological singularity as the point beyond which we can know nothing about the world. So, of course, they then write at length on the world after that time.

It’s intelligent design for the IQ 140 people. This proposition that we’re heading to this point at which everything is going to be just unimaginably different – it’s fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can’t obscure that fact for me, no matter what numbers he marshals in favor of it. He’s very good at having a lot of curves that point up to the right.

In transhumanist belief, the “technological singularity” refers to a hypothetical point beyond which human technology and civilization are no longer comprehensible to the current human mind. The theory of technological singularity states that at some point in time humans will invent a machine that, through the use of artificial intelligence, will be smarter than any human could ever be. This machine in turn will be capable of inventing new technologies that are smarter still. This event will trigger an exponential explosion of technological advances whose outcome and effect on humankind is heavily debated by transhumanists and singularitarians.

Many proponents of the theory believe that the machines will eventually see no use for humans on Earth and simply wipe us out; their intelligence being far superior to ours, there would probably be nothing we could do about it. They also fear that the use of extremely intelligent machines to solve complex mathematical problems may lead to our extinction: the machine may theoretically respond to our question by turning all matter in our solar system or our galaxy into a giant calculator, thus destroying all of humankind.

Critics, however, believe that humans will never be able to invent a machine that will match human intelligence, let alone exceed it. They also attack the methodology that is used to “prove” the theory by suggesting that Moore’s Law may be subject to the law of diminishing returns, or that other metrics used by proponents to measure progress are totally subjective and meaningless. Theorists like Theodore Modis argue that progress measured in metrics such as CPU clock speeds is decreasing, refuting Moore’s Law[3]. (As of 2015, not only is Moore’s Law beginning to stall, but Dennard scaling is long dead; the returns in raw compute power from adding transistors diminish as we use more and more of them; there are also Amdahl’s Law and Wirth’s Law to take into account; and raw computing power simply doesn’t translate linearly into real marginal utility. Even then, we still haven’t taken into account the fundamental limitations of conventional computing architecture. Moore’s Law suddenly doesn’t look like the panacea for our problems, does it?)
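
To make one strand of that argument concrete, here is a small Python sketch of Amdahl's Law, a standard formula (the function name and the sample numbers are mine, not the article's): the speedup from adding processors is capped by whatever fraction of a task must stay serial, which is one reason piling on raw compute yields diminishing real-world returns.

    # Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
    # p = fraction of the workload that can be parallelised, n = number of processors.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 16, 1024):
        print(f"{n:>5} processors, 95% parallel: {amdahl_speedup(0.95, n):5.2f}x speedup")
    # Even with 1024 processors the speedup stays below 1 / (1 - 0.95) = 20x,
    # because the serial 5% of the work never gets any faster.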

Transhumanist thinkers see a chance of the technological singularity arriving on Earth within the twenty-first century, a concept that most[Who?] rationalists either consider a little too messianic in nature or ignore outright. Some of the wishful thinking may simply be the expression of a desire to avoid death, since the singularity is supposed to bring the technology to reverse human aging, or to upload human minds into computers. However, recent research, supported by singularitarian organizations including MIRI and the Future of Humanity Institute, does not support the hypothesis that near-term predictions of the singularity are motivated by a desire to avoid death, but instead provides some evidence that many optimistic predictions about the timing of a singularity are motivated by a desire to “gain credit for working on something that will be of relevance, but without any possibility that their prediction could be shown to be false within their current career”.[4][5]

Don’t bother quoting Ray Kurzweil to anyone who knows a damn thing about human cognition or, indeed, biology. He’s a computer science genius who has difficulty in perceiving when he’s well out of his area of expertise.[6]

Eliezer Yudkowsky identifies three major schools of thinking when it comes to the singularity.[7] While all share common ground in advancing intelligence and rapidly developing technology, they differ in how the singularity will occur and the evidence to support the position.

Under this school of thought, it is assumed that change and the development of technology and human (or AI-assisted) intelligence will accelerate at an exponential rate: change over the past decade was much faster than change a century ago, which in turn was faster than change a millennium ago. While thinking in exponential terms can lead to predictions about the future and the developments that will occur, it does mean that past events are an unreliable source of evidence for making these predictions.
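
A back-of-the-envelope sketch (my arithmetic, not the article's) shows what that assumption implies: with a fixed doubling time, each period on the curve contains roughly as much change as all the periods before it combined.

    # Illustrative arithmetic for exponential "accelerating change":
    # with a fixed 10-year doubling time, growth over a horizon is 2 ** (years / 10).
    doubling_time_years = 10
    for years in (10, 50, 100):
        factor = 2 ** (years / doubling_time_years)
        print(f"over {years:>3} years: {factor:,.0f}x growth")
    # 10 years -> 2x, 50 years -> 32x, 100 years -> 1,024x: on this curve, each
    # decade adds more absolute change than the entire history of the curve before it.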

The “event horizon” school posits that the post-singularity world would be unpredictable. Here, the creation of a super-human artificial intelligence will change the world so dramatically that it would bear no resemblance to the current world, or even the wildest science fiction. This school of thought sees the singularity as a single-point event rather than a process; indeed, it is this thesis that spawned the term “singularity.” However, this view of the singularity does treat transhuman intelligence as some kind of magic.

This posits that the singularity is driven by a feedback cycle between intelligence-enhancing technology and intelligence itself. As Yudkowsky (who endorses this view) puts it: “What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces.” When this feedback loop of technology and intelligence begins to increase rapidly, the singularity is upon us.

There is also a fourth singularity school which is much more popular than the other three: It’s all a load of baloney![8] This position is not popular with high-tech billionaires.[9]

This is largely dependent on your definition of “singularity”.

The intelligence-explosion singularity is by far the most unlikely. According to present calculations, a hypothetical future supercomputer may well not be able to replicate a human brain in real time. We presently don’t even understand how intelligence works, and there is no evidence that intelligence is self-iterative in this manner; indeed, improvements in intelligence may become harder the smarter you already are, so that each further gain is more difficult to execute than the last. How much smarter than a human being it is even possible to be is itself an open question. Energy requirements are another issue; humans can run off Doritos and Mountain Dew, while supercomputers require vast amounts of energy to function. Unless such an intelligence can solve problems better than groups of humans, its greater intelligence may well not matter, as it may not be as efficient as groups of humans working together to solve problems.

Another major issue arises from the nature of intellectual development; if an artificial intelligence needs to be raised and trained, it may well take twenty years or more between generations of artificial intelligences to get further improvements. More intelligent animals seem to generally require longer to mature, which may put another limitation on any such “explosion”.

Accelerating change is questionable; in real life, the rate of patents per capita actually peaked in the 20th century, with a minor decline since then, despite the fact that human beings have become more intelligent and acquired superior tools. As noted above, Moore’s Law has been in decline, and outside the realm of computers the rate of improvement in other things has not been exponential – airplanes and cars continue to improve, but not at the ridiculous rate of computers. It is likely that once computers hit the physical limits of transistor density, their rate of improvement will fall off dramatically; already today, computers which are “good enough” continue to operate for many years, something which was unheard of in the 1990s, when old computers were rapidly and obviously obsoleted by new ones.

According to this point of view, the Singularity is a past event, and we live in a post-Singularity world.

The rate of advancement has actually been in decline in recent times: patents per capita have gone down, and the rate of increase of technology has declined rather than risen, though the basal rate is higher than it was in centuries past. According to this point of view, the intelligence explosion and the increasing rate of change already happened with computers, and now that everyone has handheld computing devices, the rate of increase is going to decline as we hit natural barriers in how much additional benefit we gain from additional computing power. The densification of transistors on microchips has slowed by about a third, and the absolute limit to transistors is approaching – a true, physical barrier which cannot be bypassed or broken, and beyond which an entirely different means of computing would be required to create a still denser microchip.

From the point of view of travel, humans have gone from walking to sailing to railroads to highways to airplanes, but communication has now reached the point where a lot of travel is obsolete – the Internet is omnipresent and allows us to effectively communicate with people on any corner of the planet without travelling at all. From this point of view, there is no further point of advancement, because we’re already at the point where we can be anywhere on the planet instantly for many purposes, and with improvements in automation, the amount of physical travel necessary for the average human being has declined over recent years. Instant global communication and the ability to communicate and do calculations from anywhere are a natural physical barrier, beyond which further advancement is less meaningful, as it is mostly just making things more convenient – the cost is already extremely low.

The prevalence of computers and communications devices has completely changed the world, as has the presence of cheap, high-speed transportation technology. The world of the 21st century is almost unrecognizable to people from the founding of the United States in the latter half of the 18th century, or even to people from the height of the industrial era at the turn of the 20th century.

Extraterrestrial technological singularities might become evident from acts of stellar or cosmic engineering. One such possibility, for example, would be the construction of Dyson spheres, which would alter a star’s electromagnetic spectrum in a way detectable from Earth. Both SETI and Fermilab have incorporated that possibility into their searches for alien life.[10][11]

A different view of the concept of singularity is explored in the science fiction book Dragon’s Egg by Robert Lull Forward, in which an alien civilization on the surface of a neutron star, being observed by human space explorers, goes from Stone Age to technological singularity in the space of about an hour in human time, leaving behind a large quantity of encrypted data for the human explorers that are expected to take over a million years (for humanity) to even develop the technology to decrypt.

No signs of extraterrestrial civilizations have been found as of 2016.

Read the rest here:

Singularity – RationalWiki

Posted in Singularity | Comments Off on Singularity – RationalWiki