Posted: September 18, 2016 at 8:12 am
I recently posted a revised draft of my forthcoming article, The Effect of Legislation on Fourth Amendment Interpretation, and I thought I would blog a bit about it. The article considers a recurring question in Fourth Amendment law: When courts are called on to interpret the Fourth Amendment, and there is privacy legislation on the books that relates to the government's conduct, should the existence of legislation have any effect on how the Fourth Amendment is interpreted? And if it should have an effect, what effect should it have?
I was led to this question by reading a lot of cases in which the issue came up and was answered in very different ways by particularly prominent judges. When I assembled all the cases, I found that judges had articulated three different answers. None of the judges seemed aware that the question had come up in other cases and had been answered differently there. Each of the three answers seemed plausible, and each tapped into important traditions in constitutional interpretation. So you have a pretty interesting situation: Really smart judges were running into the same question and answering it in very different ways, each rooted in substantial traditions, with no one approach predominating and no conversation about which approach was best. It seemed like a fun issue to explore in an article.
In this post I'll summarize the three approaches courts have taken. I call the approaches influence, displacement and independence. For each approach, I'll give one illustrative case. But there's a lot more where that came from: For more details on the three approaches and the cases supporting them, please read the draft article.
1. Influence. In the influence cases, legislation is considered a possible standard for judicial adoption under the Fourth Amendment. The influence cases rest on a pragmatic judgment: If courts must make difficult judgment calls about how to balance privacy and security, and legislatures have done so already in enacting legislation, courts can draw lessons from the thoughtful judgment of a co-equal branch. Investigative legislation provides an important standard for courts to consider in interpreting the Fourth Amendment. It's not binding on courts, but it's a relevant consideration.
The Supreme Court's decision in United States v. Watson is an example of the influence approach. Watson considered whether it is constitutionally reasonable for a postal inspector to make a public arrest for a felony offense based on probable cause but without a warrant. A federal statute expressly authorized such warrantless arrests. The court ruled that the arrests were constitutional without a warrant and that the statute was constitutional. Justice White's majority opinion relied heavily on deference to Congress's legislative judgment. According to Justice White, the statute authorizing the arrests represents a judgment by Congress that it is not unreasonable under the Fourth Amendment for postal inspectors to arrest without a warrant provided they have probable cause to do so. That judgment was entitled to presumptive deference as the considered judgment of a co-equal branch. Because there is a "strong presumption of constitutionality due to an Act of Congress," the court stated, "especially when it turns on what is reasonable," then "obviously the Court should be reluctant to decide that a search thus authorized by Congress was unreasonable and that the Act was therefore unconstitutional."
2. Displacement. In the displacement cases, the existence of legislation counsels against Fourth Amendment protection that might interrupt the statutory scheme. Because legislatures can often do a better job at balancing privacy and security in new technologies as compared to courts, courts should reject Fourth Amendment protection as long as legislatures are protecting privacy adequately to avoid interfering with the careful work of the legislative branch. The existence of investigative legislation effectively preempts the field and displaces Fourth Amendment protection that may otherwise exist.
Justice Alito's concurrence in Riley v. California is an example of the displacement approach. Riley held that the government must obtain a search warrant before searching a cellphone incident to a suspect's lawful arrest. Justice Alito concurred, agreeing with the majority only in the absence of adequate legislation regulating cellphone searches. "I would reconsider the question presented here," he wrote, "if either Congress or state legislatures, after assessing the legitimate needs of law enforcement and the privacy interests of cell phone owners, enact legislation that draws reasonable distinctions based on categories of information or perhaps other variables."
The enactment of investigative legislation should discourage judicial intervention, Justice Alito reasoned, because "[l]egislatures, elected by the people, are in a better position than we are to assess and respond to the changes that have already occurred and those that almost certainly will take place in the future." Although Fourth Amendment protection was necessary in the absence of legislation, the enactment of legislation might be reason to withdraw Fourth Amendment protection to avoid the "very unfortunate result" of federal courts "using the blunt instrument of the Fourth Amendment to try to protect privacy in emerging technologies."
3. Independence. In the independence cases, courts treat legislation as irrelevant to the Fourth Amendment. Legislatures are free to supplement privacy protections by enacting statutes, of course. But from the independence perspective, legislation sheds no light on what the Fourth Amendment requires. Courts must independently interpret the Fourth Amendment, and what legislatures have done has no relevance.
An example of independence is Virginia v. Moore, where the Supreme Court decided whether the search incident to a lawful arrest exception incorporates the state law of arrest. Moore was arrested despite a state law saying his crime could not lead to arrest; the question was whether the state law violation rendered the arrest unconstitutional. According to the court, whether state law made the arrest lawful was irrelevant to the Fourth Amendment. It was the court's duty to interpret the Fourth Amendment, and what the legislature decided about when arrests could be made was a separate question. History suggested that the Fourth Amendment did not incorporate statutes. And the state's decision of when to make arrests rested not on the Fourth Amendment but on other considerations, such as the costs of arrests and whether the legislature valued privacy more than the Fourth Amendment required. Constitutionalizing the state standard would only frustrate the state's efforts to achieve those goals, as it would mean "los[ing] control of the regulatory scheme" and might lead the state to abandon restrictions on arrest altogether. For that reason, the statute regulating the police was independent of the Fourth Amendment standard.
Those are the three approaches. The next question is, which is best? I'll offer some thoughts on that in my next post.
Posted: September 8, 2016 at 6:32 am
DNA damage resulting in multiple broken chromosomes
DNA repair is a collection of processes by which a cell identifies and corrects damage to the DNA molecules that encode its genome. In human cells, both normal metabolic activities and environmental factors such as radiation can cause DNA damage, resulting in as many as 1 million individual molecular lesions per cell per day. Many of these lesions cause structural damage to the DNA molecule and can alter or eliminate the cell's ability to transcribe the gene that the affected DNA encodes. Other lesions induce potentially harmful mutations in the cell's genome, which affect the survival of its daughter cells after it undergoes mitosis. As a consequence, the DNA repair process is constantly active as it responds to damage in the DNA structure. When normal repair processes fail, and when cellular apoptosis does not occur, irreparable DNA damage may occur, including double-strand breaks and DNA crosslinkages (interstrand crosslinks or ICLs). This can eventually lead to malignant tumors, or cancer, as per the two-hit hypothesis.
The rate of DNA repair is dependent on many factors, including the cell type, the age of the cell, and the extracellular environment. A cell that has accumulated a large amount of DNA damage, or one that no longer effectively repairs damage incurred to its DNA, can enter one of three possible states: an irreversible state of dormancy, known as senescence; cell suicide, also known as apoptosis or programmed cell death; or unregulated cell division, which can lead to the formation of a tumor that is cancerous.
The DNA repair ability of a cell is vital to the integrity of its genome and thus to the normal functionality of that organism. Many genes that were initially shown to influence life span have turned out to be involved in DNA damage repair and protection.
The 2015 Nobel Prize in Chemistry was awarded to Tomas Lindahl, Paul Modrich, and Aziz Sancar for their work on the molecular mechanisms of DNA repair processes.
DNA damage, due to environmental factors and normal metabolic processes inside the cell, occurs at a rate of 10,000 to 1,000,000 molecular lesions per cell per day. While this constitutes only 0.000165% of the human genome’s approximately 6 billion bases (3 billion base pairs), unrepaired lesions in critical genes (such as tumor suppressor genes) can impede a cell’s ability to carry out its function and appreciably increase the likelihood of tumor formation and contribute to tumour heterogeneity.
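The quoted figures can be sanity-checked with a few lines of arithmetic. The numbers below are the ones given in the text; the stated 0.000165% corresponds roughly to the low end of the lesion range:

```python
# Sanity check: daily DNA lesions as a fraction of the human genome,
# using the figures quoted in the text (approximate values).
genome_bases = 6_000_000_000                  # ~6 billion bases (3 billion base pairs)
lesions_low, lesions_high = 10_000, 1_000_000 # lesions per cell per day

pct_low = lesions_low / genome_bases * 100
pct_high = lesions_high / genome_bases * 100
print(f"{pct_low:.6f}% to {pct_high:.4f}% of bases affected per day")
# → 0.000167% to 0.0167% of bases affected per day
```

Even at the high end, well under a tenth of a percent of the genome is hit on any given day, which is why damage to specific critical genes such as tumor suppressors, rather than bulk damage, dominates the cancer risk.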
The vast majority of DNA damage affects the primary structure of the double helix; that is, the bases themselves are chemically modified. These modifications can in turn disrupt the molecules’ regular helical structure by introducing non-native chemical bonds or bulky adducts that do not fit in the standard double helix. Unlike proteins and RNA, DNA usually lacks tertiary structure and therefore damage or disturbance does not occur at that level. DNA is, however, supercoiled and wound around “packaging” proteins called histones (in eukaryotes), and both superstructures are vulnerable to the effects of DNA damage.
DNA damage can be subdivided into two main types:
The replication of damaged DNA before cell division can lead to the incorporation of wrong bases opposite damaged ones. Daughter cells that inherit these wrong bases carry mutations from which the original DNA sequence is unrecoverable (except in the rare case of a back mutation, for example, through gene conversion).
There are several types of damage to DNA due to endogenous cellular processes:
Damage caused by exogenous agents comes in many forms. Some examples are:
UV damage, alkylation/methylation, X-ray damage and oxidative damage are examples of induced damage. Spontaneous damage can include the loss of a base, deamination, sugar ring puckering and tautomeric shift.
In human cells, and eukaryotic cells in general, DNA is found in two cellular locations: inside the nucleus and inside the mitochondria. Nuclear DNA (nDNA) exists as chromatin during non-replicative stages of the cell cycle and is condensed into aggregate structures known as chromosomes during cell division. In either state the DNA is highly compacted and wound up around bead-like proteins called histones. Whenever a cell needs to express the genetic information encoded in its nDNA the required chromosomal region is unravelled, genes located therein are expressed, and then the region is condensed back to its resting conformation. Mitochondrial DNA (mtDNA) is located inside mitochondria organelles, exists in multiple copies, and is also tightly associated with a number of proteins to form a complex known as the nucleoid. Inside mitochondria, reactive oxygen species (ROS), or free radicals, byproducts of the constant production of adenosine triphosphate (ATP) via oxidative phosphorylation, create a highly oxidative environment that is known to damage mtDNA. A critical enzyme in counteracting the toxicity of these species is superoxide dismutase, which is present in both the mitochondria and cytoplasm of eukaryotic cells.
Senescence, an irreversible process in which the cell no longer divides, is a protective response to the shortening of the chromosome ends. The telomeres are long regions of repetitive noncoding DNA that cap chromosomes and undergo partial degradation each time a cell undergoes division (see Hayflick limit). In contrast, quiescence is a reversible state of cellular dormancy that is unrelated to genome damage (see cell cycle). Senescence in cells may serve as a functional alternative to apoptosis in cases where the organism requires the physical presence of a cell for spatial reasons; it serves as a "last resort" mechanism to prevent a cell with damaged DNA from replicating inappropriately in the absence of pro-growth cellular signaling. Unregulated cell division can lead to the formation of a tumor (see cancer), which is potentially lethal to an organism. Therefore, the induction of senescence and apoptosis is considered to be part of a strategy of protection against cancer.
It is important to distinguish between DNA damage and mutation, the two major types of error in DNA. DNA damages and mutation are fundamentally different. Damages are physical abnormalities in the DNA, such as single- and double-strand breaks, 8-hydroxydeoxyguanosine residues, and polycyclic aromatic hydrocarbon adducts. DNA damages can be recognized by enzymes, and, thus, they can be correctly repaired if redundant information, such as the undamaged sequence in the complementary DNA strand or in a homologous chromosome, is available for copying. If a cell retains DNA damage, transcription of a gene can be prevented, and, thus, translation into a protein will also be blocked. Replication may also be blocked or the cell may die.
In contrast to DNA damage, a mutation is a change in the base sequence of the DNA. A mutation cannot be recognized by enzymes once the base change is present in both DNA strands, and, thus, a mutation cannot be repaired. At the cellular level, mutations can cause alterations in protein function and regulation. Mutations are replicated when the cell replicates. In a population of cells, mutant cells will increase or decrease in frequency according to the effects of the mutation on the ability of the cell to survive and reproduce. Although distinctly different from each other, DNA damages and mutations are related because DNA damages often cause errors of DNA synthesis during replication or repair; these errors are a major source of mutation.
Given these properties of DNA damage and mutation, it can be seen that DNA damages are a special problem in non-dividing or slowly dividing cells, where unrepaired damages will tend to accumulate over time. On the other hand, in rapidly dividing cells, unrepaired DNA damages that do not kill the cell by blocking replication will tend to cause replication errors and thus mutation. The great majority of mutations that are not neutral in their effect are deleterious to a cell’s survival. Thus, in a population of cells composing a tissue with replicating cells, mutant cells will tend to be lost. However, infrequent mutations that provide a survival advantage will tend to clonally expand at the expense of neighboring cells in the tissue. This advantage to the cell is disadvantageous to the whole organism, because such mutant cells can give rise to cancer. Thus, DNA damages in frequently dividing cells, because they give rise to mutations, are a prominent cause of cancer. In contrast, DNA damages in infrequently dividing cells are likely a prominent cause of aging.
Single-strand and double-strand DNA damage
Cells cannot function if DNA damage corrupts the integrity and accessibility of essential information in the genome (but cells remain superficially functional when non-essential genes are missing or damaged). Depending on the type of damage inflicted on the DNA’s double helical structure, a variety of repair strategies have evolved to restore lost information. If possible, cells use the unmodified complementary strand of the DNA or the sister chromatid as a template to recover the original information. Without access to a template, cells use an error-prone recovery mechanism known as translesion synthesis as a last resort.
Damage to DNA alters the spatial configuration of the helix, and such alterations can be detected by the cell. Once damage is localized, specific DNA repair molecules bind at or near the site of damage, inducing other molecules to bind and form a complex that enables the actual repair to take place.
Cells are known to eliminate three types of damage to their DNA by chemically reversing it. These mechanisms do not require a template, since the types of damage they counteract can occur in only one of the four bases. Such direct reversal mechanisms are specific to the type of damage incurred and do not involve breakage of the phosphodiester backbone. The formation of pyrimidine dimers upon irradiation with UV light results in an abnormal covalent bond between adjacent pyrimidine bases. The photoreactivation process directly reverses this damage by the action of the enzyme photolyase, whose activation is obligately dependent on energy absorbed from blue/UV light (300–500 nm wavelength) to promote catalysis. Photolyase, an old enzyme present in bacteria, fungi, and most animals, no longer functions in humans, who instead use nucleotide excision repair to repair damage from UV irradiation. Another type of damage, methylation of guanine bases, is directly reversed by the protein methyl guanine methyl transferase (MGMT), the bacterial equivalent of which is called ogt. This is an expensive process because each MGMT molecule can be used only once; that is, the reaction is stoichiometric rather than catalytic. A generalized response to methylating agents in bacteria is known as the adaptive response and confers a level of resistance to alkylating agents upon sustained exposure by upregulation of alkylation repair enzymes. The third type of DNA damage reversed by cells is certain methylation of the bases cytosine and adenine.
When only one of the two strands of a double helix has a defect, the other strand can be used as a template to guide the correction of the damaged strand. In order to repair damage to one of the two paired molecules of DNA, there exist a number of excision repair mechanisms that remove the damaged nucleotide and replace it with an undamaged nucleotide complementary to that found in the undamaged DNA strand.
Double-strand breaks, in which both strands in the double helix are severed, are particularly hazardous to the cell because they can lead to genome rearrangements. Three mechanisms exist to repair double-strand breaks (DSBs): non-homologous end joining (NHEJ), microhomology-mediated end joining (MMEJ), and homologous recombination. PVN Acharya noted that double-strand breaks and a “cross-linkage joining both strands at the same point is irreparable because neither strand can then serve as a template for repair. The cell will die in the next mitosis or in some rare instances, mutate.”
In NHEJ, DNA Ligase IV, a specialized DNA ligase that forms a complex with the cofactor XRCC4, directly joins the two ends. To guide accurate repair, NHEJ relies on short homologous sequences called microhomologies present on the single-stranded tails of the DNA ends to be joined. If these overhangs are compatible, repair is usually accurate. NHEJ can also introduce mutations during repair. Loss of damaged nucleotides at the break site can lead to deletions, and joining of nonmatching termini forms insertions or translocations. NHEJ is especially important before the cell has replicated its DNA, since there is no template available for repair by homologous recombination. There are “backup” NHEJ pathways in higher eukaryotes. Besides its role as a genome caretaker, NHEJ is required for joining hairpin-capped double-strand breaks induced during V(D)J recombination, the process that generates diversity in B-cell and T-cell receptors in the vertebrate immune system.
MMEJ starts with short-range end resection by MRE11 nuclease on either side of a double-strand break to reveal microhomology regions. In further steps, PARP1 is required and may be an early step in MMEJ. There is pairing of microhomology regions followed by recruitment of flap structure-specific endonuclease 1 (FEN1) to remove overhanging flaps. This is followed by recruitment of XRCC1–LIG3 to the site for ligating the DNA ends, leading to intact DNA.
DNA double strand breaks in mammalian cells are primarily repaired by homologous recombination (HR) and non-homologous end joining (NHEJ). In an in vitro system, MMEJ occurred in mammalian cells at levels of 10–20% of HR when both HR and NHEJ mechanisms were also available. MMEJ is always accompanied by a deletion, so that MMEJ is a mutagenic pathway for DNA repair.
Homologous recombination requires the presence of an identical or nearly identical sequence to be used as a template for repair of the break. The enzymatic machinery responsible for this repair process is nearly identical to the machinery responsible for chromosomal crossover during meiosis. This pathway allows a damaged chromosome to be repaired using a sister chromatid (available in G2 after DNA replication) or a homologous chromosome as a template. DSBs caused by the replication machinery attempting to synthesize across a single-strand break or unrepaired lesion cause collapse of the replication fork and are typically repaired by recombination.
Topoisomerases introduce both single- and double-strand breaks in the course of changing the DNA’s state of supercoiling, which is especially common in regions near an open replication fork. Such breaks are not considered DNA damage because they are a natural intermediate in the topoisomerase biochemical mechanism and are immediately repaired by the enzymes that created them.
A team of French researchers bombarded Deinococcus radiodurans with ionizing radiation to study the mechanism of double-strand break DNA repair in that bacterium. At least two copies of the genome, with random DNA breaks, can form DNA fragments through annealing. Partially overlapping fragments are then used for synthesis of homologous regions through a moving D-loop that can continue extension until they find complementary partner strands. In the final step there is crossover by means of RecA-dependent homologous recombination.
Translesion synthesis (TLS) is a DNA damage tolerance process that allows the DNA replication machinery to replicate past DNA lesions such as thymine dimers or AP sites. It involves switching out regular DNA polymerases for specialized translesion polymerases (i.e. DNA polymerase IV or V, from the Y polymerase family), often with larger active sites that can facilitate the insertion of bases opposite damaged nucleotides. The polymerase switching is thought to be mediated by, among other factors, the post-translational modification of the replication processivity factor PCNA. Translesion synthesis polymerases often have low fidelity (high propensity to insert wrong bases) on undamaged templates relative to regular polymerases. However, many are extremely efficient at inserting correct bases opposite specific types of damage. For example, Pol η mediates error-free bypass of lesions induced by UV irradiation, whereas Pol ι introduces mutations at these sites. Pol η is known to add the first adenine across the T^T photodimer using Watson-Crick base pairing, and the second adenine is added in its syn conformation using Hoogsteen base pairing. From a cellular perspective, risking the introduction of point mutations during translesion synthesis may be preferable to resorting to more drastic mechanisms of DNA repair, which may cause gross chromosomal aberrations or cell death. In short, the process involves specialized polymerases either bypassing or repairing lesions at locations of stalled DNA replication. For example, human DNA polymerase eta can bypass complex DNA lesions like the guanine-thymine intra-strand crosslink G[8,5-Me]T, although it can cause targeted and semi-targeted mutations. Paromita Raychaudhury and Ashis Basu studied the toxicity and mutagenesis of the same lesion in Escherichia coli by replicating a G[8,5-Me]T-modified plasmid in E. coli with specific DNA polymerase knockouts.
Viability was very low in a strain lacking pol II, pol IV, and pol V, the three SOS-inducible DNA polymerases, indicating that translesion synthesis is conducted primarily by these specialized DNA polymerases. A bypass platform is provided to these polymerases by proliferating cell nuclear antigen (PCNA). Under normal circumstances, PCNA bound to polymerases replicates the DNA. At a site of lesion, PCNA is ubiquitinated, or modified, by the RAD6/RAD18 proteins to provide a platform for the specialized polymerases to bypass the lesion and resume DNA replication. After translesion synthesis, extension is required. This extension can be carried out by a replicative polymerase if the TLS is error-free, as in the case of Pol η, but if TLS results in a mismatch, a specialized polymerase, Pol ζ, is needed to extend it. Pol ζ is unique in that it can extend terminal mismatches, whereas more processive polymerases cannot. So when a lesion is encountered, the replication fork will stall, PCNA will switch from a processive polymerase to a TLS polymerase such as Pol η to fix the lesion, then PCNA may switch to Pol ζ to extend the mismatch, and last PCNA will switch to the processive polymerase to continue replication.
Cells exposed to ionizing radiation, ultraviolet light or chemicals are prone to acquire multiple sites of bulky DNA lesions and double-strand breaks. Moreover, DNA damaging agents can damage other biomolecules such as proteins, carbohydrates, lipids, and RNA. The accumulation of damage, to be specific, double-strand breaks or adducts stalling the replication forks, are among known stimulation signals for a global response to DNA damage. The global response to damage is an act directed toward the cells’ own preservation and triggers multiple pathways of macromolecular repair, lesion bypass, tolerance, or apoptosis. The common features of global response are induction of multiple genes, cell cycle arrest, and inhibition of cell division.
After DNA damage, cell cycle checkpoints are activated. Checkpoint activation pauses the cell cycle and gives the cell time to repair the damage before continuing to divide. DNA damage checkpoints occur at the G1/S and G2/M boundaries. An intra-S checkpoint also exists. Checkpoint activation is controlled by two master kinases, ATM and ATR. ATM responds to DNA double-strand breaks and disruptions in chromatin structure, whereas ATR primarily responds to stalled replication forks. These kinases phosphorylate downstream targets in a signal transduction cascade, eventually leading to cell cycle arrest. A class of checkpoint mediator proteins including BRCA1, MDC1, and 53BP1 has also been identified. These proteins seem to be required for transmitting the checkpoint activation signal to downstream proteins.
The DNA damage checkpoint is a signal transduction pathway that blocks cell cycle progression in G1, G2 and metaphase and slows the rate of S phase progression when DNA is damaged, pausing the cell cycle to allow the cell time to repair the damage before continuing to divide.
Checkpoint proteins can be separated into four groups: phosphatidylinositol 3-kinase (PI3K)-like protein kinases, the proliferating cell nuclear antigen (PCNA)-like group, two serine/threonine (S/T) kinases, and their adaptors. Central to all DNA damage-induced checkpoint responses is a pair of large protein kinases belonging to the first group, the PI3K-like protein kinases: the ATM (ataxia telangiectasia mutated) and ATR (ATM- and Rad3-related) kinases, whose sequence and functions have been well conserved in evolution. All DNA damage responses require either ATM or ATR because they have the ability to bind to the chromosomes at the site of DNA damage, together with accessory proteins that are platforms on which DNA damage response components and DNA repair complexes can be assembled.
An important downstream target of ATM and ATR is p53, as it is required for inducing apoptosis following DNA damage. The cyclin-dependent kinase inhibitor p21 is induced by both p53-dependent and p53-independent mechanisms and can arrest the cell cycle at the G1/S and G2/M checkpoints by deactivating cyclin/cyclin-dependent kinase complexes.
The SOS response is the changes in gene expression in Escherichia coli and other bacteria in response to extensive DNA damage. The prokaryotic SOS system is regulated by two key proteins: LexA and RecA. The LexA homodimer is a transcriptional repressor that binds to operator sequences commonly referred to as SOS boxes. In Escherichia coli it is known that LexA regulates transcription of approximately 48 genes including the lexA and recA genes. The SOS response is known to be widespread in the Bacteria domain, but it is mostly absent in some bacterial phyla, like the Spirochetes. The most common cellular signals activating the SOS response are regions of single-stranded DNA (ssDNA), arising from stalled replication forks or double-strand breaks, which are processed by DNA helicase to separate the two DNA strands. In the initiation step, RecA protein binds to ssDNA in an ATP hydrolysis driven reaction creating RecA–ssDNA filaments. RecA–ssDNA filaments activate LexA autoprotease activity, which ultimately leads to cleavage of the LexA dimer and subsequent LexA degradation. The loss of LexA repressor induces transcription of the SOS genes and allows for further signal induction, inhibition of cell division and an increase in levels of proteins responsible for damage processing.
In Escherichia coli, SOS boxes are 20-nucleotide-long sequences near promoters with palindromic structure and a high degree of sequence conservation. In other classes and phyla, the sequence of SOS boxes varies considerably, with different length and composition, but it is always highly conserved and one of the strongest short signals in the genome. The high information content of SOS boxes permits differential binding of LexA to different promoters and allows for timing of the SOS response. The lesion repair genes are induced at the beginning of the SOS response. The error-prone translesion polymerases, for example, UmuCD'2 (also called DNA polymerase V), are induced later on as a last resort. Once the DNA damage is repaired or bypassed using polymerases or through recombination, the amount of single-stranded DNA in cells is decreased; lowering the amount of RecA filaments decreases the cleavage activity of the LexA homodimer, which then binds to the SOS boxes near promoters and restores normal gene expression.
Eukaryotic cells exposed to DNA-damaging agents also activate important defensive pathways by inducing multiple proteins involved in DNA repair, cell cycle checkpoint control, protein trafficking, and degradation. Such a genome-wide transcriptional response is very complex and tightly regulated, allowing a coordinated global response to damage. Exposure of the yeast Saccharomyces cerevisiae to DNA-damaging agents results in overlapping but distinct transcriptional profiles. Similarities to the environmental shock response indicate that a general global stress response pathway exists at the level of transcriptional activation. In contrast, different human cell types respond to damage differently, indicating the absence of a common global response. The probable explanation for this difference between yeast and human cells lies in the heterogeneity of mammalian cells: in an animal, different types of cells are distributed among different organs that have evolved different sensitivities to DNA damage.
In general, the global response to DNA damage involves the expression of multiple genes responsible for postreplication repair, homologous recombination, nucleotide excision repair, the DNA damage checkpoint, global transcriptional activation, control of mRNA decay, and many other processes. A large amount of damage leaves a cell with an important decision: undergo apoptosis and die, or survive at the cost of living with a modified genome. An increase in tolerance to damage can lead to an increased rate of survival that allows a greater accumulation of mutations. Yeast Rev1 and human polymerase η are members of the Y family of translesion DNA polymerases present during the global response to DNA damage and are responsible for enhanced mutagenesis during this response in eukaryotes.
DNA repair rate is an important determinant of cell pathology
Experimental animals with genetic deficiencies in DNA repair often show decreased life span and increased cancer incidence. For example, mice deficient in the dominant NHEJ pathway and in telomere maintenance mechanisms get lymphoma and infections more often and, as a consequence, have shorter lifespans than wild-type mice. In a similar manner, mice deficient in a key repair and transcription protein that unwinds DNA helices show premature onset of aging-related diseases and consequent shortening of lifespan. However, not every DNA repair deficiency creates exactly the predicted effects: mice deficient in the NER pathway exhibited shortened life span without correspondingly higher rates of mutation.
If the rate of DNA damage exceeds the capacity of the cell to repair it, the accumulation of errors can overwhelm the cell and result in early senescence, apoptosis, or cancer. Inherited diseases associated with faulty DNA repair result in premature aging, increased sensitivity to carcinogens, and correspondingly increased cancer risk (see below). On the other hand, organisms with enhanced DNA repair systems, such as Deinococcus radiodurans, the most radiation-resistant known organism, exhibit remarkable resistance to the double-strand-break-inducing effects of radioactivity, likely due to enhanced efficiency of DNA repair and especially NHEJ.
Most life span influencing genes affect the rate of DNA damage
A number of individual genes have been identified as influencing variations in life span within a population of organisms. The effects of these genes are strongly dependent on the environment, in particular on the organism's diet. Caloric restriction reproducibly extends lifespan in a variety of organisms, likely via nutrient-sensing pathways and a decreased metabolic rate. The molecular mechanisms by which such restriction lengthens lifespan are as yet unclear; however, the behavior of many genes known to be involved in DNA repair is altered under conditions of caloric restriction.
For example, increasing the gene dosage of the gene SIR-2, which regulates DNA packaging in the nematode worm Caenorhabditis elegans, can significantly extend lifespan. The mammalian homolog of SIR-2 is known to induce downstream DNA repair factors involved in NHEJ, an activity that is especially promoted under conditions of caloric restriction. Caloric restriction has been closely linked to the rate of base excision repair in the nuclear DNA of rodents, although similar effects have not been observed in mitochondrial DNA.
The C. elegans gene AGE-1, an upstream effector of DNA repair pathways, confers dramatically extended life span under free-feeding conditions but leads to a decrease in reproductive fitness under conditions of caloric restriction. This observation supports the pleiotropy theory of the biological origins of aging, which suggests that genes conferring a large survival advantage early in life will be selected for even if they carry a corresponding disadvantage late in life.
Defects in the NER mechanism are responsible for several genetic disorders, including:
- Xeroderma pigmentosum: hypersensitivity to sunlight/UV, resulting in greatly increased skin cancer incidence and premature aging
- Cockayne syndrome: hypersensitivity to UV and chemical agents
- Trichothiodystrophy: sensitive skin, brittle hair and nails
Mental retardation often accompanies the latter two disorders, suggesting increased vulnerability of developmental neurons.
Other DNA repair disorders include:
- Werner's syndrome: premature aging and retarded growth
- Bloom's syndrome: sunlight hypersensitivity and a high incidence of malignancies (especially leukemias)
- Ataxia telangiectasia: sensitivity to ionizing radiation and some chemical agents
All of the above diseases are often called "segmental progerias" ("accelerated aging diseases") because their victims appear elderly and suffer from aging-related diseases at an abnormally young age, while not manifesting all the symptoms of old age.
Other diseases associated with reduced DNA repair function include Fanconi anemia, hereditary breast cancer and hereditary colon cancer.
Because of inherent limitations in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. There are at least 34 inherited human DNA repair gene mutations that increase cancer risk. Many of these mutations cause DNA repair to be less effective than normal. In particular, hereditary nonpolyposis colorectal cancer (HNPCC) is strongly associated with specific mutations in the DNA mismatch repair pathway. BRCA1 and BRCA2, two famous genes whose mutations confer a hugely increased risk of breast cancer on carriers, are both associated with a large number of DNA repair pathways, especially NHEJ and homologous recombination.
Cancer therapy procedures such as chemotherapy and radiotherapy work by overwhelming the capacity of the cell to repair DNA damage, resulting in cell death. Cells that are most rapidly dividing, most typically cancer cells, are preferentially affected. The side effect is that other non-cancerous but rapidly dividing cells, such as progenitor cells in the gut, skin, and hematopoietic system, are also affected. Modern cancer treatments attempt to localize the DNA damage to cells and tissues associated with cancer, either by physical means (concentrating the therapeutic agent in the region of the tumor) or by biochemical means (exploiting a feature unique to cancer cells in the body).
Classically, cancer has been viewed as a set of diseases that are driven by progressive genetic abnormalities that include mutations in tumour-suppressor genes and oncogenes, and chromosomal aberrations. However, it has become apparent that cancer is also driven by epigenetic alterations.
Epigenetic alterations refer to functionally relevant modifications to the genome that do not involve a change in the nucleotide sequence. Examples of such modifications are changes in DNA methylation (hypermethylation and hypomethylation) and histone modification, changes in chromosomal architecture (caused by inappropriate expression of proteins such as HMGA2 or HMGA1) and changes caused by microRNAs. Each of these epigenetic alterations serves to regulate gene expression without altering the underlying DNA sequence. These changes usually remain through cell divisions, last for multiple cell generations, and can be considered to be epimutations (equivalent to mutations).
While large numbers of epigenetic alterations are found in cancers, the epigenetic alterations in DNA repair genes, causing reduced expression of DNA repair proteins, appear to be particularly important. Such alterations are thought to occur early in progression to cancer and to be a likely cause of the genetic instability characteristic of cancers.
Reduced expression of DNA repair genes causes deficient DNA repair. When DNA repair is deficient, DNA damage remains in cells at a higher-than-usual level, and this excess damage causes increased frequencies of mutation or epimutation. Mutation rates increase substantially in cells defective in DNA mismatch repair or in homologous recombinational repair (HRR). Chromosomal rearrangements and aneuploidy also increase in HRR-defective cells.
Higher levels of DNA damage cause not only increased mutation but also increased epimutation. During repair of DNA double-strand breaks, or repair of other DNA damage, incompletely cleared sites of repair can cause epigenetic gene silencing.
Deficient expression of DNA repair proteins due to an inherited mutation can cause increased risk of cancer. Individuals with an inherited impairment in any of 34 DNA repair genes (see article DNA repair-deficiency disorder) have an increased risk of cancer, with some defects causing up to a 100% lifetime chance of cancer (e.g. p53 mutations). However, such germline mutations (which cause highly penetrant cancer syndromes) are the cause of only about 1 percent of cancers.
Deficiencies in DNA repair enzymes are occasionally caused by a newly arising somatic mutation in a DNA repair gene, but they are much more frequently caused by epigenetic alterations that reduce or silence the expression of DNA repair genes. For example, of 113 colorectal cancers examined in sequence, only four had a missense mutation in the DNA repair gene MGMT, while the majority had reduced MGMT expression due to methylation of the MGMT promoter region (an epigenetic alteration). Five different studies found that between 40% and 90% of colorectal cancers have reduced MGMT expression due to methylation of the MGMT promoter region.
Similarly, out of 119 cases of mismatch repair-deficient colorectal cancers that lacked DNA repair gene PMS2 expression, PMS2 was deficient in 6 due to mutations in the PMS2 gene, while in 103 cases PMS2 expression was deficient because its pairing partner MLH1 was repressed due to promoter methylation (PMS2 protein is unstable in the absence of MLH1). In the other 10 cases, loss of PMS2 expression was likely due to epigenetic overexpression of the microRNA, miR-155, which down-regulates MLH1.
In further examples (tabulated in Table 4 of this reference), epigenetic defects were found at frequencies of between 13% and 100% for the DNA repair genes BRCA1, WRN, FANCB, FANCF, MGMT, MLH1, MSH2, MSH4, ERCC1, XPF, NEIL1, and ATM. These epigenetic defects occurred in various cancers (e.g. breast, ovarian, colorectal, and head and neck). Two or three deficiencies in the expression of ERCC1, XPF, or PMS2 occurred simultaneously in the majority of the 49 colon cancers evaluated by Facista et al.
The chart in this section shows some frequent DNA damaging agents, examples of DNA lesions they cause, and the pathways that deal with these DNA damages. At least 169 enzymes are either directly employed in DNA repair or influence DNA repair processes. Of these, 83 are directly employed in repairing the 5 types of DNA damages illustrated in the chart.
Some of the better-studied genes central to these repair processes are shown in the chart. The gene designations shown in red, gray, or cyan indicate genes frequently epigenetically altered in various types of cancers. Wikipedia articles on each of the genes highlighted in red, gray, or cyan describe the epigenetic alteration(s) and the cancer(s) in which these epimutations are found. Two review articles and two broad experimental survey articles also document most of these epigenetic DNA repair deficiencies in cancers.
Red-highlighted genes are frequently reduced or silenced by epigenetic mechanisms in various cancers. When these genes have low or absent expression, DNA damage can accumulate. Replication errors past these damages (see translesion synthesis) can lead to increased mutations and, ultimately, cancer. Epigenetic repression of DNA repair genes in accurate DNA repair pathways appears to be central to carcinogenesis.
The two gray-highlighted genes, RAD51 and BRCA2, are required for homologous recombinational repair. They are sometimes epigenetically over-expressed and sometimes under-expressed in certain cancers. As indicated in the Wikipedia articles on RAD51 and BRCA2, such cancers ordinarily have epigenetic deficiencies in other DNA repair genes. These repair deficiencies would likely cause increased unrepaired DNA damage. The over-expression of RAD51 and BRCA2 seen in these cancers may reflect selective pressure for compensatory over-expression and increased homologous recombinational repair to at least partially cope with the excess damage. In cases where RAD51 or BRCA2 is under-expressed, the under-expression would itself lead to increased unrepaired DNA damage. Replication errors past these damages (see translesion synthesis) could cause increased mutations and cancer, so that under-expression of RAD51 or BRCA2 would be carcinogenic in itself.
Cyan-highlighted genes are in the microhomology-mediated end joining (MMEJ) pathway and are up-regulated in cancer. MMEJ is an additional error-prone, inaccurate repair pathway for double-strand breaks. In MMEJ repair of a double-strand break, a homology of 5-25 complementary base pairs between the two paired strands is sufficient to align the strands, but mismatched ends (flaps) are usually present. MMEJ removes the extra nucleotides (flaps) where the strands are joined and then ligates the strands to create an intact DNA double helix. MMEJ almost always involves at least a small deletion, so it is a mutagenic pathway. FEN1, the flap endonuclease in MMEJ, is epigenetically increased by promoter hypomethylation and is over-expressed in the majority of cancers of the breast, prostate, stomach, pancreas, and lung, as well as in neuroblastomas. PARP1 is also over-expressed when its promoter-region ETS site is epigenetically hypomethylated, and this contributes to progression to endometrial cancer, BRCA-mutated ovarian cancer, and BRCA-mutated serous ovarian cancer. Other genes in the MMEJ pathway are also over-expressed in a number of cancers (see MMEJ for a summary) and are likewise shown in cyan.
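The MMEJ mechanism described above, aligning the two broken ends on a short 5-25 bp microhomology, trimming flaps, and ligating with a deletion, can be sketched as a toy string operation. The function and the example sequences below are invented for illustration and ignore the actual enzymology (FEN1, PARP1, ligases):

```python
def mmej_join(left, right, min_hom=5, max_hom=25):
    """Toy model of microhomology-mediated end joining.

    Finds the longest shared sequence (the microhomology, 5-25 bp here)
    between the end of `left` and the start of `right`, then joins the
    two ends keeping a single copy of that homology. Relative to the
    original molecule, one copy of the repeat (plus any flap nucleotides
    trimmed in vivo) is lost, so the product carries a deletion,
    mirroring MMEJ's inherently mutagenic outcome.
    """
    for k in range(max_hom, min_hom - 1, -1):
        if left[-k:] == right[:k]:
            return left + right[k:]
    return None  # no usable microhomology found

left = "GGATCCTTAGCGTACGT"   # hypothetical left break end
right = "TACGTAAGGCCAATTGG"  # hypothetical right break end; shares "TACGT"
print(mmej_join(left, right))
```

With these made-up ends the 5-bp repeat "TACGT" aligns the strands, and the joined product retains only one copy of it, illustrating why MMEJ products characteristically carry deletions.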
The basic processes of DNA repair are highly conserved among both prokaryotes and eukaryotes, and even among bacteriophages (viruses that infect bacteria); however, more complex organisms with more complex genomes have correspondingly more complex repair mechanisms. The ability of a large number of protein structural motifs to catalyze relevant chemical reactions has played a significant role in the elaboration of repair mechanisms during evolution.
The fossil record indicates that single-cell life began to proliferate on the planet at some point during the Precambrian period, although exactly when recognizably modern life first emerged is unclear. Nucleic acids became the sole and universal means of encoding genetic information, requiring DNA repair mechanisms that in their basic form have been inherited by all extant life forms from their common ancestor. The emergence of Earth’s oxygen-rich atmosphere (known as the “oxygen catastrophe”) due to photosynthetic organisms, as well as the presence of potentially damaging free radicals in the cell due to oxidative phosphorylation, necessitated the evolution of DNA repair mechanisms that act specifically to counter the types of damage induced by oxidative stress.
On some occasions, DNA damage is not repaired or is repaired by an error-prone mechanism that results in a change from the original sequence. When this occurs, mutations may propagate into the genomes of the cell's progeny. Should such an event occur in a germ-line cell that will eventually produce a gamete, the mutation has the potential to be passed on to the organism's offspring. The rate of evolution in a particular species (or in a particular gene) is a function of the rate of mutation. As a consequence, the rate and accuracy of DNA repair mechanisms influence the process of evolutionary change. The normal adaptation of populations to changing circumstances (for instance, the adaptation of the beaks of a population of finches to a changing supply of hard seeds or insects) proceeds by gene regulation and by the recombination and selection of gene variants (alleles), not by the passing on of irreparable DNA damage to offspring; DNA damage protection and repair therefore do not influence this rate of adaptation. On the other hand, DNA damage repair and protection do influence the rate of accumulation of irreparable, advantageous, code-expanding, inheritable mutations, and thereby slow the evolutionary mechanism by which organisms expand their genomes with new functionalities. The tension between evolvability and mutation repair and protection needs further investigation.
A technology based on clustered regularly interspaced short palindromic repeats, shortened to CRISPR-Cas9, was developed in 2012. The new technology allows anyone with molecular biology training to alter the genes of any species with precision.
Read the original:
DNA repair – Wikipedia, the free encyclopedia
Posted: August 29, 2016 at 7:34 am
I was going through some of my school notes today and I came across the following lecture notes I'd taken from a class on religion and illusions when I was still a student. Hence, I figured I'd introduce you guys to this very interesting topic, as most of what we are taught regarding religion in the mainstream media is usually much the same. Hope you enjoy it and find it interesting. Don't hesitate to leave your opinion at the end.
Nihilism as a philosophy seemed passé by the 1980s. Few talked about it in literature except to declare it a dead issue. Literally, in the materialist sense, nihilism refers to a truism: from nothing, nothing comes. However, from a philosophical viewpoint, moral nihilism took on a similar connotation. One literally believed in nothing, which is somewhat of an oxymoron, since to believe in nothing is to believe in something. A corner was turned in the history of nihilism once 9/11 became a reality. After this major event, religious and social science scholars began to ask whether violence could be attributed to nihilistic thinking, in other words, whether we had lost our way morally by believing in nothing, by rejecting traditional moral foundations. It was feared that an "anything goes" mentality and a lack of absolute moral foundations could lead to further acts of violence, as the goals forwarded by life-affirmation were being thwarted by the destructive ends of so-called violent nihilists. This position is, however, debatable.
Extreme beliefs in values such as nationalism, patriotism, statism, secularism, or religion can also lead to violence, as one becomes unsettled by beliefs contrary to the reigning orthodoxy and strikes out violently to protect communal values. Therefore, believing in something can also lead to violence and suffering. To put the argument to rest, it's not about whether one believes in something or nothing but how absolutist the position is; it's the rigidity of values that causes pain and suffering, what Nobel prize winner Amartya Sen calls the illusion of singularity. Since 9/11, nihilism has become a favourite target to criticize and marginalize, yet its history and complexity actually lead to a more nuanced argument. Perhaps we should be looking at ways nihilism complements Western belief systems, even Christian doctrine, rather than fear its implementation in ethical and moral discussions.
Brief History of Nihilism
To understand why some forms of nihilism are still problematic, it is important to ask how the term was used historically and for what motive. Nihilism was first thought synonymous with having no authentic values, no real ends, that one's whole existence is pure nothingness. In its earliest European roots, nihilism was initially used to label groups or ideas as inferior, especially if they were deemed threatening to established communal ideals. Nihilism as a label was its first function.
Nihilism initially functioned as a pejorative label and a term of abuse against modern trends that threatened to destroy either Christian hegemonic principles or tradition in general. During the seventeenth and eighteenth centuries, modernization in France meant that power shifted from the traditional feudal nobility to a central government filled with well-trained bourgeois professionals. Fearing a loss of influence, the nobility claimed that such centralization of power in responsible government would lead to death and destruction, in other words, anarchy and nothingness. Those upsetting the status quo were deemed nihilistic, a derogatory label requiring no serious burden of proof. Such labelling, however, worked both ways. Advocates of modernization and change deemed the old world of tradition valueless, whereas traditionalists pictured the new world, or new life form, as destructive and meaningless in its pursuit of a flawed transformation. Potential changes in power or ideology created a climate of fear, so defining one's opponent as nihilistic, as nothing of value, was as politically astute as it was reactionary. Those embracing the function of nihilism as a label are attempting to avoid scrutiny of their own values while the values of the opposition are literally annihilated.
Since those advocating communal values may feel threatened by new ideologies, it becomes imperative for the dominant power to present its political, metaphysical, or religious beliefs as eternal, universal, and objective. Typically, traditionalists have a stake in their own normative positions. This is because "[t]he absoluteness of [one's] form of life makes [one] feel safe and at home. This means that [perfectionists] have a great interest in the maintenance of their form of life and its absoluteness." The existence of alternative beliefs and values, as well as a demand for intersubjective dialogue, is both a challenge and a threat to the traditionalist because "[i]t shows people that their own form of life is not as absolute as they thought it was, and this makes them feel uncertain." However, if one labels the Other as nihilistic without ever entering into a dialogue, one may become myopic, dismissing the relative value of other life forms one chooses not to see. This means that "one can't see what they [other cultural groups] are doing, and why they are doing it, why they may be successful." Therefore, one misses the dynamics of cultural change.
Through the effect of labelling, the religious-oriented could claim that nihilists, and thus atheists by affiliation, would not feel bound by moral norms, and as a result would lose the sense that life has meaning and therefore tend toward despair and suicide. These fears intensified with Nietzsche's proclamation of the death of God. Christians argued that if there is no divine lawmaker, moral law would become interpretative, contested, and situational. The end result: "[E]ach man will tend to become a law unto himself. If God does not exist to choose for the individual, the individual will assume the former prerogative of God and choose for himself." It was this kind of thinking that led perfectionists to assume that any challenge to the Absolute automatically meant moral indifference, moral relativism, and moral chaos. Put simply, nihilists were the enemy.
Nihilists were accused of rejecting ultimate values, embracing instead an "all values are equal" mentality, basically, anything goes. And like Islam today, nihilists would become easy scapegoats.
Late 19th to 20th Century: Nietzsche and the Death of God
Friedrich Nietzsche is still the most prestigious theorist of nihilism. Influenced by Christianity's dominant orthodoxy in the nineteenth century, Nietzsche believed that the Christian religion was nihilism incarnate. Since Christian theology involved a metaphysical reversal of temporal reality and a belief in a God that came from nothing, the Christian God became the deification of nothingness, the will to nothingness pronounced holy. Nietzsche claimed that Christian metaphysics became an impediment to life-affirmation. Nietzsche explains: "If one shifts the centre of gravity of life out of life into the Beyond, into nothingness, one has deprived life of its centre of gravity . . . So to live that there is no longer any meaning in living: that now becomes the meaning of life." What Nietzsche rejected even more was the belief that one could create a totalizing system to explain all truths. In other words, he repudiated any religion or dogma that attempted to show how "the entire body of knowledge [could] be derived from a small set of fundamental, self-evident propositions" (i.e., stewardship). Nietzsche felt that we do not have the slightest right to posit a beyond or an in-itself of things that is divine or the embodiment of morality.
Without God as a foundation for absolute values, all absolute values are deemed suspect (hence the birth of postmodernism). For Nietzsche, this literally meant that "the belief in the Christian god ha[d] become unworthy of belief." This transition from the highest values to the death of God was not going to be a quick one; in fact, the comfort provided by an absolute divinity could potentially sustain its existence for millennia. Nietzsche elaborates: "God is dead; but given the way of men, there may still be caves for thousands of years in which his shadow will be shown. And we, we still have to vanquish his shadow too."
We are left then with a dilemma: either we abandon our reverence for the highest values and subsist, or we maintain our dependency on absolutes at the cost of our own non-absolutist reality. For Nietzsche, the second option was pure nothingness: "So we can abolish either our reverences or ourselves. The latter constitutes nihilism." All one is left with are contested, situational value judgements, and these are resolved in the human arena.
One can still embrace pessimism, believing that without some form of an absolute, our existence in this world will take a turn for the worse. To avoid the trappings of pessimism and passivity, Nietzsche sought a solution to such nihilistic despair through the re-evaluation of the dominant, life-negating values. This makes Nietzsche's perspectivism a philosophy of resolution in the form of life-affirmation. It moves past despair toward a transformative stage in which new values are posited to replace the old table of values. As Reginster acknowledges, one should regard the affirmation of life as Nietzsche's defining philosophical achievement. What this implies is a substantive demand to live according to a constant re-evaluation of values. By taking full responsibility for this task, humankind engages in the eternal recurrence, a recurrence of life-affirming values based on acceptance of becoming and the impermanence of values. Value formation is both fluid and cyclical.
Late 20th to 21st Century: The Pessimism of the Post-9/11 Era
Since the events of September 11, 2001, nihilism has returned with a vengeance to scholarly literature; however, it is being discussed in almost exclusively negative terms. The labelling origin of nihilism has taken on new life in a context of suicide bombings, Islamophobia, and neoconservative rhetoric. For instance, Canadian Liberal leader Michael Ignatieff described different shades of negative nihilism, tragic, cynical, and fanatical, in his book The Lesser Evil. Tragic nihilism begins from a foundation of noble political intentions, but eventually this ethic of restraint spirals toward violence as the only end (i.e., Vietnam). Two sides of an armed struggle may begin with high ideals and place limitations on the means used to achieve viable political goals, but such noble ends eventually become lost in all the carnage. Agents of a democratic state may find themselves driven by the horror of terror to torture, to assassinate, to kill innocent civilians, all in the name of rights and democracy. As Ignatieff states, they slip from the lesser evil (legitimate use of force) to the greater (violence as an end in itself).
Cynical nihilism, however, is even more narcissistic. In this case, violence does not begin as a means to noble goals. Instead, "[i]t is used, from the beginning, in the service of cynical or self-serving [ends]." The term denotes narcissistic prejudice because it justifies the commission of violence for the sake of personal aggrandizement, immortality, fame, or power rather than as a means to a genuinely political end, like revolution [for social justice] or the liberation of a people. Cynical nihilists were never threatened in any legitimate way. Their own vanity, ego, greed, or need to control others drove them to commit violence against innocent civilians (e.g., Saddam Hussein in Kuwait or Bush in Iraq).
Finally, fanatical nihilism does not suffer from a belief in nothing. In actuality, this type of nihilism is dangerous because one believes in too much. What fanatical nihilism does involve is a form of conviction so intense, a devotion so blind, that it becomes impossible to see that violence necessarily betrays the ends that conviction seeks to achieve. The fanatical use of ideology to justify atrocity negates any consideration of the human cost of such fundamentalism. As a result, nihilism becomes "willed indifference to the human agents sacrificed on the altar of principle. . . . Here nihilism is not a belief in nothing at all; it is, rather, the belief that nothing about particular groups of human beings matters enough to require minimizing harm to them." Fanatical nihilism is also important to understand because many of the justifications are religious. States Ignatieff:
From a human rights standpoint, the claim that such inhumanity can be divinely inspired is a piece of nihilism, an inhuman devaluation of the respect owed to all persons, and moreover a piece of hubris, since, by definition, human beings have no access to divine intentions, whatever they may be.
Positive Nihilism
In the twenty-first century, humankind is searching for a philosophy to counter destructive, non-pragmatic forms of nihilism. As a middle path, positive nihilism accentuates life-affirmation through a widening of dialogue. Positively stated: "[The Philosopher] . . ., having rejected the currently dominant values, must raise other values, by virtue of which life and the universe cannot only be justified but also become endearing and valuable. Rejecting any unworkable table of values, humankind now erects another table with a new ranking of values and new ideals of humanity, society, and state." Positive nihilism, in both its rejection of absolute truths and its acceptance of contextual truths, is life-affirming, since small-t truths are the best mere mortals can hope to accomplish. Human beings can reach for higher truths; they just do not have the totalizing knowledge required for Absolute Truth. In other words, we are not God, but we are still attempting to be God on a good day. We still need values, in other words, we are not moral nihilists or absolutists, but we realize that the human condition is malleable. Values come and go, and we have to be ready to bend them in the right direction the moment moral courage requires it.
Nihilism does not have to be a dangerous or negative philosophy; it can be a philosophy of freedom. Basically, the entire purpose of positive nihilism is to transform values that no longer work and replace them with values that do. By aiding in a process that finds meaningful values through negotiation, positive nihilism prevents the exclusionary effect of perfectionism, the deceit of nihilistic labelling, and the senseless violence of fanatical nihilism. It is at this point that nihilism can enter its life-affirming stage and become a complement to pluralism, multiculturalism, and the roots of religion, those being love, charity, and compassion.
Source: Professor Stuart Chambers.
Posted: August 12, 2016 at 2:48 pm
The head of Hispanic Outreach for the Libertarian Party, who is a Republican, says he joined up with the third party because he believes GOP presidential nominee Donald Trump is the worst of the worst.
Speaking to The Hill, Juan Hernández, who took the post with the Libertarian Party last week, said that he is not leaving the Republican Party, but is backing Libertarian Gary Johnson's bid for the White House because he believes the former New Mexico governor "comes with a message that brings both of my worlds together."
Johnson's message of small government and letting states decide on social issues resonated with Hernández because it "fits Hispanics so well."
"We came here, we're religious, we don't want to get into the debate over gay marriage," Hernández said of Hispanics. "Let states decide."
As for Trump, Hernández said there are just so many reasons why he can't support the boisterous billionaire.
While he says that Trump's call to build a massive wall along the United States' southern border with Mexico and his proposal to deport the 11 million undocumented immigrants living in the country would be an insult to Hispanics, Hernández said his opposition to Trump goes even further.
Trump would “not only be a disaster for Hispanics, for Republicans, for Americans, for the world. I really fear a Trump president. The way he speaks of bombing other nations, the Muslims?”
Hernández, however, said he never had any plans of supporting Democratic presidential nominee Hillary Clinton.
"It's not a matter of 'I'll go with the lesser of two evils.' I think we have to vote on principle," said Hernández.
"Since she was first lady of Arkansas, she and her husband were always en la orillita of what's appropriate," Hernández said, using the Mexican Spanish phrase that roughly translates to "in gray space."
Hernández has previously worked as an advisor for presidential candidates in the U.S., Mexico and Guatemala, including Arizona Sen. John McCain's failed bid in 2008 and former Mexican Presidents Vicente Fox and Felipe Calderón.
Besides Hernández, the Johnson campaign picked up another high-profile Republican boost on Wednesday when Virginia Rep. Scott Rigell said he thinks Johnson can win the presidency.
“This may surprise you to hear, but I’m ready to defend the proposition that Gov. Johnson can win,” Rigell said.
Posted: August 10, 2016 at 9:05 pm
A man wakes up one morning to find himself slowly transforming into a living hybrid of meat and scrap metal; he dreams of being sodomised by a woman with a snakelike, strap-on phallus. Clandestine experiments in sensory deprivation and mental torture unleash psychic powers in test subjects, prompting them to explode into showers of black pus or tear the flesh off each other's bodies in a sexual frenzy. Meanwhile, a hysterical cyborg sex-slave runs amok through busy streets whilst electrically charged demi-gods battle for supremacy on the rooftops above. This is cyberpunk, Japanese style: a brief filmmaking movement that erupted from the Japanese underground to garner international attention in the late 1980s.
The world of live-action Japanese cyberpunk is a twisted and strange one indeed; a far cry from the established notions of computer hackers, ubiquitous technologies and domineering conglomerates as found in the pages of William Gibson's Neuromancer (1984) – a pivotal cyberpunk text during the sub-genre's formation and recognition in the early eighties. From a cinematic standpoint, it perhaps owes more to the industrial gothic of David Lynch's Eraserhead (1977) and the psycho-sexual body horror of early David Cronenberg than the rain-soaked metropolis of Ridley Scott's Blade Runner (1982), although Scott's neon-infused tech-noir has been a major aesthetic touchstone for cyberpunk manga and anime institutions such as Katsuhiro Otomo's Akira (1982-90) and Masamune Shirow's Ghost in the Shell (1989- ).
In the Western world, cyberpunk was born out of the new wave science fiction literature of the sixties and seventies; authors such as Harlan Ellison, J.G. Ballard and Philip K. Dick – whose novel Do Androids Dream of Electric Sheep? (1968) was the basis for Blade Runner – were key proponents in its inception, creating worlds that featured artificial life, social decay and technological dependency. The hard-boiled detective novels of Dashiell Hammett also proved influential with regards to the sub-genre's overall pessimistic stance. What came to be known as cyberpunk by the mid 1980s was thematically characterised by its exploration of the impact of high technology on low-lives – people living in squalor, stacked on top of one another within an oppressive metropolis dominated by advanced technologies.
Live-action Japanese cyberpunk, on the other hand, is raw and primal by nature, characterised by attitude rather than high concept. A collision between flesh and metal, the sub-genre is an explosion of sex, violence, concrete and machinery; a small collection of pocket-sized universes that revel in post-human nightmares and teratological fetishes, powered by a boundless sense of invasiveness and violation. Imagery is abject, perverse and unpredictable and, like Cronenberg's work, bodily mutation through technological intervention is a major theme, as are dehumanisation, repression and sexuality. During the late eighties and early nineties, it was a sub-strain defined largely by the early work of two directors: Shinya Tsukamoto and Shozin Fukui.
These directors made films that were short, sharp and bludgeoning, centred on corporeal horrors that saw the body invaded, infected and infused with technology. Tsukamoto's contributions are perhaps the most famous: Tetsuo: The Iron Man (1989) and Tetsuo II: The Body Hammer (1992). Both films present the nightmarish situation of their protagonist (played by actor Tomorowo Taguchi in both) undergoing a bizarre metamorphosis that sees a humble salaryman transformed into a hybrid of flesh and scrap metal.
Although not as well known to western audiences, Fukui's work is also important. Stylistically similar to Tsukamoto but sufficiently divergent so as not to be a mere copy, Fukui opened up the sub-genre's palette by incorporating Cronenberg-like scientific experiments that impact on the body through technological augmentation, as evidenced in his contributions Pinocchio 964 (1991) and Rubber's Lover (1996). These films focus on the vulnerability of the human mind and how such alteration causes more than a physical change in appearance, creating entirely new mental states and thought processes that are beyond human.
Tsukamoto and Fukui eschewed many of the conventions crystallised by Gibson's archetypal Neuromancer. There are no mega-conglomerates or incidences of virtual reality, and the power struggle of high technology versus low quality of life is replaced by low technology versus low life. The technology in their vision of cyberpunk consisted of industrial scrap – Tetsuo – and makeshift laboratories built from crude and dated equipment – Rubber's Lover – lending a DIY aesthetic to their overall ethos. These were, after all, films made with little or no money and, as a result, were not set in gargantuan, near-future metropolises but in the present-day, real-life cyberpunk city of Tokyo, suggesting that anxieties over rapid modernity are not some far-off venture but something to be worried about now. Both filmmakers also had a fixation with post-industrial landscapes, using scrap yards, boiler rooms, abandoned warehouses, compounds and factories as decaying playgrounds for their ideas.
However, this new and defiant take on the sub-genre did not come about overnight. There are many precursors to both Tsukamoto and Fukui’s work that also need to be addressed. Some are quite well known to western audiences whilst others have yet to get the recognition that they deserve in helping to create one of the most fascinating and philosophical phases in contemporary Japanese cinema.
Whilst the ideas of cyberpunk in the West were born out of literature, Japanese cyberpunk, it could be argued, was born out of music. During the late seventies and early eighties, Tokyo was enjoying an incredibly vibrant underground punk music scene, an ethos that later branched out into art and cinema thanks largely to one individual: Sogo Ishii.
Born in 1957, Ishii quickly built a reputation as something of a maverick and grew into a prominent figure of the Tokyo underground filmmaking scene. Operating within the gathering rubble of a collapsing studio system, Ishii turned out a variety of zero-budget 8mm film projects at a time when former international filmmaking heavyweights such as Akira Kurosawa were struggling to find financial investment.
Early feature film efforts such as Panic High School (1978) and Crazy Thunder Road (1980) encapsulated the rebellion and anarchy associated with punk and went on to become highly influential in underground film circles. Crazy Thunder Road in particular pointed the way forward with its biker-gang punk aesthetic; a style that would be explored later in Otomo’s highly influential Akira. Originally made as a university graduation project, it was picked up for distribution by major studio Toei, making Ishii the first of his generation to move from amateur filmmaking into the professional industry while still a university student [ 1 ].
After Crazy Thunder Road, Ishii made the frenetic short film Shuffle (1981) – interestingly, an unofficial adaptation of a Katsuhiro Otomo comic strip – as well as a slew of music and concert videos for a variety of Japanese punk bands. However, Toei soon returned, offering Ishii studio backing for his next feature film project. This new financial investment resulted in Ishii’s most influential work to date; Burst City (1982), a film that encapsulated and epitomised his favourite subject matter: the punk movement.
No other film captured the intensity, pessimism, delinquency and the do-it-yourself bravado of Japan’s punk movement like Ishii’s Burst City; a bold, brash and anarchic time-capsule of early eighties zeitgeist. However, despite its overwhelming influence – not only did it shape the conventions of Japanese cyberpunk, but the future of contemporary Japanese cinema as a whole – Burst City remains largely unappreciated. It is frequently overshadowed by its higher profile, more internationally renowned followers: Tsukamoto, Takashi Miike and Takeshi Kitano among others, all of whom are indebted to Ishii’s work in some shape or form.
However, Ishii has always played the rebel: attending his filmmaking class at Nihon University only when he needed to borrow more equipment; dropping off the filmmaking radar for long stretches of time; making films of a commercially unviable length such as the 55-minute Electric Dragon 80,000V (2001) and challenging conventional moviegoers with his early punk films only then to defy the fans of that work with calm, hypnotic efforts such as August in the Water (1995) and Labyrinth of Dreams (1997). It is this ethos that drives Burst City; steering it through the deserted Tokyo highways and barren industrial wastelands that make up its initial exposition and into the anarchic meltdown of its closing act.
The visual aesthetic of Burst City is an eclectic mix of punk, industrialisation and post-apocalyptic wasteland imagery reminiscent of the first two Mad Max films (1979 & 1981), with some science fiction trimmings; the futuristic cannons used by the Battle Police to disperse riots, for instance. However, Burst City acts beyond the usual genre trappings. It has the immediacy and atmosphere of a documentary, chronicling both the people and the music, whilst using the surrounding dystopian backdrop as a metaphor for the anxiety, haplessness and alienation experienced by Japan's youth at the time. This documentary feel is further enhanced by Ishii's groundbreaking use of the camera. His highly dynamic, handheld, almost stream-of-consciousness shots, interwoven with equally aggressive, machine-gun editing, not only capture the energy and restlessness of the music – which is very prominent here – but would highly influence Tsukamoto and the execution of his work.
The film’s industrialised environments – the abandoned warehouses and run-down boiler rooms where the biker gangs and punk bands reside – would become a key aspect for the Japanese cyberpunk look as well as depicting Tokyo as little more than a concrete slum. The notion of the metropolis as oppressive entity starts to become apparent here and it’s interesting to note that this film was made in the same year as Blade Runner, which again, displays similar connotations [ 2 ].
Ishii’s prior involvement with the punk movement allowed him to gather an impressive ensemble of real-life Japanese punk bands – The Rockers, The Roosters and The Stalin among others – as part of the cast, as well as 1970s folk singer/songwriter Shigeru Izumiya. Interestingly, Izumiya was also credited as a Planner and the film’s Art Director, suggesting that he had a strong involvement in shaping Burst City’s influential aesthetic. This serves as a vital link as Izumiya would go on to write and direct his own film; a film that would go on to crystallise many of the conventions and ideas of Japanese cyberpunk that would later be explored by Tsukamoto and Fukui.
Shigeru Izumiya’s Death Powder (1986) introduces the unorthodox visuals and abstract delivery that would prove instrumental in future Japanese cyberpunk execution. Like Burst City, sound also plays a vital part here; further laying the foundations for the sensory assault aspect of the movement that would later be championed and refined by Tsukamoto. Izumiya, like Ishii, is from a musical background; a popular folk singer/songwriter as well as a film composer – he wrote the music for Ishii’s breakthrough feature Crazy Thunder Road.
Lost in public domain purgatory for decades, Death Powder barely exists, available on bootleg DVD and only recently as video segments on the internet [ 3 ]. Western understanding of the film has been largely incoherent and underwhelming due to bad and partial translation into English and as a result, Death Powder is frequently overlooked. However, its influence is unmistakably clear and it’s arguably the first film of Japan’s extreme cyberpunk movement, exemplifying the invasive, corporeal surrealism that would follow over the next ten years.
Set in present-day or near-future Tokyo, the film follows a group of researchers who have in their possession Guernica: a feminine, cybernetic android capable of spewing poisonous dust from its mouth. Karima (played by Izumiya) is left to guard the android but appears to lose his mind, attacking the other two – Noris and Kiyoshi – when they return. Kiyoshi inhales some of Guernica's powder and starts to mutate as a result. He also starts hallucinating as their subconscious minds start to merge. One sequence entitled "Dr. Loo Made Me" – which suggests that the android is trying to communicate with Kiyoshi – sees the Guernica project in its early stages, featuring the three researchers as well as the eccentric Dr. Loo, the guitar-wielding head of the operation. The hallucinations provide Kiyoshi with further omniscience, detailing Karima's apparent love for Guernica as well as the research group's ongoing struggle with the 'scar people': men disfigured as their flesh deteriorates uncontrollably.
The subject of flesh, the boundary between life and death and the notion of what it means to be human come into play regularly as the film drifts from one surrealist situation to another. Death Powder poses the question: if you cease to have flesh, do you cease to be human? This is an idea that is routinely explored in cyberpunk, but while western examples such as Blade Runner and Neuromancer focus on larger-scale implications, Death Powder – and most of Japan's subsequent cyberpunk output, for that matter – looks at the changes within the individual. In the former, invasive technologies are not only fully realised but have been successfully integrated into society, becoming common practice. The technologies explored in the latter, however, are still in their primordial stages; they are works in progress, extremely esoteric, and as a result extremely volatile and unpredictable.
Death Powder also establishes Japanese cyberpunk's tendency to place imagery ahead of its narrative, a fundamental aspect of the no-holds-barred sensory assault style that these films exhibit. As a result, story and purpose are inferred from what is seen as opposed to what is told, lending subsequent films a tonal and philosophical quality. Like many similarly spirited films that would follow, Death Powder highlights the destructive and dehumanising nature of technology. A big clue comes in the form of the android Guernica sharing the same name as Pablo Picasso's famous 1937 painting depicting the bombing of Guernica by Nazi warplanes (in support of Franco) during the Spanish Civil War. Picasso's mural shows an orgy of twisted bodies, animals and buildings, deformed by war, or more broadly, the deviant technologies that power it. The film's end sees the cast fused and writhing in an ocean of monstrous flesh; the human form consumed and destroyed at the hands of intervening science.
Despite Death Powder's aesthetic and thematic influence, it went by with little fanfare and was never seen outside of Japan until years later. The subsequent, similarly minded Android of Notre Dame (Kuramoto; 1988) fared slightly better, partly due to the infamy that surrounded the film series it was part of: a seven-film collection known as the Guinea Pig series – short exploitation features that focused on torture, murder and other destructive processes, designed to appear realistic and snuff-like [ 4 ]. Android of Notre Dame failed to strike a chord with wider audiences and has since wallowed in cult obscurity along with its filmic brethren. However, this all changed as Japanese cyberpunk began to creep into the international spotlight with the anime feature film adaptation of Katsuhiro Otomo's popular manga series, Akira (1988).
Although this writing focuses mainly on live-action cyberpunk output, Akira's arrival was so important and influential to the sub-genre that it needs to be acknowledged. Akira achieved two things: first, it opened up and, almost single-handedly, popularised anime and manga for global audiences (especially in the UK and US); second, it perpetuated the cyberpunk ethos on perhaps the largest scale to date, combining the neon-lit, high-technology/low-living metropolis of Blade Runner and Neuromancer with body horror overtones. The film condensed the vast narrative of Otomo's gargantuan, six-part magnum opus into a streamlined, two-hour feature directed by Otomo himself. It is a milestone within Japanese cyberpunk as it was the first of the sub-genre not only to have commercial success domestically but also to find an audience overseas.
Set within the destitute overcrowding of futuristic Neo Tokyo, the story revolves around juvenile biker thugs and best friends Kaneda and Tetsuo. During a turf spat with a rival gang, Tetsuo crashes but is mysteriously taken away by military and scientific officials. They experiment on him with chemically altering drugs, turning Tetsuo into a psycho-kinetic demigod with uncontrollable power. He goes on a destructive rampage through the city to seek an audience with Akira, a highly powerful entity that destroyed the old Tokyo decades before.
Part of Akira's success inevitably lies in its attention to detail and vaulting ambition. The budget was astronomical for an anime feature at the time – around ¥1,100,000,000 [ 5 ] – acquired through the partnership of several major Japanese media companies including Toho and Bandai. It avoided the corner-cutting of past anime projects, producing hundreds of thousands of animation cels to create fluid motion – particularly in its many action set-pieces – and capture nuances that would otherwise not have existed. Otomo also went to the trouble of doing lip-synched sound recording, a first for anime, resulting in extremely high and rich production values. The film set box office records for an anime in Japan during its summer 1988 release, grossing over ¥6,300,000,000 [ 6 ]. Internationally, it got a limited theatrical run in America and the United Kingdom soon after – sowing the seeds for the immense western cult fanbase that it enjoys to this day – but failed to get home video distribution until the early nineties.
Themes of mutation, modernity and social unrest are rife. Kaneda and Tetsuo's biker gang are like a revved-up version of the delinquents seen in Ishii's Crazy Thunder Road and Burst City, while Tetsuo's ESP and subsequent transformation sets the film firmly in Cronenberg's body horror territory. His eventual fusion with metal – resulting in a horrific man-machine hybrid that sees Tetsuo become the master of a newly formed universe – is not only evocative of the cyberpunk notion of technology corrupting the human form (in this case literally) but also serves as an important visual precursor to the movement's next breakthrough, live-action work.
Often revered as the definitive example of extreme Japanese cyberpunk and a vital cornerstone in the rebuilding of contemporary Japanese cinema, Tetsuo: The Iron Man was a baffling international success story, prompting many a sceptic on Japan’s future cinematic involvement to turn their attention eastward. Barely over an hour in length, Tetsuo was a breath of fresh air; a no-holds-barred sensory assault that gave Japanese cinema a major image renovation and launched the career of its director, Shinya Tsukamoto, who has gone on to become one of the country’s most respected and treasured auteurs.
During its unprecedented and lengthy tour of international film festivals, Tetsuo not only pointed towards exciting new possibilities for contemporary Japanese cinema but was able to fit ‘snugly into a pantheon of genre works that included Ridley Scott’s Blade Runner, James Cameron’s The Terminator, David Lynch’s Eraserhead and the work of David Cronenberg, Sam Raimi and Clive Barker'[ 7 ], which no doubt broadened its appeal. Its use of kinetic cinematography, rapid-fire editing and DIY, zero-budget special effects served as an invitation; a call to arms if you will, for independent filmmakers everywhere to produce unique and challenging cinema.
However, the majority of the film's innovative style is, for the most part, lifted from elsewhere, amounting to a fusion of influences including the hyperactive camerawork of Ishii's Burst City; the body horror of Cronenberg's Videodrome (1983) and The Fly (1986); the biomechanical perversions of artist H.R. Giger; the literature of J.G. Ballard – particularly Crash (1973) – and the stop-motion animation of Jan Švankmajer. There is also a sense of strange nostalgia for the old kaiju (monster) movies and television serials that Tsukamoto watched growing up in a Tokyo undergoing post-war reconstruction as well as major expansion and modernisation in preparation for Japan's hosting of the 1964 Olympic Games.
Like Ishii, Tsukamoto's early development stemmed from making 8mm films as a teenager during the 1970s, using his younger brother and friends as cast and crew members. As he reached adulthood, Tsukamoto abandoned filmmaking and turned his attention increasingly towards the stage, forming a theatre troupe – christened 'Kaiju Theatre' – with like-minded university students and directing plays [ 8 ]. One of the plays that Tsukamoto wrote would subsequently be adapted into a film, The Adventure of Denchu Kozo (1987), with the assistance of his theatre cohorts. It was this same group that also made Tetsuo, along with a revolving-door line-up of other helpers, most notably fellow filmmaker Shozin Fukui, who would go on to make his own cyberpunk features during the nineties.
Tetsuo's chief concern is the impact of technology on society and subsequently – and more specifically – the human form. Tsukamoto suggests that technology is a disease, bursting forth unannounced and unexplained as evidenced in the salaryman's transformation – simultaneously reminiscent of Cronenberg's The Fly and Otomo's Akira – where a shard of metal lodged in the protagonist's cheek is the starting point for further mutation. Like Seth Brundle of The Fly, the salaryman is at once repulsed and intrigued by what he is turning into and, coincidentally, his evolution shares the namesake of the transforming character of Akira: Tetsuo, meaning 'iron man' or 'clear thinking/philosophical man'. Tsukamoto embraces both interpretations of his film's title. On one hand is the literal transformation of flesh to iron and on the other, a philosophical enquiry on technology's consuming nature and the symbiosis between city and citizen.
However, closer inspection reveals further concerns, as evidenced by Steven T. Brown, author of the groundbreaking Tokyo Cyberpunk: Posthumanism in Japanese Visual Culture, in which he says: ‘the mixing of flesh and metal in Tetsuo is not only intensely violent but also darkly erotomechanical and techno-fetishistic, evoking sadomasochistic sexual practices and pleasures, as well as fears of both male and female sexuality out of control'[ 9 ].
In this regard, Tsukamoto gives horror and eroticism equal attention: the salaryman has a nightmare involving his girlfriend (played by Kei Fujiwara) sodomising him with a mechanical, snakelike appendage strapped to her crotch. This gender-reversal is not only representative of one of David Cronenberg’s favourite thematic stomping grounds, but also shares the Canadian director’s Ballardian [ 10 ] allusions, hyper-masculinity and homoerotic undertones. When the film’s antagonist, Yatsu (meaning ‘Guy’) – a metal fetishist (played by Tsukamoto himself) suffering from the same man-machine affliction – arrives at the apartment, he turns up ‘presenting flowers to the salaryman in a parody of courtship'[ 11 ] that ends with physical assimilation.
This mechanical eros continues when, in an early stage of his transformation, the salaryman’s penis turns into a rapidly oscillating drill which he then uses on his girlfriend with graphic results. By the film’s end, he does battle and fuses together with the metal fetishist; the result is a large tank-like monstrosity with the suggested goal of world domination. His newfound unrepressed nature effectively destroys his heterosexual relationship, only to start a new one with someone – another male – experiencing similar changes to their body.
The film's metaphorical capacity is achieved primarily through its abstract and surrealist execution, which bears similarities to Luis Buñuel's Un Chien Andalou (1929) – as noted by Brown in Tokyo Cyberpunk (pp. 60-64) – and David Lynch's Eraserhead. The latter is a popular comparison, prompting many to refer to Tetsuo as a "Japanese Eraserhead". Whilst both films share an allegiance to post-humanism and industrialised iconography, Eraserhead takes a slower-burning, atmospheric approach. Tetsuo, on the other hand, takes a startlingly aggressive stance from the outset, combining hand-held camerawork, rapid-fire editing and a pummelling industrial music score by composer Chu Ishikawa – who would serve as composer for future Tsukamoto projects – to create a battering and invasive sensory assault. It was an ethos that would carry over into the next decade of underground filmmaking.
After completing his second feature, the manga adaptation Hiruko the Goblin (1990), Tsukamoto returned to the world of mutated scrap with a second Tetsuo film. Tetsuo II: The Body Hammer (1992) serves more as a companion piece than as a straightforward sequel or remake. It is a new interpretation of the same basic premise – man-machine transformation – but played out on a larger scale. Tomorowo Taguchi reprises his role as a (different) salaryman. This time, he lives in a sterile, high-rise apartment with his wife and young son. His metamorphosis is triggered when his son is kidnapped by an underground faction of skinheads who want to harness the salaryman’s cyber-kinetic powers so that they can augment their bodies into organic weaponry in order to bring about mass destruction.
If the ethos of the first Tetsuo was related to The Fly, the second film perhaps bears more of a similarity to Cronenberg's Scanners (1981) as the salaryman comes to blows with his mutated brother (played by Tsukamoto), the leader of the skinhead group. In doing so, Body Hammer moves away from the surreal, macabre horror of its predecessor and towards an action/science fiction movie template, although plenty of avant-garde trimmings still remain to bridge, connect and embellish ideas. As a result, Tsukamoto operates within a somewhat more conventional and ultimately more accessible narrative structure, and a larger budget means that he is able to fully realise the end-of-the-world scenario suggested in the closing moments of the first film. As per Tsukamoto's wish, Tokyo is razed to the ground.
Like the first film, Body Hammer blurs the distinction between form and content. It also re-imagines concepts that were given little attention the first time around; the metal fetishist's obsession with physical perfection, as suggested by the photos of successful athletes that adorn his shack-like abode, is 'brought very much to the foreground in the shape of the skinhead cult, which consists of athletes, bodybuilders and boxers who push their training regimen to the extreme' [ 12 ] – a topic that would dominate Tsukamoto's subsequent film project. It's a possible indictment of the obsessive body-culture phenomenon of the 1980s, which saw more and more people going to the gym and taking advantage of artificial enhancements such as plastic surgery; a time when there was a strong emphasis on physical perfection and beauty.
The film also hints at the direction Tsukamoto would start to take with future productions: the environmental focus has shifted ever so slightly from the decaying urban sprawl to the sterile functionality of the metropolis centre, and more of an emphasis has been placed on the relationship between the salaryman and his wife; a marriage torn apart by invasive elements. The catalyst for transformation this time is not from infection or a curse as suggested in the original, but from demonstrative rage. The prospect of the salaryman’s son being killed by the skinheads provokes the first instance of transformation, which occurs again when his wife is kidnapped, causing multiple gun-barrels to erupt from his chest and limbs. Rage would go on to transform Tsukamoto’s protagonists in future films Tokyo Fist (1995) and Bullet Ballet (1998), albeit figuratively instead of literally.
In the wake of Tetsuo’s startling domestic and international success, one would think that it would have acted as a catalyst to trigger a wave of similarly styled films. In retrospect, this wasn’t the case as very few filmmakers decided to follow the path forged by Tsukamoto’s breakthrough work. However, former colleague Shozin Fukui was one of the few to accept the challenge.
Like Tsukamoto and Izumiya before him, Fukui is a disciple of Sogo Ishii’s breakthrough independent filmmaking during the late seventies as well as the music that inspired it. Born in 1961, and upon moving to Tokyo in the early eighties, Fukui quickly became infatuated with the burgeoning underground punk music scene and set about forming his own band with friends. These same friends would serve as Fukui’s cast and crew on early forays into filmmaking such as Metal Days (1986) and the short films Gerorisuto (1986) and Caterpillar (1988) [ 13 ].
After serving as assistant director to both Tsukamoto and Ishii – on Tetsuo: The Iron Man and the short film The Master of Shiatsu (Shiatsu Oja, 1989) respectively – Fukui started to write and direct his own feature films. His first was Pinocchio 964 (1991), and while it did not share the philosophical leanings of Tetsuo two years before, it was nonetheless an effective manifesto for Fukui’s thematic preoccupations: how technological augmentation impacts the fragile and potentially volatile nature of the human mind. The story focuses on the titular protagonist, a brainwashed individual who has been scientifically modified to operate as a sex slave. Upon being thrown away by his sexually demanding female owners, Pinocchio wanders the streets of present-day Tokyo, where he meets Himiko, a fellow destitute. She takes Pinocchio under her wing and he begins to fall in love with her, prompting the return of previously erased memories. When Pinocchio realises what has happened to him and who is responsible, he plans revenge. Meanwhile, the corporation in question organises a search party to reclaim its missing product.
Pinocchio 964 is frequently compared to Tetsuo by cyberpunk enthusiasts and academics alike. The films represent the feature-length debuts of Fukui and Tsukamoto respectively, and both exhibit a similarly energetic and manic execution. It can be argued that Fukui’s style is indebted to Tsukamoto’s, given that he served as assistant director for a period of Tetsuo’s filming. Fukui’s previous short, Caterpillar – made at around the same time as Tetsuo – features similar techniques, including hyperactive hand-held camerawork and stop-motion animation, as well as similar imagery: mounds of scrap, ubiquitous urban living and flesh merged with machinery.
However, there are some major differences. The most apparent is inherent in the film’s mise-en-scène: Pinocchio 964 is in colour (except for its opening sequence) whereas Tetsuo is black and white – though its sequel was in colour. Thematically, unlike Tsukamoto’s notion of technology as an organic, mutating disease, Fukui’s film depicts the body transformed as the direct result of man-made augmentation, similar to early Cronenberg – Shivers (1975) and Rabid (1977) for example – as well as Mary Shelley’s Frankenstein (1818). Like the monster in Shelley’s seminal work, Pinocchio is at first oblivious to his condition, but time spent in the real world causes him to realise his artificial existence and seek revenge against his creator. However, unlike Frankenstein’s monster, Pinocchio was not constructed from scratch; he is his namesake in reverse – a human turned product through neuro-tampering and memory wiping. Fukui seems to suggest that modernity is programming the populace to concern themselves with nothing but sex; a sentiment that’s readily apparent in the media and advertising industries.
It could be argued, then, that Pinocchio 964 is the more precise cyberpunk text, offering a speculative stance on potential future technologies, i.e. altered living through cybernetic assistance. As suggested in Tetsuo, these technological changes have a perverse impact on sex; Pinocchio is compelled to suckle on Himiko’s breasts in a brain-damaged, baby-like stupor – not knowing any better – whereas the salaryman’s girlfriend is enticed and drawn to ride her lover’s newly developed drill-penis.
The conclusion of Pinocchio 964 sees further transformation beyond the esoteric boundaries previously established. Like the salaryman and metal fetishist, Pinocchio and Himiko – both victims of the corporation’s scientific dalliances – merge together in a manner and style reminiscent of Peter Jackson’s first lo-fi feature Bad Taste (1987), suggesting the start of a new, technologically altered meta-race in keeping with Cronenberg’s corporeal philosophy of the “New Flesh” [ 14 ].
Thanks to Tetsuo’s worldwide success – along with other newly emerging work like Takeshi Kitano’s gritty police caper Violent Cop (1989) – Pinocchio 964 enjoyed a modicum of cult success as international demand for strange and ultra-violent Japanese cinema began to increase. Film companies such as Toho started to cater to this newfound interest by introducing direct-to-video distribution lines that specialised in outputting low-budget, sensationalist material. One such entry was Tomoo Haraguchi’s descriptively titled Mikadroid: Robokill Beneath Disco Club Layla (1991), a cyber/steampunk horror about a buried, technologically augmented super-soldier – built by Japanese scientists during the Second World War – being re-activated and going on a murderous rampage. Largely unheard of, the film is perhaps most notable for featuring a (brief) acting turn from a then little-known Kiyoshi Kurosawa, who would later go on to direct internationally renowned works such as Cure (1997), Pulse (2001) and Tokyo Sonata (2008).
Both Pinocchio 964 and Mikadroid would be overshadowed by Tsukamoto’s higher budget and higher profile Tetsuo sequel, which arrived the following year. In the meantime, Fukui was already planning the next project; one that would take almost five years to gestate and execute.
The result was Rubber’s Lover (1996), Fukui’s second and, at present, last feature; a subterranean post-industrial nightmare of human experimentation and bodily destruction. A clandestine group of scientists experiment on human guinea pigs pinched from the street in an attempt to unlock psychic powers. This is achieved through a combination of computer interfaces, sensory deprivation and regular injections of ether, usually resulting in the subject dying a gruesome and explosive death.
Often interpreted as a loose prequel to Pinocchio 964, Rubber’s Lover, despite similarities to its predecessor, also represents a distinct contrast. The most readily apparent differences are the film’s use of monochrome photography – a decision made by Fukui when he disliked the look of the S&M-flavoured costumes when filmed in colour – and the film’s comparatively subdued pace, favouring atmosphere over propulsion. However, his pre-established tropes remain: invasive technologies; bizarre sexual practices as a by-product of such technologies; retrograde/outdated equipment; mutation; and a fetish for bodily fluids – pus, blood, vomit etc.
Like Tetsuo, Rubber’s Lover depicts the establishment of a new world order through corporeal and technologically informed symbiosis: the biological co-existence between flesh and metal and the destruction of mental and physical barriers respectively. Rubber’s Lover also takes great pleasure in distorting the boundaries and exploring the grey area between sex and violence; much more so than Pinocchio 964. One scene sees a frenzied character tearing the flesh off another, mid-coitus on a hospital bed whilst a corporate scumbag laughs in the corner of the room. The researcher’s successful test subject, Motomiya – a former member of the team who has since become addicted to ether – is made to wear a strange, rubber S&M bodysuit, further augmented with makeshift technological add-ons of monitors, wires and outdated gizmos. Their nurse’s rotating, ether injector is especially phallic and is used on their subjects rectally for “immediate effect”, suggesting a notion of perversion that transcends sex and violence and into the realms of science and technology.
Rubber’s Lover’s perverted view of science not only echoes some of the imagery and themes of Izumiya’s Death Powder (and to a lesser extent, Haraguchi’s Mikadroid) but also the real-life, deranged human experiments carried out by the Japanese military’s infamous Unit 731 on Chinese prisoners of war during the 1930s and 40s [ 15 ]; depicting a doomsday scenario in which the human race tears itself apart in the pursuit of scientific understanding and technological superiority. Motomiya’s ether addiction is caused by one of his research colleagues. The same colleague later kidnaps and rapes a representative of the project’s benefactor sent in to oversee its shutdown. She is also subjected to D.D.D (Direct Digital Drive), the apparatus used in the project’s testing.
Fukui’s fascination with the frailty and destructibility of the human mind comes to fruition as Motomiya quickly goes mad, burdened with newly unlocked psychic powers that he can’t control. Like Pinocchio 964, Rubber’s Lover examines the mental transformation that invasive technologies inflict on the human condition. This stands in stark contrast to Tsukamoto’s Tetsuo films, which focus primarily on the physical transformation caused by the same factors; perhaps the key difference between their otherwise similar contributions to the sub-genre.
By the mid-to-late 1990s, Japanese cyberpunk cinema was starting to wane; having been overtaken by the blood-stained yakuza films of Kitano and Miike in terms of international prominence, who would in turn be overshadowed by the new wave of supernatural, J-Horror films that emerged at the turn of the century including Hideo Nakata’s The Ring (1998) and Ring 2 (1999).
Fukui’s Rubber’s Lover was the last underground cyberpunk film of the nineties and arguably the last ever. Upon its completion and after a limited video release, Fukui put filmmaking on hold to join a video production company, where he worked for the best part of ten years. Tsukamoto had moved on also, continuing his exploration of the symbiosis between city and citizen with a matured palette. His films Tokyo Fist (1995) and Bullet Ballet (1998) eschew virtually all of the science fiction and horror imagery that had characterised his work previously.
Cyberpunk was kept alive within Japan’s anime and manga industries but it wasn’t until the turn of the millennium when it returned to cinema. The year 2001 saw the release of two films that would give the genre a new lease of life. Mamoru Oshii made Avalon, a live-action Japanese/Polish co-production about an addictive virtual simulation game. It was Oshii’s first film since his internationally successful anime feature film adaptation of Ghost in the Shell (1995) – he would go on to direct the sequel; Ghost in the Shell 2: Innocence (2004).
Shot in Poland with Polish actors and a Japanese crew, Avalon’s themes of virtual reality place it in the same territory as much of the American-produced cyberpunk that surfaced during the nineties: The Lawnmower Man (1992), Strange Days (1995), The Thirteenth Floor (1999), The Matrix (1999) and Cronenberg’s similarly concerned eXistenZ (1999), for example. It was also redolent of many similarly themed anime releases – both theatrical and televised – that emerged during the same decade as the real-life phenomenon of the internet started to make the world seem even smaller; Oshii’s own adaptation of Ghost in the Shell and Ryutaro Nakamura’s Serial Experiments: Lain (1998) series were particularly indicative of these technological and cultural changes. Another notable example and precursor to much of the VR-centric work that would appear in the 1990s is the four-part anime series Megazone 23 (1985-1989), which explores the idea of a post-apocalyptic Tokyo existing as a futuristic virtual simulation.
The second film from 2001 was Sogo Ishii’s Electric Dragon 80,000V, which not only served as Ishii’s return to punk cinema after a decade of more meditative output but, like Burst City, spearheaded a new generation of like-minded filmmaking that has evolved Japanese cyberpunk into a new and strange beast. As with the sensory-assault cinema favoured by Tsukamoto and Fukui, Electric Dragon is a film that is experienced rather than watched, stimulating the most primitive parts of the brain in a tsunami of sound and image.
The premise is simple enough: a young boy acquires the ability to channel and wield electricity after a childhood accident whilst climbing some power lines – an ability further enhanced by multiple jolts of electro-shock therapy administered for violent behaviour. Now an adult with megawatts of power coursing through him, Dragon Eye Morrison is a professional reptile investigator, searching alleyways for lost lizards. Equilibrium is disturbed by the arrival of Thunderbolt Buddha, a TV repairman turned vigilante whose electro-conductive talents are the result of mechanical wizardry. The two meet and battle for supremacy on Tokyo’s rooftops.
As was the case with Burst City, Electric Dragon leans less towards the cyber and more towards the punk aspect of the sub-genre, with Ishii following the train of thought he employed with his music videos and concert films during the 1980s. The film’s title also makes reference to the old days, partly derived from ‘Live Spot 20,000V’, the concert venue that plays a pivotal role in Burst City and one of Ishii’s early shorts, The Solitude of One Divided by 880,000 (1978). Electric Dragon is less about the nightmare and more about anarchic expression at odds with the post-modern universe.
However, some cyber signifiers do remain: the oppressive Tokyo setting realised in stark monochrome; the fetishistic attitude towards power lines, aerials, ventilation ducts and other ubiquitous technological appliances; the hyperactive and frequently expressionist delivery; the low-budget, guerrilla-like execution; and, like Tetsuo, the concept of two characters augmented through technology, wielding powers they can’t fully control, coming to blows. Dragon Eye Morrison has to clamp himself to a metal bed frame at night, whilst Thunderbolt Buddha’s penchant for electronic devices to assist in his nocturnal excursions sometimes gets the better of him as he fights for control of his own body.
The psycho-sexual themes that dominated past Japanese cyberpunk have been replaced with an equally primal notion of animal magnetism. Morrison’s electric power is derived from the ‘Dragon’ that’s embedded in all living things. His rage unlocks the strength of the dragon, meaning that he can harness more energy by sucking it out of household appliances or by creating a non-melodic racket on his electric guitar; a high-voltage cacophony of noise and expression announcing that Ishii’s punk spirit is still alive and well. Indeed, lead actor Tadanobu Asano occasionally guests in Ishii’s industrial noise-punk ensemble Mach 1.67, which provided the film’s propulsive soundtrack. The film would later be used to accompany the group’s live shows, a strategy Ishii pioneered back in 1983 when he made the short film Asia Strikes Back – a little-known cyberpunk piece that provided the template for Shozin Fukui’s preferred set-up of underground experiments gone haywire – to back up the album and tour of the short-lived punk supergroup The Bacillus Army.
Similar to Tsukamoto’s Tetsuo, dialogue in Electric Dragon 80,000V is minimal, so the narrative is powered mainly by image and follows a similar template: the protagonist is seen acquiring his power; the antagonist then challenges the protagonist to combat; and the final act sees them clash. All of this is wrapped up in a lean, high-energy sixty-minute package. Ishii’s film is not only a throwback to the eighties cyberpunk manifesto but a reminder that, rather than being characterised by heavy science fiction concepts as was the case in the West, Japanese cyberpunk was defined by its independence, attitude and the will to create something out of nothing.
In the years following Electric Dragon 80,000V, a new wave of low-budget horror/science fiction began to surface, largely thanks to increased DVD distribution channels, cheaper production techniques and the ever-increasing reach of the internet. Films like Hellevator: The Bottled Fools (Hiroki Yamaguchi, 2004), Meatball Machine (Yudai Yamaguchi & Junichi Yamamoto, 2005), The Machine Girl (Noboru Iguchi, 2008) and Tokyo Gore Police (Yoshihiro Nishimura, 2008) have ushered in a new era of cyberpunk-informed, gore-centric movies that have since been termed ‘splatter-punk’.
These splatter-punk movies share the independent spirit of their precursors, swapping 8mm and 16mm film for cheap DV technology and retaining as much of the budget as possible for make-up, costume and practical effects. Many of the effects in these films depict mutation and body alteration; splatter re-imaginings of the flesh-metal fusions of Tetsuo and the perverse, organic weaponry of Tetsuo II. Similar to the “splatstick” horror of early Sam Raimi and Peter Jackson, the effects and transformations lean towards the ridiculous for comedic effect. One mutated character in Tokyo Gore Police wields an oversized cannon made of contorted flesh, protruding from his crotch much like an erect penis, suggesting – in a very tongue-in-cheek manner – the blur between sex and violence posited by Tsukamoto and Fukui. Yamaguchi and Yamamoto’s Meatball Machine is perhaps the closest to the Japanese cyberpunk of old: parasitic aliens infect unsuspecting people, promptly turning them into macabre man-machine teratoids that fight it out.
In many ways, this ‘splatter-punk’ phase is also reminiscent of the special-effects race that occurred with American horror movies during the 1980s; Cronenberg included. As practical effects became more advanced, a seemingly never-ending slew of films were produced, trying to out-shock one another with advancing exercises in gore. The same can be said here; the ante seems to be continually raised as each new release contorts and morphs the body in increasingly elaborate and grotesque ways.
A reason for this is that many of these films’ directors initially came from special effects backgrounds: Tokyo Gore Police director Yoshihiro Nishimura, for instance, has supervised the special effects for many modern gore productions, including Noboru Iguchi’s The Machine Girl and Robo-Geisha (2009). In fact, many of these films are made through Fundoshi Corps, a production company founded by Nishimura, Iguchi and film producer Yukihiko Yamaguchi that specialises in cheaply produced, over-the-top movies of this ilk. It has proven to be a successful business model, as their output continues to build a strong international fanbase hungry for perverse and outlandish content.
The recurring touchstones of combined eroticism and perversion are also present. For the most part, however, these films forego subverted techno-fetishism in favour of contemporary V-Cinema and Pink Film preoccupations. The Machine Girl, for instance, takes typical imagery such as the Japanese schoolgirl – a popular conceit in much of the nation’s anime, manga and pornography industries – to new abject levels, connecting bullet-spewing hardware to her severed limbs and even granting her the ability to grow weaponry from the small of her back; skirt raised, of course.
Unfortunately, it would appear that live-action Japanese cyberpunk cinema has moved on from the daring, experimental underground from whence it came. The remnants of its ideas are now utilised in violent gore shockers that are bereft of the immediacy and philosophical potential of their progenitors. The movement, once an expression of attitude, concerns and frustration with the world, the way it’s structured and the technology used – not just an exploration of the grey area between science fiction and horror – seems to have disappeared.
However, in 2009 Shinya Tsukamoto announced his return to the world of cyberpunk with a third Tetsuo project. Tetsuo: The Bullet Man is not only a return but a new beginning for Tsukamoto, as it is his first English-language film; an attempt to expose the demented world of Tetsuo to a wider audience. It premiered at the 2009 Venice Film Festival to a mixed reception, prompting Tsukamoto to continue working on it. Subsequent showings – at the 2010 Tribeca Film Festival, for instance – have found greater critical favour, but a vital caveat still remains.
Like the punk scene it emulated, Japanese cyberpunk was pertinent and inextricably linked to a specific time and place. More than a sub-genre, it tackled the anxieties of its period in ways that conventional expression could not. But now that we’re in the technologically dependent twenty-first century – the post-human nightmare now a grim reality – can it still be relevant?
Midnight Eye feature: Post-Human Nightmares The World of …
“The mind is its own place, and in itself / Can make a Heav’n of Hell, a Hell of Heav’n” – Satan, in Milton’s Paradise Lost
Far-fetched? Right now, the abolitionist project sounds fanciful. The task of redesigning our legacy-wetware still seems daunting. Rewriting the vertebrate genome, and re-engineering the global ecosystem, certainly pose immense scientific challenges even to a technologically advanced civilisation.
The ideological obstacles to a happy world, however, are more formidable still. For we’ve learned how to rationalise the need for mental pain – even though its nastier varieties blight innumerable lives, and even though its very existence will soon become optional.
Today, any scientific blueprint for getting rid of suffering via biotechnology is likely to be reduced to one of two negative stereotypes. Both stereotypes are disturbing, pervasive, and profoundly ill-conceived. Together, they impoverish our notion of what a Post-Darwinian regime of life-long happiness might be like; and delay its prospect.
Rats, of course, have a very poor image in our culture. Our mammalian cousins are still widely perceived as “vermin”. Thus the sight of a blissed-out, manically self-stimulating rat does not inspire a sense of vicarious happiness in the rest of us. On the contrary, if achieving invincible well-being entails launching a program of world-wide wireheading – or its pharmacological and/or genetic counterparts – then most of us will recoil in distaste.
Yet the Olds’ rat, and the image of electronically-triggered bliss, embody a morally catastrophic misconception of the landscape of options for paradise-engineering in the aeons ahead. For the varieties of genetically-coded well-being on offer to our successors needn’t be squalid or self-centred. Nor need they be insipid, empty and amoral à la Huxley’s Brave New World. Our future modes of well-being can be sublime, cerebral and empathetic – or take forms hitherto unknown.
Instead of being toxic, such exotically enriched states of consciousness can be transformed into the everyday norm of mental health. When it’s precision-engineered, hedonic enrichment needn’t lead to unbridled orgasmic frenzy. Nor need hedonic enrichment entail getting stuck in a wirehead rut. This is partly because in a naturalistic setting, even the crudest dopaminergic drugs tend to increase exploratory behaviour, will-power and the range of stimuli an organism finds rewarding. Novelty-seeking is normally heightened. Dopaminergics aren’t just euphoriants: they also enhance “incentive-motivation”. On this basis, our future is likely to be more diverse, not less.
Perhaps surprisingly too, controlled euphoria needn’t be inherently “selfish” – i.e. hedonistic in the baser, egoistic sense. Non-neurotoxic and sustainable analogues of empathogen hug-drugs like MDMA (“Ecstasy”) – which releases a lot of extra serotonin, dopamine and pro-social oxytocin – may potentially induce extraordinary serenity, empathy and love for others. An arsenal of cognitive enhancers will allow us to be smarter too. For feeling blissful isn’t the same as being “blissed-out”.
Ultimately, however, using drugs or electrodes for psychological superhealth is arguably no better than taking medicines to promote physical superhealth. Such interventions can serve only as dirty and inelegant stopgaps. In an ideal world, our emotional, intellectual and physical well-being would be genetically predestined. A capacity for sustained bliss may be a design-feature of any Post-Darwinian mind. Indeed some futurists predict we will one day live in a paradise where suffering is physiologically inconceivable – a world where we can no more imagine what it is like to suffer than we can presently imagine what it is like to be a bat.
Technofantasy? Quite possibly. Today it is sublime bliss that is effectively inconceivable to most of us.
Olds mapped the whole brain. Stimulation of some areas – the periaqueductal grey matter, for instance – proved aversive: an animal will work hard to avoid it. “Aversive” is probably a euphemism: electrical pulses to certain neural pathways may be terrifying or excruciating. Euphemisms aside, our victims are being tortured.
Happily, more regions in the brain are rewarding to stimulate than are unpleasant. Yet electrical stimulation of most areas, including the great bulk of the neocortex, is motivationally neutral.
One brain region in particular does seem especially enjoyable to stimulate: the medial forebrain bundle. The key neurons in this bundle originate in the ventral tegmental area (VTA) of the midbrain. VTA neurons manufacture the catecholamine neurotransmitter dopamine. Dopamine is transported down the length of the neuron, packaged in synaptic vesicles, and released into the synapse. Crucially, VTA neuronal pathways project to the nucleus accumbens. VTA dopaminergic neurons are under continuous inhibition by the gamma-aminobutyric acid (GABA) system.
In recent years, a convergence of neuropharmacological evidence, clinical research, and electrical stimulation experiments has led many researchers to endorse some version of the “final common pathway” hypothesis of reward. There are anomalies and complications which the final-common-pathway hypothesis still has to account for. Any story which omits the role and complex interplay of, say, “the love hormone”, oxytocin; the “chocolate amphetamine”, phenylethylamine; the glutamate system; the multiple receptor sub-types of serotonin, noradrenaline and the opioid families; and most crucially of all, the intra-cellular post-synaptic cascade within individual neurons, is going to be simplistic. Yet there is accumulating evidence that recreational euphoriants, clinically useful mood-brighteners, and perhaps all rewarding experiences critically depend on the mesolimbic dopamine pathway.
One complication is that pleasure and desire circuitry have intimately connected but distinguishable neural substrates. Some investigators believe that the role of the mesolimbic dopamine system is not primarily to encode pleasure, but “wanting” i.e. incentive-motivation. On this analysis, endomorphins and enkephalins – which activate mu and delta opioid receptors most especially in the ventral pallidum – are most directly implicated in pleasure itself. Mesolimbic dopamine, signalling to the ventral pallidum, mediates desire. Thus “dopamine overdrive”, whether natural or drug-induced, promotes a sense of urgency and a motivation to engage with the world, whereas direct activation of mu opioid receptors in the ventral pallidum induces emotionally self-sufficient bliss.
Certainly, the dopamine neurotransmitter is not itself the brain’s magic pleasure chemical. Only the intra-cellular cascades triggered by neurotransmitter binding to the post-synaptic receptor presumably hold the elusive, tantalising key to everlasting happiness; and they are not yet fully understood. But it’s notable that dopamine D2 receptor-blocking phenothiazines, for example, and other aversive drugs such as kappa opioid agonists, tend to inhibit activity, or increase the threshold of stimulation, in the mesolimbic dopamine system. Conversely, heroin and cocaine both mimic the effects of direct electrical stimulation of the reward-pathways.
Comparing the respective behavioural effects of heroin and cocaine is instructive. If rats or monkeys are hooked up to an intravenous source of heroin (or other potent mu opioid agonist such as fentanyl), the animals will happily self-administer the drug indefinitely; but they still find time to sleep and eat. If rats or monkeys have the opportunity to self-administer cocaine without limit, however, they will do virtually nothing else. They continue to push a drug-delivery lever for as long as they are physically capable of doing so. Within weeks, if not days, they will lose a substantial portion of their body weight – up to 40%. Within a month, they will be dead.
Humans don’t have this problem. So what keeps our mesolimbic dopamine and opioidergic systems so indolent? Why does a “hedonic treadmill” stop us escaping from a genetically-predisposed “set-point” of emotional ill-being? Why can’t social engineering, politico-economic reform or psychotherapy – as distinct from germ-line genetic re-writes – make us durably happy?
Evolutionary biology provides some plausible answers. A capacity to experience many different flavours of unhappiness – and short-lived joys too – was adaptive in the ancestral environment. Anger, fear, disgust, sadness, anxiety and other core emotions each played a distinctive information-theoretic role, enhancing the reproductive success of our forebears. Thus at least a partial explanation of endemic human misery today lies in ancient selection pressure and the state of the unreconstructed vertebrate genome. Selfish DNA makes its throwaway survival-machines feel discontented a lot of the time. A restless discontent is typically good for promoting its “inclusive fitness”, even if it’s bad news for us. Nature simply doesn’t care; and God has gone missing, presumed dead.
On the African savannah, naturally happy and un-anxious creatures typically got outbred or eaten, or both. Rank theory suggests that depression, the internalised correlate of the yielding sub-routine, is far more common because low spirits were frequently more adaptive among group-living organisms than manic self-assertion. Group living can be genetically adaptive for the individual members of a tribe in a predator-infested environment, but we’ve paid a very high psychological price.
Whatever the origins of malaise, a web of negative feedback mechanisms in the CNS conspires to prevent well-being – and (usually) extreme ill-being – from persisting for very long.
Life-enriching emotional superhealth will depend on subverting these homeostatic mechanisms. The hedonic set-point around which our lives fluctuate can be genetically switched to a far higher altitude plateau of well-being.
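The homeostatic logic described above can be caricatured in a few lines of code. The following toy model is not from the essay and its numbers are arbitrary; it only illustrates the claimed dynamic: a transient intervention (a drug, a windfall) decays back towards the set-point under negative feedback, whereas relocating the set-point itself shifts where the whole trajectory settles.

```python
# Toy model (illustrative only): hedonic tone as a leaky integrator that
# homeostatically relaxes toward a genetically fixed set-point.
def simulate(set_point=0.0, gain=0.2, steps=50, shock=5.0):
    """Apply a one-off positive 'shock' and record the trajectory as
    negative feedback drags hedonic tone back toward the set-point."""
    state = set_point + shock  # transient high
    trajectory = [state]
    for _ in range(steps):
        state += gain * (set_point - state)  # homeostatic correction
        trajectory.append(state)
    return trajectory

traj = simulate()
# The high decays geometrically toward the set-point; only raising
# `set_point` itself (the essay's proposal) changes the resting level.
```

Under this caricature, no sequence of shocks produces lasting elevation, which is the essay's "hedonic treadmill"; a higher `set_point` is the only durable change.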
At the most immediate level, firing in the neurons of the ventral tegmental area is held in check mainly by gamma-aminobutyric acid (GABA), the major inhibitory neurotransmitter in the vertebrate central nervous system. Opioids act to diminish the braking action of GABA on the dopaminergic neurons of the VTA. In consequence, VTA neurons release more dopamine in the nucleus accumbens. The reuptake of dopamine in the nucleus accumbens is performed by the dopamine transporter. The transporter is blocked by cocaine. Dopamine reuptake inhibition induces euphoria, augmented by activation of the sigma1 receptors. [Why? We don’t know. Science has no understanding of why sentience – or insentience for that matter – exists at all.] Amphetamines block the dopamine transporter too; but they also act directly on the dopaminergic neurons and promote neurotransmitter release.
The mesolimbic dopamine pathway passes from the VTA to the nucleus accumbens and ascends to the frontal cortex where it innervates the higher brain. This architecture is explicable in the light of evolution. Raw limbic emotional highs and lows – in the absence of represented objects, events or properties to be (dis)satisfied about – would be genetically useless to the organism. To help self-replicating DNA differentially leave more copies of itself, the textures of subjective niceness and nastiness must infuse our representations of the world, and – by our lights – the world itself. Hedonic tone must be functionally coupled to motor-responses initiated on the basis of the perceived significance of the stimulus to the organism, and of the anticipated consequences – adaptively nice or nasty – of simulations of alternative courses of action that the agent can perform. Natural selection has engineered the “encephalisation of emotion”. We often get happy, sad or worried “about” the most obscure notions. One form this encephalisation takes is our revulsion at the prospect of turning ourselves into undignified wirehead rats – or soma-pacified dupes of a ruling elite. Both scenarios strike us as too distasteful to contemplate.
In any case, wouldn’t we get bored of life-long bliss?
Apparently not. That’s what’s so revealing about wireheading. Unlike food, drink or sex, the experience of pleasure itself exhibits no tolerance, even though our innumerable objects of desire certainly do so. Thus we can eventually get bored of anything – with a single exception. Stimulation of the pleasure-centres of the brain never palls. Fire them in the right way, and boredom is neurochemically impossible. Its substrates are missing. Electrical stimulation of the mesolimbic dopamine system is more intensely rewarding than eating, drinking, and love-making; and it never gets the slightest bit tedious. It stays exhilarating. The unlimited raw pleasure conjured up by wirehead bliss certainly inspires images of monotony in the electrode-naïve outsider; but that’s a different story altogether.
Yet are wireheading or supersoma really the only ways to ubiquitous ecstasy? Or does posing the very question reflect our stunted conception of the diverse family of paradise-engineering options in prospect?
This question isn’t an exercise in idle philosophising. As molecular neuroscience advances, not just boredom, but pain, terror, disgust, jealousy, anxiety, depression, malaise and any form of unpleasantness are destined to become truly optional. Their shifting gradients played a distinct information-theoretic role in the lives of our ancestors in the environment of evolutionary adaptation. But their individual textures (i.e. “what it feels like”, “qualia”) will soon be either abolished or genetically shifted to a more exalted plane of well-being instead. Our complicity in their awful persistence, and ultimately our responsibility for sustaining and creating them in the living world, is destined to increase as the new reproductive technologies mature and the revolution in post-genomic medicine unfolds. The biggest obstacles to a cruelty-free world – a world cured of any obligate suffering – are ideological, not technical. Yet whatever the exact time-scale of its replacement, in evolutionary terms we are on the brink of a Post-Darwinian Transition.
Natural selection has previously been “blind”. Complications aside, genetic mutations and meiotic shufflings are quasi-random, i.e. random with respect to what is favoured by natural selection. Nature has no capacity for foresight or contingency-planning. During the primordial Darwinian Era of life on Earth, selfishness in the technical genetic sense has closely overlapped with selfishness in the popular sense: they are easily confused, and indeed selfishness in the technical sense is unavoidable. But in the new reproductive era – where (suites of) alleles will be societally chosen and actively designed by quasi-rational agents in anticipation of their likely behavioural effects – the character of fitness-enhancing traits will be radically different.
For a start, the elimination of such evolutionary relics as the ageing process will make any form of (post-)human reproduction on earth – whether sexual or clonal – a relatively rare and momentous event. It’s likely that designer post-human babies will be meticulously pre-planned. The notion that all reproductive decisions will be socially regulated in a post-ageing world is abhorrent to one’s libertarian instincts; but if they weren’t regulated, then the Earth would soon simply exceed its carrying capacity – whether it is 15 billion people or even 150 billion. If reproduction on earth does cease to be a personal affair and becomes a (democratically accountable?) state-sanctioned choice, then a major shift in the character of typically adaptive behavioural traits will inevitably occur. Taking a crude genes’ eye-view, a variant allele coding for, say, enhanced oxytocin expression, or a sub-type of serotonin receptor predisposing to unselfishness in the popular sense, will actually carry a higher payoff in the technical selfish sense – hugely increasing the likelihood that such alleles and their customised successors will be differentially pre-selected in preference to alleles promoting, say, anti-social behaviour.
Told like this, of course, the neurochemical story is a simplistic parody. It barely even hints at the complex biological, socio-economic and political issues at stake. Just who will take the decisions, and how? What will be the role in shaping post-human value systems, not just of exotic new technologies, but of alien forms of emotion whose metabolic pathways and substrates haven’t yet been disclosed to us? What kinds, if any, of inorganic organisms or non-DNA-driven states of consciousness will we want to design and implement? What will be the nature of the transitional era – when our genetic mastery of emotional mind-making is still incomplete? How can we be sure that unknown unknowns won’t make things go wrong? True, Darwinian life may often be dreadful, but couldn’t botched paradise-engineering make it even worse? And even if it couldn’t, might there not be some metaphysical sense in which life in a blissful biosphere could still be morally “wrong” – even if it strikes its inhabitants as self-evidently right?
Unfortunately, we will only begin to glimpse the implications of Post-Darwinism when paradise-engineering becomes a mature scientific discipline and mainstream research tradition. Yet as the vertebrate genome is rewritten, the two senses of “selfish” will foreseeably diverge. Today they are easily conflated. A tendency to quasi-psychopathic callousness to other sentient beings did indeed enhance the inclusive fitness of our DNA in the evolutionary past. In the new reproductive era, such traits are potentially maladaptive. They may even disappear as the Reproductive Revolution matures.
The possibility that we will become not just exceedingly happier, but nicer, may sound too good to be true. Perhaps we’ll just become happier egotists – in every sense. But if a genetic predisposition to niceness becomes systematically fitness-enhancing, then genetic selfishness – in the technical sense of “selfish” – ensures that benevolence will not just triumph; it will also be evolutionarily stable, in the games-theory sense, against “defectors”.
Needless to say, subtleties and technical complexities abound here. The very meaning of being “nice” to anyone or anything, for instance, is changed if well-being becomes a generic property of mental life. Either way, once suffering becomes biologically optional, then only sustained and systematic malice towards others could allow us to perpetuate it for ever. And although today we may sometimes be spiteful, there is no evidence that institutionalised malevolence will prevail.
From an ethical perspective, the task of hastening the Post-Darwinian Transition has a desperate moral urgency – brought home by studying just how nasty “natural” pain can be. Those who would resist the demise of unpleasantness may be asked: is it really permissible to compel others to suffer when any form of distress becomes purely optional? Should the metabolic pathways of our evolutionary past be forced on anyone who prefers an odyssey of life-long happiness instead? If so, what means of coercion should be employed, and by whom?
Or is paradise-engineering the only morally serious option? And much more fun.
Posted: July 21, 2016 at 2:17 am
July 18, 2013 | By Louis Sahagun
More than a hundred explorers, scientists and government officials will gather at Long Beach’s Aquarium of the Pacific on Friday to draft a blueprint to solve a deep blue problem: About 95% of the world’s oceans remains unexplored. The invitation-only forum, hosted by the aquarium and the National Oceanic and Atmospheric Administration, aims to identify priorities, technologies and collaborative strategies that could advance understanding of the uncharted mega-wilderness that humans rely on for oxygen, food, medicines, commerce and recreation.
June 12, 2013 | By Brad Balukjian
Dancer, rapper, and, oh yeah, Man on the Moon Buzz Aldrin is talking, but are the right people listening? One of the original moonwalkers (“Michael Jackson always did it backwards!” Aldrin complained) challenged the United States to pick up the space slack Tuesday evening, mere hours after China sent three astronauts into orbit. Speaking in front of a friendly crowd of 880 at the Richard Nixon Presidential Library and Museum in Yorba Linda, Aldrin criticized the U.S. for not adequately leading the international community in space exploration, and suggested that we bump up our federal investment in space while still encouraging the private sector’s efforts.
February 2, 2013 | By Holly Myers
It will come as news to many, no doubt, that there is a Warhol on the moon. And a Rauschenberg and an Oldenburg – a whole “Moon Museum,” in fact, containing the work of six artists in all, in the form of drawings inscribed on the surface of a ceramic chip roughly the size of a thumbprint. Conceived by the artist Forrest Myers in 1969, the chip was fabricated in collaboration with scientists at Bell Laboratories and illicitly slipped by a willing engineer between some sheets of insulation on the Apollo 12 lander module.
January 29, 2013 | By Patrick J. McDonnell and Ramin Mostaghim
BEIRUT – U.S. officials are not exactly welcoming Iran’s revelation this week that the Islamic Republic has sent a monkey into space and brought the creature back to Earth safely. The report by Iranian media recalled for many the early days of space flight, when both the United States and the Soviet Union launched animal-bearing spacecraft as a prelude to human space travel. But State Department spokeswoman Victoria Nuland told reporters in Washington on Monday that the reported mission raises concerns about possible Iranian violations of a United Nations ban on development of ballistic missiles capable of delivering nuclear weapons.
December 22, 2012 | By Scott Gold, Los Angeles Times
WATERTON CANYON, Colo. – The concrete-floored room looks, at first glance, like little more than a garage. There is a red tool chest, its drawers labeled: “Hacksaws.” “Allen wrenches.” There are stepladders and vise grips. There is also, at one end of the room, a half-built spaceship, and everyone is wearing toe-to-fingertip protective suits. “Don’t. Touch. Anything.” Bruce Jakosky says the words politely but tautly, like a protective father – which, effectively, he is. Jakosky is the principal investigator behind NASA’s next mission to Mars, putting him in the vanguard of an arcane niche of science: planetary protection – the science of exploring space without messing it up. As NASA pursues the search for life in the solar system, the cleanliness of robotic explorers is crucial to avoid contaminating other worlds.
December 6, 2012 | By Amina Khan and Rosie Mestel, Los Angeles Times
Years of trying to do too many things with too little money have put NASA at risk of ceding leadership in space exploration to other nations, according to a new report that calls on the space agency to make wrenching decisions about its long-term strategy and future scope. As other countries – including some potential adversaries – are investing heavily in space, federal funding for NASA is essentially flat and under constant threat of being cut. Without a clear vision, that fiscal uncertainty makes it all the more difficult for the agency to make progress on ambitious goals like sending astronauts to an asteroid or Mars while executing big-ticket science missions, such as the $8.8-billion James Webb Space Telescope, says the analysis released Wednesday by the National Research Council.
Posted: July 18, 2016 at 3:37 pm
There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles–all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.
A singularity is a sign that your model doesn’t apply past a certain point, not infinity arriving in real life.
A singularity, as most commonly used, is a point at which expected rules break down. The term comes from mathematics, where a point on a curve at which the slope or the value of a function becomes undefined or infinite is known as a singularity.
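As a concrete illustration (my example, not the article’s): the function f(x) = 1/x has a singularity at x = 0, where the function value blows up and the curve breaks:

```latex
f(x) = \frac{1}{x}, \qquad
\lim_{x \to 0^{+}} f(x) = +\infty, \qquad
\lim_{x \to 0^{-}} f(x) = -\infty .
```

The model “f behaves smoothly” simply stops applying at that point, which is the sense the article means when it says a singularity marks where your rules break down.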
The term has extended into other fields; the most notable use is in astrophysics, where a singularity is a point (usually, but perhaps not exclusively, at the center of a black hole) where the curvature of spacetime approaches infinity.
This article, however, is not about the mathematical or physics uses of the term, but rather the borrowing of it by various futurists. They define a technological singularity as the point beyond which we can know nothing about the world. So, of course, they then write at length on the world after that time.
It’s intelligent design for the IQ 140 people. This proposition that we’re heading to this point at which everything is going to be just unimaginably different – it’s fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can’t obscure that fact for me, no matter what numbers he marshals in favor of it. He’s very good at having a lot of curves that point up to the right.
In transhumanist belief, the “technological singularity” refers to a hypothetical point beyond which human technology and civilization is no longer comprehensible to the current human mind. The theory of technological singularity states that at some point in time humans will invent a machine that through the use of artificial intelligence will be smarter than any human could ever be. This machine in turn will be capable of inventing new technologies that are even smarter. This event will trigger an exponential explosion of technological advances of which the outcome and effect on humankind is heavily debated by transhumanists and singularists.
Many proponents of the theory believe that the machines eventually will see no use for humans on Earth and simply wipe us out; their intelligence being far superior to ours, there would probably be nothing we could do about it. They also fear that the use of extremely intelligent machines to solve complex mathematical problems may lead to our extinction. The machine may theoretically respond to our question by turning all matter in our solar system or our galaxy into a giant calculator, thus destroying all of humankind.
Critics, however, believe that humans will never be able to invent a machine that will match human intelligence, let alone exceed it. They also attack the methodology that is used to “prove” the theory by suggesting that Moore’s Law may be subject to the law of diminishing returns, or that other metrics used by proponents to measure progress are totally subjective and meaningless. Theorists like Theodore Modis argue that progress measured in metrics such as CPU clock speeds is decreasing, refuting Moore’s Law. (As of 2015, not only is Moore’s Law beginning to stall, Dennard scaling is also long dead; returns in raw compute power from transistors are subject to diminishing returns as we use more and more of them; there are also Amdahl’s Law and Wirth’s law to take into account; and raw computing power simply doesn’t scale up linearly in providing real marginal utility. Even after all those things, we still haven’t taken into account the fundamental limitations of conventional computing architecture. Moore’s law suddenly doesn’t look to be the panacea to our problems now, does it?)
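Amdahl’s Law, invoked above, is easy to make concrete. The sketch below is a minimal numerical illustration (the 95% parallel fraction is a hypothetical figure, not one from the article): no matter how many processors you add, speedup saturates at 1 divided by the serial fraction.

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's Law: overall speedup of a task of which only
    `parallel_fraction` can be spread over `n_processors`."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 95% of the work parallelisable, returns diminish sharply:
# the speedup can never exceed 1 / 0.05 = 20, however many processors you throw at it.
for n in (2, 16, 1024, 1_000_000):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

This is the sense in which “raw computing power simply doesn’t scale up linearly”: a millionfold increase in processors buys roughly the same speedup as a thousandfold increase once the serial fraction dominates.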
Transhumanist thinkers see a chance of the technological singularity arriving on Earth within the twenty-first century, a concept that most rationalists either consider a little too messianic in nature or ignore outright. Some of the wishful thinking may simply be the expression of a desire to avoid death, since the singularity is supposed to bring the technology to reverse human aging, or to upload human minds into computers. However, recent research, supported by singularitarian organizations including MIRI and the Future of Humanity Institute, does not support the hypothesis that near-term predictions of the singularity are motivated by a desire to avoid death, but instead provides some evidence that many optimistic predictions about the timing of a singularity are motivated by a desire to “gain credit for working on something that will be of relevance, but without any possibility that their prediction could be shown to be false within their current career”.
Don’t bother quoting Ray Kurzweil to anyone who knows a damn thing about human cognition or, indeed, biology. He’s a computer science genius who has difficulty in perceiving when he’s well out of his area of expertise.
Eliezer Yudkowsky identifies three major schools of thinking when it comes to the singularity. While all share common ground in advancing intelligence and rapidly developing technology, they differ in how the singularity will occur and the evidence to support the position.
Under this school of thought, it is assumed that change and development of technology and human (or AI assisted) intelligence will accelerate at an exponential rate. So change a decade ago was much faster than change a century ago, which was faster than a millennium ago. While thinking in exponential terms can lead to predictions about the future and the developments that will occur, it also means that past events are an unreliable source of evidence for making these predictions.
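The accelerating-change reasoning amounts to fitting an exponential to past progress and reading it forward. A toy extrapolation makes the mechanics, and the fragility, of that move explicit (the two-year doubling period here is a stand-in, not a measured figure):

```python
def extrapolate(value_now: float, doubling_period_years: float,
                years_ahead: float) -> float:
    """Naive exponential extrapolation of a growth trend:
    project today's value forward assuming a fixed doubling period."""
    return value_now * 2 ** (years_ahead / doubling_period_years)

# With a 2-year doubling period, a trend grows 32x in a decade.
print(extrapolate(1.0, 2.0, 10.0))  # 32.0
```

The projection is only as good as the assumption that the doubling period stays fixed, which is exactly what the critics in the preceding sections dispute.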
The “event horizon” school posits that the post-singularity world would be unpredictable. Here, the creation of a super-human artificial intelligence will change the world so dramatically that it would bear no resemblance to the current world, or even the wildest science fiction. This school of thought sees the singularity most like a single point event rather than a process; indeed, it is this thesis that spawned the term “singularity.” However, this view of the singularity does treat transhuman intelligence as some kind of magic.
This posits that the singularity is driven by a feedback cycle between intelligence-enhancing technology and intelligence itself. As Yudkowsky (who endorses this view) puts it: “What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces.” When this feedback loop of technology and intelligence begins to increase rapidly, the singularity is upon us.
There is also a fourth singularity school which is much more popular than the other three: It’s all a load of baloney! This position is not popular with high-tech billionaires.
This is largely dependent on your definition of “singularity”.
The intelligence explosion singularity is by far the most unlikely. According to present calculations, a hypothetical future supercomputer may well not be able to replicate a human brain in real time. We presently don’t even understand how intelligence works, and there is no evidence that intelligence is self-iterative in this manner – indeed, it is not unlikely that improvements on intelligence become harder the smarter you are, meaning that each successive improvement is increasingly difficult to execute. Indeed, how much smarter than a human being it is even possible for something to be is an open question. Energy requirements are another issue; humans can run off of Doritos and Mountain Dew, while supercomputers require vast amounts of energy to function. Unless such an intelligence can solve problems better than groups of humans, its greater intelligence may well not matter, as it may not be as efficient as groups of humans working together to solve problems.
Another major issue arises from the nature of intellectual development; if an artificial intelligence needs to be raised and trained, it may well take twenty years or more between generations of artificial intelligences to get further improvements. More intelligent animals seem to generally require longer to mature, which may put another limitation on any such “explosion”.
Accelerating change is questionable; in real life, the rate of patents per capita actually peaked in the 20th century, with a minor decline since then, despite the fact that human beings have gotten more intelligent and gotten superior tools. As noted above, Moore’s Law has been in decline, and outside the realm of computers, the rate of increase in other things has not been exponential – airplanes and cars continue to improve, but they do not improve at the ridiculous rate of computers. It is likely that once computers hit physical limits of transistor density, their rate of improvement will fall off dramatically; already even today, computers which are “good enough” continue to operate for many years, something which was unheard of in the 1990s, when old computers were rapidly and obviously rendered obsolete by new ones.
According to this point of view, the Singularity is a past event, and we live in a post-Singularity world.
The rate of advancement has actually been in decline in recent times, as patents per capita have gone down, and the rate of increase of technology has declined rather than risen, though the basal rate is higher than it was in centuries past. According to this point of view, the intelligence explosion and increasing rate of change already happened with computers, and now that everyone has handheld computing devices, the rate of increase is going to decline as we hit natural barriers in how much additional benefit we gain from additional computing power. The densification of transistors on microchips has slowed by about a third, and the absolute limit on transistors is approaching – a true, physical barrier which cannot be bypassed or broken, and which would require an entirely different means of computing to create a still denser microchip.
From the point of view of travel, humans have gone from walking to sailing to railroads to highways to airplanes, but communication has now reached the point where a lot of travel is obsolete – the Internet is omnipresent and allows us to effectively communicate with people on any corner of the planet without travelling at all. From this point of view, there is no further point of advancement, because we’re already at the point where we can be anywhere on the planet instantly for many purposes, and with improvements in automation, the amount of physical travel necessary for the average human being has declined over recent years. Instant global communication and the ability to communicate and do calculations from anywhere are a natural physical barrier, beyond which further advancement is less meaningful, as it is mostly just making things more convenient – the cost is already extremely low.
The prevalence of computers and communications devices has completely changed the world, as has the presence of cheap, high-speed transportation technology. The world of the 21st century is almost unrecognizable to people from the founding of the United States in the latter half of the 18th century, or even to people from the height of the industrial era at the turn of the 20th century.
Extraterrestrial technological singularities might become evident from acts of stellar/cosmic engineering. One such possibility for example would be the construction of Dyson Spheres that would result in the altering of a star’s electromagnetic spectrum in a way detectable from Earth. Both SETI and Fermilab have incorporated that possibility into their searches for alien life. 
A different view of the concept of singularity is explored in the science fiction book Dragon’s Egg by Robert Lull Forward, in which an alien civilization on the surface of a neutron star, being observed by human space explorers, goes from Stone Age to technological singularity in the space of about an hour in human time, leaving behind a large quantity of encrypted data for the human explorers that are expected to take over a million years (for humanity) to even develop the technology to decrypt.
No signs of extraterrestrial civilizations have been found as of 2016.