Tag Archives: post

Ron Paul Lashes Out At WaPo’s Witch Hunt: "Expect Such …

Posted: December 2, 2016 at 12:20 pm

Washington Post Peddles Tarring of Ron Paul Institute as Russian Propaganda, via The Ron Paul Institute for Peace & Prosperity:

The Washington Post has a history of misrepresenting Ron Paul's views. Last year the supposed newspaper of record ran a feature article by David A. Fahrenthold in which Fahrenthold grossly mischaracterized Paul as an advocate for calamity, oppression, and poverty, the opposite of the goals Paul routinely expresses and, indeed, expressed clearly in a speech at the event upon which Fahrenthold's article purported to report. Such fraudulent attacks on the prominent advocate for liberty and a noninterventionist foreign policy fall in line with the newspaper's agenda. As Future of Freedom Foundation President Jacob G. Hornberger put it in a February editorial, the Post's agenda is guided by the interventionist mindset that undergirds the mainstream media.

On Thursday, the Post published a new article by Craig Timberg complaining of a flood of so-called fake news supported by a sophisticated Russian propaganda campaign that “created and spread misleading articles online with the goal of punishing Democrat Hillary Clinton, helping Republican Donald Trump and undermining faith in American democracy.” To advance this conclusion, Timberg points to PropOrNot, an organization of anonymous individuals formed this year, as having identified more than 200 websites as routine peddlers of Russian propaganda during the election season. Look on the PropOrNot list. There is the Ron Paul Institute for Peace and Prosperity's (RPI) website, RonPaulInstitute.org, listed among websites termed Russian propaganda outlets.

What you will not find on the PropOrNot website is any particularized analysis of why the RPI website, or any website for that matter, is included on the list. Instead, you will see only sweeping generalizations from an anonymous organization. The very popular website drudgereport.com even makes the list. While listed websites span the gamut of political ideas, they tend to share in common an independence from the mainstream media.

Timberg's article can be seen as yet another big media attempt to shift the blame for Democratic presidential nominee Hillary Clinton's loss of the presidential election away from Clinton, her campaign, and the Democratic National Committee (DNC) that undermined Sen. Bernie Sanders' (I-VT) challenge to Clinton in the Democratic primary.

The article may also be seen as another step in the effort to deter people from looking to alternative sources of information by labeling those information sources as traitorous or near-traitorous.

At the same time, the article may be seen as playing a role in the ongoing push to increase tensions between the United States and Russia, a result that benefits people, including those involved in the military-industrial complex, who profit from the growth of US national security activity in America and overseas.

This is not the first time Ron Paul and his institute have been attacked for sounding pro-Russian or anti-American. Such attacks have been advanced even by self-proclaimed libertarians.

Expect that such attacks will continue. They are an effort to tar Paul and his institute so people will close themselves off from information Paul and RPI provide each day in furtherance of the institute's mission to continue and expand Paul's lifetime of public advocacy for a peaceful foreign policy and the protection of civil liberties at home. While peace and liberty will benefit most people, powerful interests seek to prevent the realization of these objectives. Indeed, expect attacks against RPI to escalate as the institute continues to reach growing numbers of people with its educational effort.


Word Games: What the NSA Means by Targeted Surveillance …

Posted: November 29, 2016 at 1:22 am

We all know that the NSA uses word games to hide and downplay its activities. Words like “collect,” “conversations,” “communications,” and even “surveillance” have suffered tortured definitions that create confusion rather than clarity.

There's another one to watch: “targeted” v. “mass” surveillance.

Since 2008, the NSA has seized tens of billions of Internet communications. It uses the Upstream and PRISM programs, which the government claims are authorized under Section 702 of the FISA Amendments Act, to collect hundreds of millions of those communications each year. The scope is breathtaking, including the ongoing seizure and searching of communications flowing through key Internet backbone junctures,[1] the searching of communications held by service providers like Google and Facebook, and, according to the government’s own investigators, the retention of significantly more than 250 million Internet communications per year.[2]

Yet somehow, the NSA and its defenders still try to pass 702 surveillance off as “targeted surveillance,” asserting that it is incorrect when EFF and many others call it “mass surveillance.”

Our answer: if “mass surveillance” includes the collection of the content of hundreds of millions of communications annually and the real-time search of billions more, then the PRISM and Upstream programs under Section 702 fully satisfy that definition.

This word game is important because Section 702 is set to expire in December 2017. EFF and our colleagues who banded together to stop the Section 215 telephone records surveillance are gathering our strength for this next step in reining in the NSA. At the same time, the government spin doctors are trying to avoid careful examination by convincing Congress and the American people that this is just “targeted” surveillance and doesn't impact innocent people.

PRISM and Upstream surveillance are two types of surveillance that the government admits that it conducts under Section 702 of the FISA Amendments Act, passed in 2008. Each kind of surveillance gives the U.S. government access to vast quantities of Internet communications.[3]

Upstream gives the NSA access to communications flowing through the fiber-optic Internet backbone cables within the United States.[4] This happens because the NSA, with the help of telecommunications companies like AT&T, makes wholesale copies of the communications streams passing through certain fiber-optic backbone cables. Upstream is at issue in EFF's Jewel v. NSA case.

PRISM gives the government access to communications in the possession of third-party Internet service providers, such as Google, Yahoo, or Facebook. Less is known about how PRISM actually works, something Congress should shine some light on between now and December 2017.[5]

Note that those two programs existed prior to 2008; they were just done under a shifting set of legal theories and authorities.[6] EFF has had evidence of the Upstream program from whistleblower Mark Klein since 2006, and we have been suing to stop it ever since.

Despite government claims to the contrary, here's why PRISM and Upstream are “mass surveillance”:

(1) Breadth of acquisition: First, the scope of collection under both PRISM and Upstream surveillance is exceedingly broad. The NSA acquires hundreds of millions, if not billions, of communications under these programs annually.[7] Although, in the U.S. government's view, the programs are nominally “targeted,” that targeting sweeps so broadly that the communications of innocent third parties are inevitably and intentionally vacuumed up in the process. For example, a review of a “large cache of intercepted conversations” provided by Edward Snowden and analyzed by the Washington Post revealed that 9 out of 10 account holders “were not the intended surveillance targets but were caught in a net the agency had cast for somebody else.”[8] The material reviewed by the Post consisted of 160,000 intercepted e-mail and instant message conversations, 7,900 documents (including “medical records sent from one family member to another, resumes from job hunters and academic transcripts of schoolchildren”), and more than 5,000 private photos.[9] In all, the cache revealed the “daily lives of more than 10,000 account holders who were not targeted [but were] catalogued and recorded nevertheless.”[10] The Post estimated that, at the U.S. government's annual rate of “targeting,” collection under Section 702 would encompass more than 900,000 user accounts annually. By any definition, this is “mass surveillance.”

(2) Indiscriminate full-content searching. Second, in the course of accomplishing its so-called “targeted” Upstream surveillance, the U.S. government, in part through its agent AT&T, indiscriminately searches the contents of billions of Internet communications as they flow through the nation's domestic, fiber-optic Internet backbone. This type of surveillance, known as “about surveillance,” involves the NSA’s retention of communications that are neither to nor from a target of surveillance; rather, it authorizes the NSA to obtain any communications “about” the target.[11] Even if the acquisition of communications containing information “about” a surveillance target could, somehow, still be considered “targeted,” the method for accomplishing that surveillance cannot be: “about” surveillance entails a content search of all, or substantially all, international Internet communications transiting the United States.[12] Again, by any definition, Upstream surveillance is “mass surveillance.” For PRISM, while less is known, it seems the government is able to search through, or require the companies like Google and Facebook to search through, all the customer data stored by the corporations for communications to or from its targets.

To accomplish Upstream surveillance, the NSA copies (or has its agents like AT&T copy) Internet traffic as it flows through the fiber-optic backbone. This copying, even if the messages are only retained briefly, matters under the law. Under U.S. constitutional law, when the federal government “meaningfully interferes” with an individual's protected communications, those communications have been “seized” for purposes of the U.S. Constitution's Fourth Amendment. Thus, when the U.S. government copies (or has copied) communications wholesale and diverts them for searching, it has “seized” those communications under the Fourth Amendment.

Similarly, U.S. wiretapping law triggers a wiretap at the point of “interception by a device,” which occurs when the Upstream mechanisms gain access to our communications.[13]

Why does the government insist that it's “targeted”? For Upstream, it may be because the initial collection and searching of the communications, done by service providers like AT&T on the government's behalf, is really, really fast and much of the information initially collected is then quickly disposed of. In this way the Upstream collection is unlike the telephone records collection, where the NSA kept all of the records it seized for years. Yet this difference should not change the conclusion that the surveillance is “mass surveillance.” First, all communications flowing through the collection points upstream are seized and searched, including content and metadata. Second, as noted above, the amount of information retained, over 250 million Internet communications per year, is astonishing.

Thus, regardless of the time spent, the seizure and search are comprehensive and invasive. Using advanced computers, the NSA and its agents can do a full-text content search, in the blink of an eye, through billions, if not trillions, of your communications, including emails, social media, and web searches. Second, as demonstrated above, the government retains a huge amount of the communications, far more about innocent people than about its targets, so even based on what is retained the surveillance is better described as “mass” rather than “targeted.”

So it is completely correct to characterize Section 702 as mass surveillance. It stems from the confluence of (1) the method the NSA employs to accomplish its surveillance, particularly Upstream, and (2) the breadth of that surveillance.

Next time you see the government or its supporters claim that PRISM and Upstream are “targeted” surveillance programs, you'll know better.

[1] See, e.g., Charlie Savage, NSA Said to Search Content of Messages to and From U.S., N.Y. Times (Aug. 8, 2013) (“The National Security Agency is searching the contents of vast amounts of Americans' e-mail and text communications into and out of the country[.]”). This article describes an NSA practice known as “about surveillance,” a practice that involves searching the contents of communications as they flow through the nation's fiber-optic Internet backbone.

[2] FISA Court Opinion by Judge Bates entitled [Caption Redacted], at 29 (“NSA acquires more than two hundred fifty million Internet communications each year pursuant to Section 702”), https://www.eff.org/document/october-3-2011-fisc-opinion-holding-nsa-surveillance-unconstitutional (hereinafter “Bates Opinion”). According to the PCLOB report, the current number is significantly higher than 250 million communications. PCLOB Report on 702 at 116.

[3] Bates Opinion at 29; PCLOB at 116.

[6] First, the Bush Administration relied solely on broad claims of Executive power, grounded in secret legal interpretations written by the Department of Justice. Many of those interpretations were subsequently abandoned by later Bush Administration officials. Beginning in 2006, DOJ was able to turn to the Foreign Intelligence Surveillance Court to sign off on its surveillance programs. In 2007, Congress finally stepped into the game, passing the Protect America Act, which, a year later, was substantially overhauled and passed again as the FISA Amendments Act. While neither of those statutes mentions the breadth of the surveillance, and it was not discussed publicly during the Congressional processes, both have been cited by the government as authorizing it.

[11] Bates Opinion at 15.

[12] PCLOB report at 119-120.

[13] See 18 U.S.C. § 2511(1)(a); U.S. v. Councilman, 418 F.3d 67, 70-71, 79 (1st Cir. 2005) (en banc).


2 senior officials ask for head of NSA to be replaced …

Posted: November 25, 2016 at 10:09 am

The recommendation by Defense Secretary Ash Carter and Director of National Intelligence James Clapper was made last month, according to The Washington Post, which first reported the recommendation.

The replacement of such a senior person would be unprecedented at a time when the US intelligence community has repeatedly warned about the threat of cyberattacks.

A major reason for their recommendation is the belief that Rogers was not working fast enough on a critical reorganization to address the cyberthreat. The Obama administration has wanted to split the two roles: the NSA, which handles signals intelligence, would become a civilian-led agency, while a separate Cyber Command would remain under the military, the official told CNN.

Right now, one man, Rogers, heads both. He took over as head of the NSA and Cyber Command in April 2014.

The official said the initial plan was to announce the reorganization and that given the shift of personnel, Rogers would be thanked for his service and then move on.

Another issue — but not the sole driving factor in removing Rogers, according to the source — is a continuing concern about security.

Harold Martin, a former contractor for Booz Allen who was working at the NSA, has been charged and is being held without bail after allegedly stealing a large amount of classified information. Prosecutors allege he stole the names of “numerous” covert US agents. He was arrested in August after federal authorities uncovered what they have described as mountains of highly classified intelligence in his car, home and shed, which they said had been accumulated over many years.

Martin’s motivation remains unclear, and federal authorities have not alleged that he gave or sold the information to anyone.

Separately, this comes as Rogers is one of those under consideration by President-elect Donald Trump to be the next director of national intelligence, CNN has previously reported. Rogers went on a private trip on Thursday to meet with Trump, a trip that took many administration officials by surprise.

Some officials also have complained about Rogers’ leadership style, according to the Post.

The Pentagon declined to comment, as did a spokesman for the director of national intelligence. The NSA did not return a request for comment.

The idea for dividing NSA’s efforts has been in the works for a while.

“So we had them both in the same location and able to work with one another. That has worked very well, but it’s not necessarily going to — the right approach to those missions overall in the long run. And we need to look at that and it’s not just a matter of NSA and CYBERCOM,” Carter told a tech industry group in September.

CNN’s Jim Sciutto contributed to this report.


A Post-Human World Is Coming. Design Has Never Mattered …

Posted: November 21, 2016 at 10:55 am

Digital Design Theory (Princeton Architectural Press, 2016) is available on Amazon.

Futurist experts have estimated that by the year 2030, computers in the price range of inexpensive laptops will have computational power equivalent to human intelligence. The implications of this change will be dramatic and revolutionary, presenting significant opportunities and challenges to designers. Already machines can process spoken language, recognize human faces, detect our emotions, and target us with highly personalized media content. While technology has tremendous potential to empower humans, soon it will also be used to make them thoroughly obsolete in the workplace, whether by replacing, displacing, or surveilling them. More than ever, designers need to look beyond human intelligence and consider the effects of their practice on the world and on what it means to be human.

The question of how to design a secure human future is complicated by the uncertainties of predicting that future. As it is practiced today, design is strategically positioned to improve the usefulness and quality of human interactions with technology. Like all human endeavors, however, the practice of design risks marginalization if it is unable to evolve. When envisioning the future of design, our social and psychological frames of reference unavoidably and unconsciously bias our interpretation of the world. People systematically underestimate exponential trends such as Moore's law, for example, which tells us that in 10 years we will have 32 times more total computing power than today. Indeed, as computer scientist Ray Kurzweil observes, “We won't experience 100 years of technological advances in the 21st century; we will witness on the order of 20,000 years of progress (again, when measured by today's rate of progress), or about 1,000 times greater than what was achieved in the 20th century.”
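As a rough illustration of the arithmetic behind that “32 times” figure, here is a minimal sketch; the two-year doubling period is an assumption of this illustration, not a number given in the essay:

    # Sketch of the exponential-growth arithmetic referenced above.
    # Assumes a two-year doubling period, one common reading of Moore's law;
    # the essay does not state which doubling period it has in mind.
    DOUBLING_PERIOD_YEARS = 2

    def growth_factor(years):
        """Total computing power relative to today after `years` years."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    print(growth_factor(10))   # 32.0 -- five doublings in ten years
    print(growth_factor(100))  # ~1.1e15 -- why linear intuition badly
                               # underestimates a century of such growth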

Design-oriented research provides a possible means to anticipate and guide rapid changes, as design, predicated as it is on envisioning alternatives through “collective imagining,” is inherently more future-oriented than other fields. It therefore seems reasonable to ask how technology-design efforts might focus more effectively on enabling human-oriented systems that extend beyond design for humanity. In other words, is it possible to design intelligent systems that safely design themselves?

Imagine a future scenario in which extremely powerful computerized minds are simulated and shared across autonomous virtual or robotic bodies. Given the malleable nature of such super-intelligences, which won't be limited by the hardwiring of DNA information, one can reasonably assume that they will be free of the limitations of a single material body, or the experience of a single lifetime, allowing them to tinker with their own genetic code, integrate survival knowledge directly from the learnings of others, and develop a radical new form of digital evolution that modifies itself through nearly instantaneous exponential cycles of imitation and learning, and passes on its adaptations to successive generations of “self.”

In such a post-human future, the simulation of alternative histories and futures could be used as a strategic evolutionary tool, allowing imaginary scenarios to be inhabited and played out before individuals or populations commit to actual change. Not only would the lineage of such beings be perpetually enhanced by automation, leading to radical new forms of social relationships and values, but the systems that realize or govern those values would likely become the instinctual mechanism of a synchronized and sentient “techno-cultural mind.”

Bringing such speculative and hypothetical scenarios into cultural awareness is one way that designers can evaluate possibilities and determine how best to proceed. What should designers do to prepare for such futures? What methods should be applied to their research and training?

Today's interaction designers shape human behavior through investigative research, systemic thinking, creative prototyping, and rapid iteration. Can these same methods be used to address the multitude of longer-term social and ethical issues that designers create? Do previous inventions, such as the internal combustion engine or nuclear power, provide relevant historical lessons to learn from? If little else, reflecting on super-intelligence through the lens of nuclear proliferation and global warming throws light on the existential consequences of poor design. It becomes clear that while systemic thinking and holistic research are useful methods for addressing existential risks, creative prototyping or rapid iteration with nuclear power or the environment as materials is probably unwise. Existential risks do not allow for a second chance to get it right. The only possible course of action when confronted with such challenges is to examine all possible future scenarios and use the best available subjective estimates of objective risk factors.

Simulations can also be leveraged to heighten designers' awareness of trade-offs. Consider the consequences of contemporary interaction design, for example: intuitive interfaces, systemic experiences, and service economies. When current design methods are applied to designing future systems, each of these patterns can be extended through imagined simulations of post-human design. Intuitive human-computer interfaces become interfaces between post-humans; they become new ways of mediating interdependent personal and cultural values, new social and political systems. Systemic experiences become new kinds of emergent post-human perception and awareness. Service economies become the synapses of tomorrow's underlying system of techno-cultural values, new moral codes.

The first major triumph of interaction design, the design of the intuitive interface, merged technology with aesthetics. Designers adapted modernism's static typography and industrial styling and learned to address human factors and usability concerns. Today, agile software practices and design thinking ensure the intuitive mediation of human and machine learning. We adapt to the design limitations of technological systems, and they adapt in return based on how we behave. This interplay is embodied by the design of the interface itself, between perception and action, affordance and feedback. As the adaptive intelligence of computer systems grows over time, design practices that emphasize the human aspects of interface design will extend beyond the one-sided human perspective of machine usability toward a reciprocal relationship that values intelligent systems as partners. In light of the rapid evolution of these new forms of artificial and synergetic life, the quality and safety of their mental and physical experiences may ultimately deserve equal if not greater consideration than ours.

Interaction design can also define interconnected networks of interface touch-points and shape them into choose-your-own-adventures of human experience. We live in a world of increasingly seamless integration between Wi-Fi networks and thin clients, between phones, homes, watches, and cars. In the near future, crowdsourcing systems coupled with increasingly pervasive connectivity services and wearable computer interfaces will generate massive stockpiles of data that catalog human behavior to feed increasingly intuitive learning machines. Just as human-centered design crafts structure and experience to shape intuition, post-human-centered design will teach intelligent machine systems to design the hierarchies and compositions of human behavior. New systems will flourish as fluent extensions of our digital selves, facilitating seamless mobility throughout systems of virtual identity and the governance of shared thoughts and emotions.

Applying interaction design to post-human experience requires designers to think holistically beyond the interface to the protocols and exchanges that unify human and machine minds. Truly systemic post-human-centered designers recognize that such interfaces will ultimately manifest in the psychological fabric of post-human society at much deeper levels of meaning and value. Just as today's physical products have slid from ownership to on-demand digital services, our very conception of these services will become the new product. In the short term, advances in wearable and ubiquitous computing technology will render our inner dimensions of motivation and self-perception tangible as explicit and actionable cues. Ultimately such manifestations will be totally absorbed by the invisible hand of post-human cognition and emerge as new forms of social and self-engineering. Design interventions at this level will deeply control the post-human psyche, building on research methodologies of experience economics designed for the strategic realization of social and cognitive value. Can a market demand be designed for goodwill toward humans at this stage, or does the long tail of identity realization preclude it? Will we live in a utopian world of socialized techno-egalitarian fulfillment and love or become a eugenic cult of celebrity self-actualization?

It seems unlikely that humans will stem their fascination with technology or stop applying it to improve themselves and their immediate material condition. Tomorrow's generation faces an explosion of wireless networks, ubiquitous computing, context-aware systems, intelligent machines, smart cars, robots, and strategic modifications to the human genome. While the precise form these changes will take is unclear, recent history suggests that they are likely to be welcomed at first and progressively advanced. It appears reasonable that human intelligence will become obsolete, economic wealth will reside primarily in the hands of super-intelligent machines, and our ability to survive will lie beyond our direct control. Adapting to cope with these changes, without alienating the new forms of intelligence that emerge, requires transcending the limitations of human-centered design. Instead, a new breed of post-human-centered designer is needed to maximize the potential of post-evolutionary life.

This essay was adapted with permission from Digital Design Theory (Princeton Architectural Press, 2016) edited by Helen Armstrong.



High Seas Fleet – Wikipedia

Posted: November 12, 2016 at 5:27 pm

The High Seas Fleet (Hochseeflotte) was the battle fleet of the German Imperial Navy and saw action during the First World War. The formation was created in February 1907, when the Home Fleet (Heimatflotte) was renamed as the High Seas Fleet. Admiral Alfred von Tirpitz was the architect of the fleet; he envisioned a force powerful enough to challenge the Royal Navy’s predominance. Kaiser Wilhelm II, the German Emperor, championed the fleet as the instrument by which he would seize overseas possessions and make Germany a global power. By concentrating a powerful battle fleet in the North Sea while the Royal Navy was required to disperse its forces around the British Empire, Tirpitz believed Germany could achieve a balance of force that could seriously damage British naval hegemony. This was the heart of Tirpitz’s “Risk Theory,” which held that Britain would not challenge Germany if the latter’s fleet posed such a significant threat to its own.

The primary component of the Fleet was its battleships, typically organized in eight-ship squadrons, though it also contained various other formations, including the I Scouting Group. At its creation in 1907, the High Seas Fleet consisted of two squadrons of battleships, and by 1914, a third squadron had been added. The dreadnought revolution in 1906 greatly affected the composition of the fleet; the twenty-four pre-dreadnoughts in the fleet were rendered obsolete and required replacement. Enough dreadnoughts for two full squadrons were completed by the outbreak of war in mid 1914; the eight most modern pre-dreadnoughts were used to constitute a third squadron. Two additional squadrons of older vessels were mobilized at the onset of hostilities, though by the end of the conflict, these formations were disbanded.

The fleet conducted a series of sorties into the North Sea during the war designed to lure out an isolated portion of the numerically superior British Grand Fleet. These operations frequently used the fast battlecruisers of the I Scouting Group to raid the British coast as the bait for the Royal Navy. These operations culminated in the Battle of Jutland, on 31 May – 1 June 1916, where the High Seas Fleet confronted the whole of the Grand Fleet. The battle was inconclusive, but the British won strategically, as it convinced Admiral Reinhard Scheer, the German fleet commander, that even a highly favorable outcome to a fleet action would not secure German victory in the war. Scheer and other leading admirals therefore advised the Kaiser to order a resumption of the unrestricted submarine warfare campaign. The primary responsibility of the High Seas Fleet in 1917 and 1918 was to secure the German naval bases in the North Sea for U-boat operations. Nevertheless, the fleet continued to conduct sorties into the North Sea and detached units for special operations in the Baltic Sea against the Russian Baltic Fleet. Following the German defeat in November 1918, the Allies interned the bulk of the High Seas Fleet in Scapa Flow, where it was ultimately scuttled by its crews in June 1919, days before the belligerents signed the Treaty of Versailles.

In 1898, Admiral Alfred von Tirpitz became the State Secretary for the Imperial Navy Office (Reichsmarineamt, RMA);[1] Tirpitz was an ardent supporter of naval expansion. During a speech in support of the First Naval Law on 6 December 1897, Tirpitz stated that the navy was “a question of survival” for Germany.[2] He also viewed Great Britain, with its powerful Royal Navy, as the primary threat to Germany. In a discussion with the Kaiser during his first month in his post as State Secretary, he stated that “for Germany the most dangerous naval enemy at present is England.”[3] Tirpitz theorized that an attacking fleet would require a 33 percent advantage in strength to achieve victory, and so decided that a 2:3 ratio would be required for the German navy. For a final total of 60 German battleships, Britain would be required to build 90 to meet the 2:3 ratio envisioned by Tirpitz.[3]

The Royal Navy had heretofore adhered to the so-called “two-power standard,” first formulated in the Naval Defence Act of 1889, which required a larger fleet than those of the next two largest naval powers combined.[4] The crux of Tirpitz's “risk theory” was that by building a fleet to the 2:3 ratio, Germany would be strong enough that even in the event of a British naval victory, the Royal Navy would incur damage so serious as to allow the third-ranked naval power to rise to preeminence. Implicit in Tirpitz's theory was the assumption that the British would adopt an offensive strategy that would allow the Germans to use mines and submarines to even the numerical odds before fighting a decisive battle between Heligoland and the Thames. Tirpitz in fact believed Germany would emerge victorious from a naval struggle with Britain, as he believed Germany to possess superior ships manned by better-trained crews, employing more effective tactics, and led by more capable officers.[3]

In his first program, Tirpitz envisioned a fleet of nineteen battleships, divided into two eight-ship squadrons, one ship as a flagship, and two in reserve. The squadrons were further divided into four-ship divisions. This would be supported by the eight Siegfried- and Odin-class coastal defense ships, six large and eighteen small cruisers, and twelve divisions of torpedo boats, all assigned to the Home Fleet (Heimatflotte).[5] This fleet was secured by the First Naval Law, which passed in the Reichstag on 28 March 1898.[6] Construction of the fleet was to be completed by 1 April 1904. Rising international tensions, particularly as a result of the outbreak of the Boer War in South Africa and the Boxer Rebellion in China, allowed Tirpitz to push through an expanded fleet plan in 1900. The Second Naval Law was passed on 14 June 1900; it doubled the size of the fleet to 38 battleships and 20 large and 38 small cruisers. Tirpitz planned an even larger fleet. As early as September 1899, he had informed the Kaiser that he sought at least 45 battleships, and potentially might secure a third double-squadron, for a total strength of 48 battleships.[7]

During the initial period of German naval expansion, Britain did not feel particularly threatened.[6] The Lords of the Admiralty felt the implications of the Second Naval Law were not a significantly more dangerous threat than the fleet set by the First Naval Law; they believed it was more important to focus on the practical situation rather than speculation on future programs that might easily be reduced or cut entirely. Segments of the British public, however, quickly seized on the perceived threat posed by the German construction programs.[8] Despite their dismissive reaction, the Admiralty resolved to surpass German battleship construction. Admiral John Fisher, who became the First Sea Lord and head of the Admiralty in 1904, introduced sweeping reforms in large part to counter the growing threat posed by the expanding German fleet. Training programs were modernized, old and obsolete vessels were discarded, and the scattered squadrons of battleships were consolidated into four main fleets, three of which were based in Europe. Britain also made a series of diplomatic arrangements, including an alliance with Japan that allowed a greater concentration of British battleships in the North Sea.[9]

Fisher’s reforms caused serious problems for Tirpitz’s plans; he counted on a dispersal of British naval forces early in a conflict that would allow Germany’s smaller but more concentrated fleet to achieve a local superiority. Tirpitz could also no longer depend on the higher level of training in both the German officer corps and the enlisted ranks, nor the superiority of the more modern and homogenized German squadrons over the heterogeneous British fleet. In 1904, Britain signed the Entente cordiale with France, Britain’s primary naval rival. The destruction of two Russian fleets during the Russo-Japanese War in 1905 further strengthened Britain’s position, as it removed the second of her two traditional naval rivals.[10] These developments allowed Britain to discard the “two power standard” and focus solely on out-building Germany. In October 1906, Admiral Fisher stated “our only probable enemy is Germany. Germany keeps her whole Fleet always concentrated within a few hours of England. We must therefore keep a Fleet twice as powerful concentrated within a few hours of Germany.”[11]

The most damaging blow to Tirpitz's plan came with the launch of HMS Dreadnought in February 1906. The new battleship, armed with a main battery of ten 12-inch (30 cm) guns, was considerably more powerful than any battleship afloat. Ships capable of battle with Dreadnought would need to be significantly larger than the old pre-dreadnoughts, which increased their cost and necessitated expensive dredging of canals and harbors to accommodate them. The German naval budget was already stretched thin; without new funding, Tirpitz would have to abandon his challenge to Britain.[12] As a result, Tirpitz went before the Reichstag in May 1906 with a request for additional funding. The First Amendment to the Second Naval Law was passed on 19 May and appropriated funding for the new battleships, as well as for the dredging required by their increased size.[6]

The Reichstag passed a second amendment to the Naval Law in March 1908 to provide an additional billion marks to cope with the growing cost of the latest battleships. The law also reduced the service life of all battleships from 25 to 20 years, which allowed Tirpitz to push for the replacement of older vessels earlier. A third and final amendment, passed in May 1912, represented a compromise between Tirpitz and moderates in parliament. The amendment authorized three new battleships and two light cruisers, and called for the High Seas Fleet to be equipped with three squadrons of eight battleships each, one squadron of eight battlecruisers, and eighteen light cruisers. Two 8-ship squadrons would be placed in reserve, along with two armored and twelve light cruisers.[13] By the outbreak of war in August 1914, only one eight-ship squadron of dreadnoughts, the I Battle Squadron, had been assembled, with the Nassau- and Helgoland-class battleships. The second squadron of dreadnoughts, the III Battle Squadron, which included four of the Kaiser-class battleships, was only completed when the four König-class battleships entered service by early 1915.[14] As a result, the third squadron, the II Battle Squadron, remained composed of pre-dreadnoughts through 1916.[15]

Before the 1912 naval law was passed, Britain and Germany attempted to reach a compromise with the Haldane Mission, led by the British War Minister Richard Haldane. The arms reduction mission ended in failure, however, and the 1912 law was announced shortly thereafter. The Germans were aware that, as early as 1911, the Royal Navy had abandoned the idea of a decisive battle with the German fleet in favor of a distant blockade at the entrances to the North Sea, which the British could easily control due to their geographical position. There emerged the distinct possibility that the German fleet would be unable to force a battle on its own terms, which would render it militarily useless. When the war came in 1914, the British did in fact adopt this strategy. Coupled with the restrictive orders of the Kaiser, who preferred to keep the fleet intact to be used as a bargaining chip in the peace settlements, the ability of the High Seas Fleet to affect the military situation was markedly reduced.[16]

The German Navy’s pre-war planning held that the British would be compelled to mount either a direct attack on the German coast to defeat the High Seas Fleet, or to put in place a close blockade. Either course of action would permit the Germans to whittle away at the numerical superiority of the Grand Fleet with submarines and torpedo boats. Once a rough equality of forces could be achieved, the High Seas Fleet would be able to attack and destroy the British fleet.[17] Implicit in Tirpitz’s strategy was the assumption that German vessels were better-designed, had better-trained crews, and would be employed with superior tactics. In addition, Tirpitz assumed that Britain would not be able to concentrate its fleet in the North Sea, owing to the demands of its global empire. At the start of a conflict between the two powers, the Germans would therefore be able to attack the Royal Navy with local superiority.[18]

The British, however, did not accommodate Tirpitz’s projections; from his appointment as the First Sea Lord in 1904, Fisher began a major reorganization of the Royal Navy. He concentrated British battleship strength in home waters, launched the Dreadnought revolution, and introduced rigorous training for the fleet personnel.[19] In 1912, the British concluded a joint defense agreement with France that allowed the British to concentrate in the North Sea while the French defended the Mediterranean.[20] Worse still, the British began developing the strategy of the distant blockade of Germany starting in 1904;[21] this removed the ability of German light craft to reduce Britain’s superiority in numbers and essentially invalidated German naval planning before the start of World War I.[22]

The primary base for the High Seas Fleet in the North Sea was Wilhelmshaven on the western side of the Jade Bight; the port of Cuxhaven, located on the mouth of the Elbe, was also a major base in the North Sea. The island of Heligoland provided a fortified forward position in the German Bight.[23] Kiel was the most important base in the Baltic, which supported the forward bases at Pillau and Danzig.[24] The Kaiser Wilhelm Canal through Schleswig-Holstein connected the Baltic and North Seas and allowed the German Navy to quickly shift naval forces between the two seas.[25] In peacetime, all ships on active duty in the High Seas Fleet were stationed in Wilhelmshaven, Kiel, or Danzig.[26] Germany possessed only one major overseas base, at Kiautschou in China,[27] where the East Asia Squadron was stationed.[28]

Steam ships of the period, which burned coal to fire their boilers, were naturally tied to coaling stations in friendly ports. The German Navy lacked sufficient overseas bases for sustained operations, even for single ships operating as commerce raiders.[29] The Navy experimented with a device to transfer coal from colliers to warships while underway in 1907, though the practice was not put into general use.[30] Nevertheless, German capital ships had a cruising range of at least 4,000 nmi (7,400 km; 4,600 mi),[31] more than enough to operate in the Atlantic Ocean.[Note 1]

In 1897, the year Tirpitz came to his position as State Secretary of the Navy Office, the Imperial Navy consisted of a total of around 26,000 officers, petty officers, and enlisted men of various ranks, branches, and positions. By the outbreak of war in 1914, this had increased significantly to about 80,000 officers, petty officers, and men.[35] Capital ships were typically commanded by a Kapitän zur See (captain at sea) or Korvettenkapitän (corvette captain).[26] Each of these ships typically had a total crew in excess of 1,000 officers and men;[31] the light cruisers that screened for the fleet had crew sizes between 300 and 550.[36] The fleet torpedo boats had crews of about 80 to 100 officers and men, though some later classes approached 200.[37]

In early 1907, enough battleships of the Braunschweig and Deutschland classes had been constructed to allow for the creation of a second full squadron.[38] On 16 February 1907,[39] Kaiser Wilhelm renamed the Home Fleet the High Seas Fleet. Admiral Prince Heinrich of Prussia, Wilhelm II's brother, became the first commander of the High Seas Fleet; his flagship was SMS Deutschland.[38] While on a peacetime footing, the Fleet conducted a routine pattern of training exercises, with individual ships, with squadrons, and with the combined fleet, throughout the year. The entire fleet conducted several cruises into the Atlantic Ocean and the Baltic Sea.[40] Prince Henry was replaced in late 1909 by Vice Admiral Henning von Holtzendorff, who served until April 1913. Vice Admiral Friedrich von Ingenohl, who would command the High Seas Fleet in the first months of World War I, took command following the departure of Vice Admiral von Holtzendorff.[41] SMS Friedrich der Grosse replaced Deutschland as the fleet flagship on 2 March 1913.[42]

Despite the rising international tensions following the assassination of Archduke Franz Ferdinand on 28 June, the High Seas Fleet began its summer cruise to Norway on 13 July. During the last peacetime cruise of the Imperial Navy, the fleet conducted drills off Skagen before proceeding to the Norwegian fjords on 25 July. The following day the fleet began to steam back to Germany, as a result of Austria-Hungary’s ultimatum to Serbia. On the 27th, the entire fleet assembled off Cape Skudenes before returning to port, where the ships remained at a heightened state of readiness.[42] War between Austria-Hungary and Serbia broke out the following day, and in the span of a week all of the major European powers had joined the conflict.[43]

The High Seas Fleet conducted a number of sweeps and advances into the North Sea. The first occurred on 23 November 1914, though no British forces were encountered. Admiral von Ingenohl, the commander of the High Seas Fleet, adopted a strategy in which the battlecruisers of Rear Admiral Franz von Hipper's I Scouting Group raided British coastal towns to lure out portions of the Grand Fleet where they could be destroyed by the High Seas Fleet.[44] The raid on Scarborough, Hartlepool and Whitby on 15–16 December 1914 was the first such operation.[45] On the evening of 15 December, the German battle fleet of some twelve dreadnoughts and eight pre-dreadnoughts came to within 10 nmi (19 km; 12 mi) of an isolated squadron of six British battleships. However, skirmishes between the rival destroyer screens in the darkness convinced von Ingenohl that he was faced with the entire Grand Fleet. Under orders from the Kaiser to avoid risking the fleet unnecessarily, von Ingenohl broke off the engagement and turned the fleet back toward Germany.[46]

Following the loss of SMS Blücher at the Battle of Dogger Bank in January 1915, the Kaiser removed Admiral von Ingenohl from his post on 2 February. Admiral Hugo von Pohl replaced him as commander of the fleet.[47] Admiral von Pohl conducted a series of fleet advances in 1915; in the first one on 29–30 March, the fleet steamed out to the north of Terschelling and returned without incident. Another followed on 17–18 April, where the fleet covered a mining operation by the II Scouting Group. Three days later, on 21–22 April, the High Seas Fleet advanced towards the Dogger Bank, though again failed to meet any British forces.[48] Another sortie followed on 29–30 May, during which the fleet advanced as far as Schiermonnikoog before being forced to turn back by inclement weather. On 10 August, the fleet steamed to the north of Heligoland to cover the return of the auxiliary cruiser Meteor. A month later, on 11–12 September, the fleet covered another mine-laying operation off the Swarte Bank. The last operation of the year, conducted on 23–24 October, was an advance without result in the direction of Horns Reef.[48]

Vice Admiral Reinhard Scheer became Commander in chief of the High Seas Fleet on 18 January 1916 when Admiral von Pohl became too ill to continue in that post.[49] Scheer favored a much more aggressive policy than that of his predecessor, and advocated greater usage of U-boats and zeppelins in coordinated attacks on the Grand Fleet; Scheer received approval from the Kaiser in February 1916 to carry out his intentions.[50] Scheer ordered the fleet on sweeps of the North Sea on 26 March, 2–3 April, and 21–22 April. The battlecruisers conducted another raid on the English coast on 24–25 April, during which the fleet provided distant support.[51] Scheer planned another raid for mid-May, but the battlecruiser Seydlitz had struck a mine during the previous raid and the repair work forced the operation to be pushed back until the end of the month.[52]

Admiral Scheer's fleet, composed of 16 dreadnoughts, six pre-dreadnoughts, six light cruisers, and 31 torpedo boats, departed the Jade early on the morning of 31 May. The fleet sailed in concert with Hipper's five battlecruisers and supporting cruisers and torpedo boats.[53] The British navy's Room 40 had intercepted and decrypted German radio traffic containing plans of the operation. The Admiralty ordered the Grand Fleet, totaling some 28 dreadnoughts and 9 battlecruisers, to sortie the night before in order to cut off and destroy the High Seas Fleet.[54]

At 16:00 UTC, the two battlecruiser forces encountered each other and began a running gun fight south, back towards Scheer’s battle fleet.[55] Upon reaching the High Seas Fleet, Vice Admiral David Beatty’s battlecruisers turned back to the north to lure the Germans towards the rapidly approaching Grand Fleet, under the command of Admiral John Jellicoe.[56] During the run to the north, Scheer’s leading ships engaged the Queen Elizabeth-class battleships of the 5th Battle Squadron.[57] By 18:30, the Grand Fleet had arrived on the scene, and was deployed into a position that would cross Scheer’s “T” from the northeast. To extricate his fleet from this precarious position, Scheer ordered a 16-point turn to the south-west.[58] At 18:55, Scheer decided to conduct another 16-point turn to launch an attack on the British fleet.[59]

This maneuver again put Scheer in a dangerous position; Jellicoe had turned his fleet south and again crossed Scheer's “T.”[60] A third 16-point turn followed; Hipper's mauled battlecruisers charged the British line to cover the retreat.[61] Scheer then ordered the fleet to adopt the night cruising formation, which was completed by 23:40.[62] A series of ferocious engagements between Scheer's battleships and Jellicoe's destroyer screen ensued, though the Germans managed to punch their way through the destroyers and make for Horns Reef.[63] The High Seas Fleet reached the Jade between 13:00 and 14:45 on 1 June; Scheer ordered the undamaged battleships of the I Battle Squadron to take up defensive positions in the Jade roadstead while the Kaiser-class battleships were to maintain a state of readiness just outside Wilhelmshaven.[64] The High Seas Fleet had sunk more British vessels than the Grand Fleet had sunk German, though Scheer's leading battleships had taken a terrible hammering. Several capital ships, including SMS König, which had been the first vessel in the line, and most of the battlecruisers, were in drydock for extensive repairs for at least two months. On 1 June, the British had twenty-four capital ships in fighting condition, compared to only ten German warships.[65]

By August, enough warships had been repaired to allow Scheer to undertake another fleet operation on 18–19 August. Due to the serious damage incurred by Seydlitz and SMS Derfflinger and the loss of SMS Lützow at Jutland, the only battlecruisers available for the operation were SMS Von der Tann and SMS Moltke, which were joined by SMS Markgraf, SMS Grosser Kurfürst, and the new battleship SMS Bayern.[66] Scheer turned north after receiving a false report from a zeppelin about a British unit in the area.[48] As a result, the bombardment was not carried out, and by 14:35, Scheer had been warned of the Grand Fleet's approach and so turned his forces around and retreated to German ports.[67] Another fleet sortie took place on 18–19 October 1916 to attack enemy shipping east of Dogger Bank. Despite being forewarned by signal intelligence, the Grand Fleet did not attempt to intercept. The operation was, however, cancelled due to poor weather after the cruiser München was torpedoed by the British submarine HMS E38.[68] The fleet was reorganized on 1 December;[48] the four König-class battleships remained in the III Squadron, along with the newly commissioned Bayern, while the five Kaiser-class ships were transferred to the IV Squadron.[69] In March 1917 the new battleship Baden, built to serve as fleet flagship, entered service;[70] on the 17th, Scheer hauled down his flag from Friedrich der Grosse and transferred it to Baden.[48]

The war, now in its fourth year, was by 1917 taking its toll on the crews of the ships of the High Seas Fleet. Acts of passive resistance, such as the posting of anti-war slogans in the battleships SMS Oldenburg and SMS Posen in January 1917, began to appear.[71] In June and July, the crews began to conduct more active forms of resistance. These activities included work refusals, hunger strikes, and taking unauthorized leave from their ships.[72] The disruptions came to a head in August, when a series of protests, anti-war speeches, and demonstrations resulted in the arrest of dozens of sailors.[73] Scheer ordered the arrest of over 200 men from the battleship Prinzregent Luitpold, the center of the anti-war activities. A series of courts-martial followed, which resulted in 77 guilty verdicts; nine men were sentenced to death for their roles, though only two men, Albin Köbis and Max Reichpietsch, were executed.[74]

In early September 1917, following the German conquest of the Russian port of Riga, the German navy decided to eliminate the Russian naval forces that still held the Gulf of Riga. The Navy High Command (Admiralstab) planned an operation, codenamed Operation Albion, to seize the Baltic island of Ösel, and specifically the Russian gun batteries on the Sworbe Peninsula.[75] On 18 September, the order was issued for a joint operation with the army to capture Ösel and Moon Islands; the primary naval component was to comprise its flagship, Moltke, and the III and IV Battle Squadrons of the High Seas Fleet.[76] The operation began on the morning of 12 October, when Moltke and the III Squadron ships engaged Russian positions in Tagga Bay while the IV Squadron shelled Russian gun batteries on the Sworbe Peninsula on Ösel.[77] By 20 October, the fighting on the islands was winding down; Moon, Ösel, and Dagö were in German possession. The previous day, the Admiralstab had ordered the cessation of naval actions and the return of the dreadnoughts to the High Seas Fleet as soon as possible.[78]

Admiral Scheer had used light surface forces to attack British convoys to Norway beginning in late 1917. As a result, the Royal Navy attached a squadron of battleships to protect the convoys, which presented Scheer with the possibility of destroying a detached squadron of the Grand Fleet. The operation called for Hipper's battlecruisers to attack the convoy and its escorts on 23 April while the battleships of the High Seas Fleet stood by in support. On 22 April, the German fleet assembled in the Schillig Roads outside Wilhelmshaven and departed the following morning.[79] Despite the success in reaching the convoy route undetected, the operation failed due to faulty intelligence. Reports from U-boats indicated to Scheer that the convoys sailed at the start and middle of each week, but a west-bound convoy had left Bergen on Tuesday the 22nd and an east-bound group left Methil, Scotland, on the 24th, a Thursday. As a result, there was no convoy for Hipper to attack.[80] Beatty sortied with a force of 31 battleships and four battlecruisers, but was too late to intercept the retreating Germans. The Germans reached their defensive minefields early on 25 April, though approximately 40 nmi (74 km; 46 mi) off Heligoland Moltke was torpedoed by the submarine E42; she successfully returned to port.[81]

A final fleet action was planned for the end of October 1918, days before the Armistice was to take effect. The bulk of the High Seas Fleet was to have sortied from their base in Wilhelmshaven to engage the British Grand Fleet; Scheer, by now the Grand Admiral (Grossadmiral) of the fleet, intended to inflict as much damage as possible on the British navy, in order to retain a better bargaining position for Germany, despite the expected casualties. However, many of the war-weary sailors felt the operation would disrupt the peace process and prolong the war.[82] On the morning of 29 October 1918, the order was given to sail from Wilhelmshaven the following day. Starting on the night of 29 October, sailors on Thüringen and then on several other battleships mutinied.[83] The unrest ultimately forced Hipper and Scheer to cancel the operation.[84] When informed of the situation, the Kaiser stated “I no longer have a navy.”[85]

Following the capitulation of Germany in November 1918, most of the High Seas Fleet, under the command of Rear Admiral Ludwig von Reuter, was interned in the British naval base of Scapa Flow.[84] Prior to the departure of the German fleet, Admiral Adolf von Trotha made clear to von Reuter that he could not allow the Allies to seize the ships, under any conditions.[86] The fleet rendezvoused with the British light cruiser Cardiff, which led the ships to the Allied fleet that was to escort the Germans to Scapa Flow. The massive flotilla consisted of some 370 British, American, and French warships.[87] Once the ships were interned, their guns were disabled through the removal of their breech blocks, and their crews were reduced to 200 officers and enlisted men on each of the capital ships.[88]

The fleet remained in captivity during the negotiations that ultimately produced the Treaty of Versailles. Von Reuter believed that the British intended to seize the German ships on 21 June 1919, which was the deadline for Germany to have signed the peace treaty. Unaware that the deadline had been extended to the 23rd, Reuter ordered the ships to be sunk at the next opportunity. On the morning of 21 June, the British fleet left Scapa Flow to conduct training maneuvers, and at 11:20 Reuter transmitted the order to his ships.[86] Out of the interned fleet, only one battleship, Baden, three light cruisers, and eighteen destroyers were saved from sinking by the British harbor personnel. The Royal Navy, initially opposed to salvage operations, decided to allow private firms to attempt to raise the vessels for scrapping.[89] Cox and Danks, a company founded by Ernest Cox, handled most of the salvage operations, including those of the heaviest vessels raised.[90] After Cox’s withdrawal due to financial losses in the early 1930s, Metal Industries Group, Inc. took over the salvage operation for the remaining ships. Five more capital ships were raised, though three (SMS König, SMS Kronprinz, and SMS Markgraf) were too deep to permit raising. They remain on the bottom of Scapa Flow, along with four light cruisers.[91]

The High Seas Fleet, particularly its wartime impotence and ultimate fate, strongly influenced the later German navies, the Reichsmarine and Kriegsmarine. Former Imperial Navy officers continued to serve in the subsequent institutions, including Admiral Erich Raeder, Hipper’s former chief of staff, who became the commander in chief of the Reichsmarine. Raeder advocated long-range commerce raiding by surface ships, rather than constructing a large surface fleet to challenge the Royal Navy, which he viewed as a futile endeavor. His initial version of Plan Z, the construction program for the Kriegsmarine in the late 1930s, called for a large number of P-class cruisers, long-range light cruisers, and reconnaissance forces for attacking enemy shipping, though he was overruled by Adolf Hitler, who advocated a large fleet of battleships.[92]

See the original post here:

High Seas Fleet – Wikipedia

Posted in High Seas | Comments Off on High Seas Fleet – Wikipedia

What are the Benefits of Mind Uploading? – Lifeboat

Posted: at 5:24 pm

by Lifeboat Foundation Scientific Advisory Board member Michael Anissimov.

Overview

Universal mind uploading, or universal uploading for short, is the concept, by no means original to me, that the technology of mind uploading will eventually become universally adopted by all who can afford it, similar to the adoption of modern agriculture, hygiene, or living in houses. The concept is rather infrequently discussed, due to a combination of 1) its supposedly speculative nature and 2) its far-future time frame.

Discussion

Before I explore the idea, let me give a quick description of what mind uploading is and why the two roadblocks to its discussion are invalid. Mind uploading would involve simulating a human brain in a computer in enough detail that the simulation becomes, for all practical purposes, a perfect copy and experiences consciousness, just like protein-based human minds. If functionalism is true, as many cognitive scientists and philosophers correctly believe, then all the features of human consciousness that we know and love, including all our memories, personality, and sexual quirks, would be preserved through the transition. By simultaneously disassembling the protein brain as the computer brain is constructed, only one implementation of the person in question would exist at any one time, eliminating any unnecessary confusion. Still, even if two direct copies are made, the universe won't care; you would have simply created two identical individuals with the same memories. The universe can't get confused, only you can. Regardless of how perplexed one may be by contemplating this possibility for the first time from a 20th-century perspective of personal identity, an upload of you with all your memories and personality intact is no different from you than the person you are today is different from the person you were yesterday when you went to sleep, or the person you were 10-30 seconds ago when quantum fluctuations momentarily destroyed and recreated all the particles in your brain.

Regarding objections to talk of uploading, for anyone who 1) buys the silicon brain replacement thought experiment, 2) accepts arguments that the human brain operates at below about 10^19 ops/sec, and 3) considers it plausible that 10^19 ops/sec computers (plug in whatever value you believe for #2) will become manufactured this century, the topic is clearly worth broaching. Even if it's 100 years off, that's just a blink of an eye relative to the entirety of human history, and universal uploading would be something more radical than anything that's occurred with life or intelligence in the entire known history of this solar system. We can afford to stop focusing exclusively on the near future for a potential event of such magnitude. Consider it intellectual masturbation, if you like, or a serious analysis of the near-term future of the human species, if you buy the three points.

So, say that mind uploading becomes available as a technology sometime around 2050. If the early adopters don't go crazy and/or use their newfound abilities to turn the world into a totalitarian dictatorship, then they will concisely and vividly communicate the benefits of the technology to their non-uploaded family and friends. If affordable, others will then follow, but the degree of adoption will necessarily depend on whether the process is easily reversible or not. But suppose that millions of people choose to go for it.

Effects

Widespread uploading would have huge effects.
Let's go over some of them in turn.

1) Massive economic growth. By allowing human minds to run on substrates that can be accelerated by the addition of computing power, as well as the possibility of spinning off non-conscious daemons to accomplish rote tasks, economic growth (at least insofar as it can be accelerated by intelligence and the robotics of 2050 alone) will accelerate greatly. Instead of relying upon 1% per year population growth rates, humans might copy themselves or (more conducive to societal diversity) spin off already-mature progeny as quickly as available computing power allows. This could lead to growth rates in human capital of 1,000% per year or far more. More economic growth might ensue in the first year (or month) after uploading than in the entire 250,000 years between the evolution of Homo sapiens and the invention of uploading. The first country that widely adopts the technology might be able to solve global poverty by donating only 0.1% of its annual GDP.

2) Intelligence enhancement. Faster does not necessarily mean smarter. "Weak superintelligence" is a term sometimes used to describe accelerated intelligence that is not qualitatively enhanced, in contrast with "strong superintelligence," which is. The road from weak to strong superintelligence would likely be very short. By observing information flows in uploaded human brains, many of the details of human cognition would be elucidated. Running standard compression algorithms over such minds might make them more efficient than blind natural selection could manage, and this extra space could be used to introduce new information-processing modules with additional features. Collectively, these new modules could give rise to qualitatively better intelligence. At the very least, rapid trial-and-error experimentation without the risk of injury would become possible, eventually revealing paths to qualitative enhancements.

3) Greater subjective well-being. Like most other human traits, our happiness set points fall on a bell curve. No matter what happens to us, be it losing our home or winning the lottery, there is a tendency for our innate happiness level to revert back to our natural set point. Some lucky people are innately really happy. Some unlucky people have chronic depression. With uploading, we will be able to see exactly which neural features (happiness centers) correspond to high happiness set points and which don't, by combining prior knowledge with direct experimentation and investigation. This will make it possible for people to reprogram their own brains to raise their happiness set points in a way that biotechnological intervention might find difficult or dangerous. Experimental data and simple observation have shown that high happiness set-point people today don't have any mysterious handicaps, like an inability to recognize when their body is in pain, or inappropriate social behavior. They still experience sadness; it's just that their happiness returns to a higher level after the sad experience is over. Perennial tropes justifying the value of suffering will lose their appeal when anyone can be happier without any negative side effects.

4) Complete environmental recovery. (I'm not just trying to kiss up to greens, I actually care about this.) By spending most of our time as programs running on a worldwide network, we will consume far less space and use less energy and natural resources than we would in a conventional human body. Because our food would be delicious cuisines generated only by electricity or light, we could avoid all the environmental destruction caused by clear-cutting land for farming and the ensuing agricultural runoff. People imagine dystopian futures to involve a lot of homogeneity; well, we're already here as far as our agriculture is concerned. Land that once had diverse flora and fauna now consists of a few dozen agricultural staples: wheat, corn, oats, cattle pastures, factory farms. BORING. By transitioning from a proteinaceous to a digital substrate, we'll do more for our environment than any amount of conservation ever could. We could still experience this environment by inputting live-updating feeds of the biosphere into a corner of our expansive virtual worlds. It's the best of both worlds, literally: virtual and natural in harmony.

5) Escape from direct governance by the laws of physics. Though this benefit sounds more abstract or philosophical, if we were to directly experience it, the visceral nature of this benefit would become immediately clear. In a virtual environment, the programmer is the complete master of everything he or she has editing rights to. A personal virtual sandbox could become one's canvas for creating the fantasy world of their choice. Today, this can be done in a very limited fashion in virtual worlds such as Second Life. (A trend which will continue to the fulfillment of everyone's most escapist fantasies, even if uploading is impossible.) Worlds like Second Life are still limited by their system-wide operating rules and their low resolution and bandwidth. Any civilization that develops uploading would surely have the technology to develop virtual environments of great detail and flexibility, right up to the very boundaries of the possible. Anything that can become possible will be. People will be able to experience simulations of the past, travel to far-off stars and planets, and experience entirely novel worldscapes, all within the flickering bits of the worldwide network.

6) Closer connections with other human beings. Our interactions with other people today are limited by the very low bandwidth of human speech and facial expressions. By offering partial readouts of our cognitive state to others, we could engage in a deeper exchange of ideas and emotions. I predict that talking as communication will become passé; we'll engage in much deeper forms of informational and emotional exchange that will make the talking and facial expressions of today seem downright empty and soulless. Spiritualists often talk a lot about connecting closer to one another; are they aware that the best way they can go about that would be to contribute to researching neural scanning or brain-computer interfacing technology? Probably not.

7) Last but not least, indefinite lifespans. Here is the one that detractors of uploading are fond of targeting: the fact that uploading could lead to practical immortality. Well, it really could. By being a string of flickering bits distributed over a worldwide network, killing you could become extremely difficult. The data and bits of everyone would be intertwined; to kill someone, you'll either need complete editing privileges of the entire worldwide network, or the ability to blow up the planet. Needless to say, true immortality would be a huge deal, a much bigger deal than the temporary fix of life extension therapies for biological bodies, which will do very little to combat infectious disease or exotic maladies such as being hit by a truck.
Conclusion

It's obvious that mind uploading would be incredibly beneficial. As stated near the beginning of this post, only three things are necessary for it to be a big deal: 1) that you believe a brain could be incrementally replaced with functionally identical implants and retain its fundamental characteristics and identity, 2) that the computational capacity of the human brain is a reasonable number, very unlikely to be more than 10^19 ops/sec, and 3) that at some point in the future we'll have computers that fast. Not so far-fetched. Many people consider these three points plausible, but just aren't aware of their implications. If you believe those three points, then uploading becomes a fascinating goal to work towards. From a utilitarian perspective, it practically blows everything else away besides global risk mitigation, as the number of new minds leading worthwhile lives that could be created using the technology would be astronomical. The number of digital minds we could create using the matter on Earth alone would likely be over a quadrillion, more than 2,500 people for every star in the 400-billion-star Milky Way. We could make a Galactic Civilization right here on Earth in the late 21st or 22nd century. I can scarcely imagine such a thing, but I can imagine that we'll be guffawing heartily at how unambitious most human goals were in the year 2010.
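The two quantitative claims above are easy to sanity-check. The sketch below is only a back-of-envelope illustration: the neuron and synapse figures are commonly cited estimates I am supplying, not numbers from the essay, while the quadrillion-minds and 400-billion-star figures come straight from the paragraph above.

```python
# Back-of-envelope checks; the brain figures are illustrative assumptions,
# not claims from the essay.
neurons = 8.6e10             # ~86 billion neurons (commonly cited estimate)
synapses_per_neuron = 1e4    # upper-end estimate
max_firing_rate_hz = 100     # upper-end sustained firing rate

brain_ops = neurons * synapses_per_neuron * max_firing_rate_hz
print(f"rough brain throughput: ~{brain_ops:.0e} synaptic events/sec")  # ~9e+16, below 1e19

digital_minds = 1e15         # "over a quadrillion" (from the essay)
stars = 4e11                 # "400 billion star Milky Way" (from the essay)
print(f"minds per star: {digital_minds / stars:,.0f}")                  # 2,500
```

Under these assumptions even a generous estimate of brain throughput sits a couple of orders of magnitude below the 10^19 ops/sec ceiling used as premise #2, and the minds-per-star ratio matches the figure quoted above.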

See more here:

What are the Benefits of Mind Uploading? – Lifeboat

Posted in Mind Uploading | Comments Off on What are the Benefits of Mind Uploading? – Lifeboat

Trying to install jitsi meet with apache2 – Stack Overflow

Posted: October 29, 2016 at 11:45 am

I know there are already posts on this subject, but they don't produce good results, and I would like to share my thinking on this subject here. Feel free to moderate my post if you think it's a bad idea.

Server: Ubuntu 16.04.1, Apache 2.4.18

DNS conf:

As I said, I am trying to run Jitsi Meet on Apache2, following the steps described in the quick install guide (https://github.com/jitsi/jitsi-meet/blob/master/doc/quick-install.md).

If I install Jitsi Meet on my server just after installing Ubuntu, so without Apache or Nginx, Jitsi works great. If I install Jitsi Meet on my server after installing Nginx, Jitsi also works great.

Using the same installation method, I tried installing Jitsi Meet after installing Apache2, and I noticed that Jitsi Meet does not configure Apache2 by itself, so I tried this first configuration:

When I load the page meet.mydomain.xx I get the following error:

“It works! Now point your BOSH client to this URL to connect to Prosody. For more information see Prosody: Setting up BOSH.”

But when I look at the /etc/prosody/conf.avail/meet.mydomain.xx.cfg.lua file, I notice that BOSH is already enabled and the rest of the configuration matches what is explained here: https://github.com/jitsi/jitsi-meet/blob/master/doc/manual-install.md. The log contains no errors. If you have an idea to fix this problem, I'm interested.
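To check where requests actually end up, one thing that can help is comparing what the server returns at the site root versus at the conventional Prosody BOSH path /http-bind. Seeing the Prosody banner at the root usually means Apache is reverse-proxying the whole site to Prosody instead of serving the Jitsi Meet files and proxying only /http-bind. This is just a rough diagnostic sketch; meet.mydomain.xx is my placeholder domain:

```python
# Rough diagnostic: fetch the site root and the conventional BOSH path and
# report which kind of response each returns. If "/" shows the Prosody BOSH
# banner, Apache is likely proxying the whole site to Prosody instead of
# serving the Jitsi Meet files and proxying only /http-bind.
import ssl
import urllib.request

HOST = "https://meet.mydomain.xx"   # placeholder domain from this question
PATHS = ["/", "/http-bind"]         # site root and conventional Prosody BOSH path

ctx = ssl.create_default_context()
ctx.check_hostname = False          # self-signed certificates are common in test setups
ctx.verify_mode = ssl.CERT_NONE

for path in PATHS:
    try:
        with urllib.request.urlopen(HOST + path, context=ctx, timeout=10) as resp:
            body = resp.read(2048).decode("utf-8", errors="replace")
            if "BOSH" in body or "Prosody" in body:
                kind = "Prosody BOSH endpoint"
            elif "<html" in body.lower():
                kind = "regular HTML page (likely Jitsi Meet)"
            else:
                kind = "unrecognized response"
            print(f"{path}: HTTP {resp.status} -> {kind}")
    except Exception as exc:
        print(f"{path}: request failed ({exc})")
```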

Second configuration that I tested:

With this setup the result seems better: I can see the home page of Jitsi Meet, but without the text and without the logo, and when I click on the go button, nothing happens. The log contains no errors.

So at this point I don't really know what to do. If someone has advice or ideas, thank you for sharing them!

Bye, thank you for reading

Gspohu

Original post:
Trying to install jitsi meet with apache2 – Stack Overflow

Posted in Jitsi | Comments Off on Trying to install jitsi meet with apache2 – Stack Overflow

The Artificial Intelligence Revolution: Part 2 – Wait But Why

Posted: October 27, 2016 at 12:05 pm

Note: This is Part 2 of a two-part series on AI. Part 1 is here.

PDF: We made a fancy PDF of this post for printing and offline viewing. Buy it here. (Or see a preview.)

___________

"We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends." (Nick Bostrom)

Welcome to Part 2 of the "Wait, how is this possibly what I'm reading, I don't get why everyone isn't talking about this" series.

Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it's all around us in the world today. We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that's at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we've seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this:

This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that's way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that.1

Before we dive into things, let's remind ourselves what it would mean for a machine to be superintelligent.

A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone's first thought when they imagine a super-smart computer is one that's as intelligent as a human but can think much, much faster;2 they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.

That sounds impressive, and ASI would think much faster than any human could, but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn't a difference in thinking speed; it's that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or long-term planning or abstract reasoning, that chimps' brains do not. Speeding up a chimp's brain by thousands of times wouldn't bring him to our level; even with a decade's time, he wouldn't be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.

But it's not just that a chimp can't do what we do; it's that his brain is unable to grasp that those worlds even exist. A chimp can become familiar with what a human is and what a skyscraper is, but he'll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it's beyond him to realize that anyone can build a skyscraper. That's the result of a small difference in intelligence quality.

And in the scheme of the intelligence range we're talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:3

To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp's incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us, let alone do it ourselves. And that's only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants; it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.

But the kind of superintelligence we're talking about today is something far beyond anything on this staircase. In an intelligence explosion (where the smarter a machine gets, the quicker it's able to increase its own intelligence, until it begins to soar upwards), a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it's on the dark green step two above us, and by the time it's ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it's distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that's here on the staircase (or maybe a million times higher):

And since we just established that it's a hopeless activity to try to understand the power of a machine only two steps above us, let's very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn't understand what superintelligence means.

Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we'll be dramatically stomping on evolution. Or maybe this is part of evolution; maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it's capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:

And for reasons we'll discuss later, a huge part of the scientific community believes that it's not a matter of whether we'll hit that tripwire, but when. Kind of a crazy piece of information.

So where does that leave us?

Well, no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and lead AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.

First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction.

"All species eventually go extinct" has been almost as reliable a rule through history as "All humans eventually die" has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it's only a matter of time before some other species, some gust of nature's wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state, a place species are all teetering on falling into and from which no species ever returns.

And while most scientists I've come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI's abilities could be used to bring individual humans, and the species as a whole, to a second attractor state: species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we'll be impervious to extinction forever; we'll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it's just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.

If Bostrom and others are right, and from everything I've read, it seems like they really might be, we have two pretty shocking facts to absorb:

1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.

2) The advent of ASI will make such an unimaginably dramatic impact that it's likely to knock the human race off the beam, in one direction or the other.

It may very well be that when evolution hits the tripwire, it permanently ends humans' relationship with the beam and creates a new world, with or without humans.

Kind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire and which side of the beam will we land on when that happens?

No one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We'll spend the rest of this post exploring what they've come up with.

___________

Lets start with the first part of the question: When are we going to hit the tripwire?

i.e. How long until the first machine reaches superintelligence?

Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:

Those people subscribe to the belief that this is happening soon: that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.

Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we're not actually that close to the tripwire.

The Kurzweil camp would counter that the only underestimating that's happening is the underappreciation of exponential growth, and they'd compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.

The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.

A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there's no guarantee about that; it could also take a much longer time.

Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it's more likely that ASI won't actually ever be achieved.

So what do you get when you put all of these opinions together?

In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: "For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI4 to exist?" It asked them to name an optimistic year (one in which they believe there's a 10% chance we'll have AGI), a realistic guess (a year they believe there's a 50% chance of AGI, i.e. after that year they think it's more likely than not that we'll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we'll have AGI). Gathered together as one data set, here were the results:2

Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075

So the median participant thinks it's more likely than not that we'll have AGI 25 years from now. The 90% median answer of 2075 means that if you're a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.

A separate study, conducted recently by author James Barrat at Ben Goertzel's annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved: by 2030, by 2050, by 2100, after 2100, or never. The results:3

By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%

Pretty similar to Müller and Bostrom's outcomes. In Barrat's survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don't think AGI is part of our future.

But AGI isn't the tripwire, ASI is. So when do the experts think we'll reach ASI?

Müller and Bostrom also asked the experts how likely they think it is that we'll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:4

The median answer put a rapid (2-year) AGI-to-ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.

We don't know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let's estimate that they'd have said 20 years. So the median opinion (the one right in the center of the world of AI experts) believes the most realistic guess for when we'll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.

Of course, all of the above statistics are speculative, and they're only representative of the center opinion of the AI expert community, but it tells us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.

Okay now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?

Superintelligence will yield tremendous power; the critical question for us is:

Who or what will be in control of that power, and what will their motivation be?

The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.

Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom's survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It's also worth noting that those numbers refer to the advent of AGI; if the question were about ASI, I imagine that the neutral percentage would be even lower.

Before we dive much further into this good vs. bad outcome part of the question, let's combine both the "when will it happen?" and the "will it be good or bad?" parts of this question into a chart that encompasses the views of most of the relevant experts:

We'll talk more about the Main Camp in a minute, but first: what's your deal? Actually I know what your deal is, because it was my deal too before I started researching this topic. Some reasons most people aren't really thinking about this topic:

One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you're just standing on the intersection of the two dotted lines in the square above, totally uncertain.

During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people's opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:

We're gonna take a thorough dive into both of these camps. Let's start with the fun one…

As I learned about the world of AI, I found a surprisingly large number of people standing here:

The people on Confident Corner are buzzing with excitement. They have their sights set on the fun side of the balance beam and they're convinced that's where all of us are headed. For them, the future is everything they ever could have hoped for, just in time.

The thing that separates these people from the other thinkers we'll discuss later isn't their lust for the happy side of the beam; it's their confidence that that's the side we're going to land on.

Where this confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it's naive to conjure up doomsday scenarios when on balance, technology has and will likely end up continuing to help us a lot more than it hurts us.

We'll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let's take a good hard look at what's over there on the fun side of the balance beam, and try to absorb the fact that the things you're reading might really happen. If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him; we have to be humble enough to acknowledge that it's possible that an equally inconceivable transformation could be in our future.

Nick Bostrom describes three ways a superintelligent AI system could function:6

These questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the "My pencil fell off the table" situation, which you'd do by picking it up and putting it back on the table.

Eliezer Yudkowsky, a resident of Anxious Avenue in our chart above, said it well:

There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from impossible to obvious. Move a substantial degree upwards, and all of them will become obvious.7

There are a lot of eager scientists, inventors, and entrepreneurs in Confident Corner, but for a tour of the brightest side of the AI horizon, there's only one person we want as our tour guide.

Ray Kurzweil is polarizing. In my reading, I heard everything from godlike worship of him and his ideas to eye-rolling contempt for them. Others were somewhere in the middle; author Douglas Hofstadter, in discussing the ideas in Kurzweil's books, eloquently put forth that "it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad."8

Whether you like his ideas or not, everyone agrees that Kurzweil is impressive. He began inventing things as a teenager and in the following decades, he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition. He's the author of five national bestselling books. He's well-known for his bold predictions and has a pretty good record of having them come true, including his prediction in the late '80s, a time when the internet was an obscure thing, that by the early 2000s, it would become a global phenomenon. Kurzweil has been called a "restless genius" by The Wall Street Journal, "the ultimate thinking machine" by Forbes, "Edison's rightful heir" by Inc. Magazine, and "the best person I know at predicting the future of artificial intelligence" by Bill Gates.9 In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google's Director of Engineering.5 In 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.

This biography is important. When Kurzweil articulates his vision of the future, he sounds fully like a crackpot, and the crazy thing is that he's not; he's an extremely smart, knowledgeable, relevant man in the world. You may think he's wrong about the future, but he's not a fool. Knowing he's such a legit dude makes me happy, because as I've learned about his predictions for the future, I badly want him to be right. And you do too. As you hear Kurzweil's predictions, many shared by other Confident Corner thinkers like Peter Diamandis and Ben Goertzel, it's not hard to see why he has such a large, passionate following, known as the singularitarians. Here's what he thinks is going to happen:

Timeline

Kurzweil believes computers will reach AGI by 2029 and that by 2045, we'll have not only ASI, but a full-blown new world, a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many,6 but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil's timeline. His predictions are still a bit more ambitious than the median respondent on Müller and Bostrom's survey (AGI by 2040, ASI by 2060), but not by that much.

Kurzweil's depiction of the 2045 singularity is brought about by three simultaneous revolutions in biotechnology, nanotechnology, and, most powerfully, AI.

Before we move on: nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it.

Nanotechnology Blue Box

Nanotechnology is our word for technology that deals with the manipulation of matter that's between 1 and 100 nanometers in size. A nanometer is a billionth of a meter, or a millionth of a millimeter, and this 1-100 range encompasses viruses (100 nm across), DNA (10 nm wide), and things as small as large molecules like hemoglobin (5 nm) and medium molecules like glucose (1 nm). If/when we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (~0.1 nm).7

To understand the challenge of humans trying to manipulate matter in that range, let's take the same thing on a larger scale. The International Space Station is 268 mi (431 km) above the Earth. If humans were giants so large their heads reached up to the ISS, they'd be about 250,000 times bigger than they are now. If you make the 1 nm to 100 nm nanotech range 250,000 times bigger, you get 0.25 mm to 2.5 cm. So nanotechnology is the equivalent of a human giant as tall as the ISS figuring out how to carefully build intricate objects using materials between the size of a grain of sand and an eyeball. To reach the next level, manipulating individual atoms, the giant would have to carefully position objects that are 1/40th of a millimeter, so small that normal-size humans would need a microscope to see them.8
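The scaling works out if you run the numbers. Below is a quick back-of-envelope check; the ~1.75 m average human height is my assumption, while the 431 km altitude and the nanometer figures come from the paragraph above.

```python
# Back-of-envelope check of the scaling analogy above; the ~1.75 m human
# height is an assumption, everything else comes from the text.
ISS_ALTITUDE_M = 431_000          # 431 km, from the text
HUMAN_HEIGHT_M = 1.75             # assumed average height

scale = ISS_ALTITUDE_M / HUMAN_HEIGHT_M
print(f"scale factor: {scale:,.0f}x")            # ~246,000, i.e. roughly 250,000x

NM = 1e-9                                        # one nanometer in meters
for label, size_m in [("1 nm", 1 * NM), ("100 nm", 100 * NM), ("0.1 nm atom", 0.1 * NM)]:
    scaled_mm = size_m * 250_000 * 1000          # scale up by 250,000, convert m -> mm
    print(f"{label:12s} -> {scaled_mm:.3g} mm at 250,000x")
# 1 nm -> 0.25 mm, 100 nm -> 25 mm (2.5 cm), 0.1 nm -> 0.025 mm (1/40 mm)
```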

Nanotech was first discussed by Richard Feynman in a 1959 talk, when he explained: "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible for a physicist to synthesize any chemical substance that the chemist writes down. How? Put the atoms down where the chemist says, and so you make the substance." It's as simple as that. If you can figure out how to move individual molecules or atoms around, you can make literally anything.

Nanotech became a serious field for the first time in 1986, when engineer Eric Drexler provided its foundations in his seminal book Engines of Creation, but Drexler suggests that those looking to learn about the most modern ideas in nanotechnology would be best off reading his 2013 book, Radical Abundance.

Gray Goo Bluer Box

We're now in a diversion in a diversion. This is very fun.9

Anyway, I brought you here because there's this really unfunny part of nanotechnology lore I need to tell you about. In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two then turn into four, four into eight, and in about a day, there'd be a few trillion of them ready to go. That's the power of exponential growth. Clever, right?

It's clever until it causes the grand and complete Earthwide apocalypse by accident. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth's biomass contains about 10^45 carbon atoms. A nanobot would consist of about 10^6 carbon atoms, so 10^39 nanobots would consume all life on Earth, which would happen in 130 replications (2^130 is about 10^39), as oceans of nanobots (that's the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in 3.5 hours.
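Those figures hang together arithmetically. Here is a short check using only the numbers quoted in the paragraph above (biomass carbon, atoms per nanobot, and the ~100-second replication cycle):

```python
# Quick check of the gray-goo arithmetic above, using the figures quoted
# in the text: 10^45 carbon atoms of biomass, 10^6 atoms per nanobot,
# ~100 seconds per replication cycle.
import math

BIOMASS_CARBON_ATOMS = 1e45
ATOMS_PER_NANOBOT = 1e6
SECONDS_PER_REPLICATION = 100

nanobots_needed = BIOMASS_CARBON_ATOMS / ATOMS_PER_NANOBOT       # 1e39
replications = math.log2(nanobots_needed)                        # doublings from one bot
elapsed_hours = replications * SECONDS_PER_REPLICATION / 3600

print(f"nanobots needed:    {nanobots_needed:.0e}")              # 1e+39
print(f"doublings required: {replications:.0f}")                 # ~130
print(f"time to consume:    {elapsed_hours:.1f} hours")          # ~3.6 hours, the 3.5-hour figure above
```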

Follow this link:

The Artificial Intelligence Revolution: Part 2 – Wait But Why

Posted in Superintelligence | Comments Off on The Artificial Intelligence Revolution: Part 2 – Wait But Why

Should privacy legislation influence how courts interpret the …

Posted: September 18, 2016 at 8:12 am

I recently posted a revised draft of my forthcoming article, The Effect of Legislation on Fourth Amendment Interpretation, and I thought I would blog a bit about it. The article considers a recurring question in Fourth Amendment law: When courts are called on to interpret the Fourth Amendment, and there is privacy legislation on the books that relates to the government's conduct, should the existence of legislation have any effect on how the Fourth Amendment is interpreted? And if it should have an effect, what effect should it have?

I was led to this question by reading a lot of cases in which the issue came up and was answered in very different ways by particularly prominent judges. When I assembled all the cases, I found that judges had articulated three different answers. None of the judges seemed aware that the question had come up in other cases and had been answered differently there. Each of the three answers seemed plausible, and each tapped into important traditions in constitutional interpretation. So you have a pretty interesting situation: Really smart judges were running into the same question and answering it in very different ways, each rooted in substantial traditions, with no one approach predominating and no conversation about which approach was best. It seemed like a fun issue to explore in an article.

In this post I'll summarize the three approaches courts have taken. I call the approaches influence, displacement, and independence. For each approach, I'll give one illustrative case. But there's a lot more where that came from: For more details on the three approaches and the cases supporting them, please read the draft article.

1. Influence. In the influence cases, legislation is considered a possible standard for judicial adoption under the Fourth Amendment. The influence cases rest on a pragmatic judgment: If courts must make difficult judgment calls about how to balance privacy and security, and legislatures have done so already in enacting legislation, courts can draw lessons from the thoughtful judgment of a co-equal branch. Investigative legislation provides an important standard for courts to consider in interpreting the Fourth Amendment. It's not binding on courts, but it's a relevant consideration.

The Supreme Court's decision in United States v. Watson is an example of the influence approach. Watson considered whether it is constitutionally reasonable for a postal inspector to make a public arrest for a felony offense based on probable cause but without a warrant. A federal statute expressly authorized such warrantless arrests. The court ruled that the arrests were constitutional without a warrant and that the statute was constitutional. Justice White's majority opinion relied heavily on deference to Congress's legislative judgment. According to Justice White, the statute authorizing the arrests represents a judgment by Congress that it is not unreasonable under the Fourth Amendment for postal inspectors to arrest without a warrant provided they have probable cause to do so. That judgment was entitled to presumptive deference as the considered judgment of a co-equal branch. Because there is a strong presumption of constitutionality due to an Act of Congress, the court stated, especially when it turns on what is reasonable, then obviously the Court should be reluctant to decide that a search thus authorized by Congress was unreasonable and that the Act was therefore unconstitutional.

2. Displacement. In the displacement cases, the existence of legislation counsels against Fourth Amendment protection that might interrupt the statutory scheme. Because legislatures can often do a better job at balancing privacy and security in new technologies as compared to courts, courts should reject Fourth Amendment protection as long as legislatures are protecting privacy adequately to avoid interfering with the careful work of the legislative branch. The existence of investigative legislation effectively preempts the field and displaces Fourth Amendment protection that may otherwise exist.

Justice Alito's concurrence in Riley v. California is an example of the displacement approach. Riley held that the government must obtain a search warrant before searching a cellphone incident to a suspect's lawful arrest. Justice Alito concurred, agreeing with the majority only in the absence of adequate legislation regulating cellphone searches. "I would reconsider the question presented here," he wrote, "if either Congress or state legislatures, after assessing the legitimate needs of law enforcement and the privacy interests of cell phone owners, enact legislation that draws reasonable distinctions based on categories of information or perhaps other variables."

The enactment of investigative legislation should discourage judicial intervention, Justice Alito reasoned, because "[l]egislatures, elected by the people, are in a better position than we are to assess and respond to the changes that have already occurred and those that almost certainly will take place in the future." Although Fourth Amendment protection was necessary in the absence of legislation, the enactment of legislation might be reason to withdraw Fourth Amendment protection to avoid the very unfortunate result of federal courts using the blunt instrument of the Fourth Amendment to try to protect privacy in emerging technologies.

3. Independence. In the independence cases, courts treat legislation as irrelevant to the Fourth Amendment. Legislatures are free to supplement privacy protections by enacting statutes, of course. But from the independence perspective, legislation sheds no light on what the Fourth Amendment requires. Courts must independently interpret the Fourth Amendment, and what legislatures have done has no relevance.

An example of independence is Virginia v. Moore, where the Supreme Court decided whether the search incident to a lawful arrest exception incorporates the state law of arrest. Moore was arrested despite a state law saying his crime could not lead to arrest; the question was whether the state law violation rendered the arrest unconstitutional. According to the court, whether state law made the arrest lawful was irrelevant to the Fourth Amendment. It was the court's duty to interpret the Fourth Amendment, and what the legislature decided about when arrests could be made was a separate question. History suggested that the Fourth Amendment did not incorporate statutes. And the state's decision of when to make arrests was not based on the Fourth Amendment and was based on other considerations, such as the costs of arrests and whether the legislature valued privacy more than the Fourth Amendment required. Constitutionalizing the state standard would only frustrate the state's efforts to achieve those goals, as it would mean "los[ing] control of the regulatory scheme" and might lead the state to abandon restrictions on arrest altogether. For that reason, the statute regulating the police was independent of the Fourth Amendment standard.

Those are the three approaches. The next question is, which is best? I'll offer some thoughts on that in my next post.

See the original post here:
Should privacy legislation influence how courts interpret the …

Posted in Fourth Amendment | Comments Off on Should privacy legislation influence how courts interpret the …

DNA repair – Wikipedia, the free encyclopedia

Posted: September 8, 2016 at 6:32 am

DNA damage resulting in multiple broken chromosomes

DNA repair is a collection of processes by which a cell identifies and corrects damage to the DNA molecules that encode its genome. In human cells, both normal metabolic activities and environmental factors such as radiation can cause DNA damage, resulting in as many as 1 million individual molecular lesions per cell per day.[1] Many of these lesions cause structural damage to the DNA molecule and can alter or eliminate the cell’s ability to transcribe the gene that the affected DNA encodes. Other lesions induce potentially harmful mutations in the cell’s genome, which affect the survival of its daughter cells after it undergoes mitosis. As a consequence, the DNA repair process is constantly active as it responds to damage in the DNA structure. When normal repair processes fail, and when cellular apoptosis does not occur, irreparable DNA damage may occur, including double-strand breaks and DNA crosslinkages (interstrand crosslinks or ICLs).[2][3] This can eventually lead to malignant tumors, or cancer, as per the two-hit hypothesis.

The rate of DNA repair is dependent on many factors, including the cell type, the age of the cell, and the extracellular environment. A cell that has accumulated a large amount of DNA damage, or one that no longer effectively repairs damage incurred to its DNA, can enter one of three possible states:

The DNA repair ability of a cell is vital to the integrity of its genome and thus to the normal functionality of that organism. Many genes that were initially shown to influence life span have turned out to be involved in DNA damage repair and protection.[4]

The 2015 Nobel Prize in Chemistry was awarded to Tomas Lindahl, Paul Modrich, and Aziz Sancar for their work on the molecular mechanisms of DNA repair processes.[5][6]

DNA damage, due to environmental factors and normal metabolic processes inside the cell, occurs at a rate of 10,000 to 1,000,000 molecular lesions per cell per day.[1] While this constitutes only 0.000165% of the human genome's approximately 6 billion bases (3 billion base pairs), unrepaired lesions in critical genes (such as tumor suppressor genes) can impede a cell's ability to carry out its function and appreciably increase the likelihood of tumor formation and contribute to tumor heterogeneity.
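The quoted percentage follows directly from the lower bound of that lesion rate; a one-line check using the ~10,000 lesions/day figure and the ~6 billion bases mentioned in the text:

```python
# ~10,000 lesions per day against ~6 billion bases is roughly 0.00017% of
# the genome per day, matching the ~0.000165% figure quoted above.
lesions_per_day = 10_000
genome_bases = 6_000_000_000
print(f"{lesions_per_day / genome_bases:.6%}")   # 0.000167%
```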

The vast majority of DNA damage affects the primary structure of the double helix; that is, the bases themselves are chemically modified. These modifications can in turn disrupt the molecules’ regular helical structure by introducing non-native chemical bonds or bulky adducts that do not fit in the standard double helix. Unlike proteins and RNA, DNA usually lacks tertiary structure and therefore damage or disturbance does not occur at that level. DNA is, however, supercoiled and wound around “packaging” proteins called histones (in eukaryotes), and both superstructures are vulnerable to the effects of DNA damage.

DNA damage can be subdivided into two main types:

The replication of damaged DNA before cell division can lead to the incorporation of wrong bases opposite damaged ones. Daughter cells that inherit these wrong bases carry mutations from which the original DNA sequence is unrecoverable (except in the rare case of a back mutation, for example, through gene conversion).

There are several types of damage to DNA due to endogenous cellular processes:

Damage caused by exogenous agents comes in many forms. Some examples are:

UV damage, alkylation/methylation, X-ray damage and oxidative damage are examples of induced damage. Spontaneous damage can include the loss of a base, deamination, sugar ring puckering and tautomeric shift.

In human cells, and eukaryotic cells in general, DNA is found in two cellular locations: inside the nucleus and inside the mitochondria. Nuclear DNA (nDNA) exists as chromatin during non-replicative stages of the cell cycle and is condensed into aggregate structures known as chromosomes during cell division. In either state the DNA is highly compacted and wound up around bead-like proteins called histones. Whenever a cell needs to express the genetic information encoded in its nDNA the required chromosomal region is unravelled, genes located therein are expressed, and then the region is condensed back to its resting conformation. Mitochondrial DNA (mtDNA) is located inside mitochondria organelles, exists in multiple copies, and is also tightly associated with a number of proteins to form a complex known as the nucleoid. Inside mitochondria, reactive oxygen species (ROS), or free radicals, byproducts of the constant production of adenosine triphosphate (ATP) via oxidative phosphorylation, create a highly oxidative environment that is known to damage mtDNA. A critical enzyme in counteracting the toxicity of these species is superoxide dismutase, which is present in both the mitochondria and cytoplasm of eukaryotic cells.

Senescence, an irreversible process in which the cell no longer divides, is a protective response to the shortening of the chromosome ends. The telomeres are long regions of repetitive noncoding DNA that cap chromosomes and undergo partial degradation each time a cell undergoes division (see Hayflick limit).[10] In contrast, quiescence is a reversible state of cellular dormancy that is unrelated to genome damage (see cell cycle). Senescence in cells may serve as a functional alternative to apoptosis in cases where the physical presence of a cell for spatial reasons is required by the organism,[11] which serves as a “last resort” mechanism to prevent a cell with damaged DNA from replicating inappropriately in the absence of pro-growth cellular signaling. Unregulated cell division can lead to the formation of a tumor (see cancer), which is potentially lethal to an organism. Therefore, the induction of senescence and apoptosis is considered to be part of a strategy of protection against cancer.[12]

It is important to distinguish between DNA damage and mutation, the two major types of error in DNA. DNA damages and mutation are fundamentally different. Damages are physical abnormalities in the DNA, such as single- and double-strand breaks, 8-hydroxydeoxyguanosine residues, and polycyclic aromatic hydrocarbon adducts. DNA damages can be recognized by enzymes, and, thus, they can be correctly repaired if redundant information, such as the undamaged sequence in the complementary DNA strand or in a homologous chromosome, is available for copying. If a cell retains DNA damage, transcription of a gene can be prevented, and, thus, translation into a protein will also be blocked. Replication may also be blocked or the cell may die.

In contrast to DNA damage, a mutation is a change in the base sequence of the DNA. A mutation cannot be recognized by enzymes once the base change is present in both DNA strands, and, thus, a mutation cannot be repaired. At the cellular level, mutations can cause alterations in protein function and regulation. Mutations are replicated when the cell replicates. In a population of cells, mutant cells will increase or decrease in frequency according to the effects of the mutation on the ability of the cell to survive and reproduce. Although distinctly different from each other, DNA damages and mutations are related because DNA damages often cause errors of DNA synthesis during replication or repair; these errors are a major source of mutation.

Given these properties of DNA damage and mutation, it can be seen that DNA damages are a special problem in non-dividing or slowly dividing cells, where unrepaired damages will tend to accumulate over time. On the other hand, in rapidly dividing cells, unrepaired DNA damages that do not kill the cell by blocking replication will tend to cause replication errors and thus mutation. The great majority of mutations that are not neutral in their effect are deleterious to a cell’s survival. Thus, in a population of cells composing a tissue with replicating cells, mutant cells will tend to be lost. However, infrequent mutations that provide a survival advantage will tend to clonally expand at the expense of neighboring cells in the tissue. This advantage to the cell is disadvantageous to the whole organism, because such mutant cells can give rise to cancer. Thus, DNA damages in frequently dividing cells, because they give rise to mutations, are a prominent cause of cancer. In contrast, DNA damages in infrequently dividing cells are likely a prominent cause of aging.[13]
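
The contrast drawn here, damage piling up in non-dividing cells versus damage being converted into fixed mutations in dividing cells, can be sketched as a toy simulation (all rates and probabilities below are invented for illustration and carry no quantitative meaning):

```python
# Toy model: unrepaired DNA damage accumulates in non-dividing cells,
# while in dividing cells a fraction of remaining damage is converted
# into fixed mutations at each replication. Rates are illustrative only.
import random

def simulate(steps=1000, damage_rate=5, repair_rate=4, dividing=False,
             misreplication_prob=0.3, seed=0):
    random.seed(seed)
    damage, mutations = 0, 0
    for _ in range(steps):
        damage += damage_rate                  # new lesions per time step
        damage = max(0, damage - repair_rate)  # repair removes some lesions
        if dividing:
            # replication over each remaining lesion either fixes a mutation
            # or restores the correct sequence; either way the lesion is resolved
            for _ in range(damage):
                if random.random() < misreplication_prob:
                    mutations += 1
            damage = 0
    return damage, mutations

print(simulate(dividing=False))  # lesions pile up, few or no mutations
print(simulate(dividing=True))   # lesions stay low, mutations accumulate
```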

Single-strand and double-strand DNA damage

Cells cannot function if DNA damage corrupts the integrity and accessibility of essential information in the genome (but cells remain superficially functional when non-essential genes are missing or damaged). Depending on the type of damage inflicted on the DNA’s double helical structure, a variety of repair strategies have evolved to restore lost information. If possible, cells use the unmodified complementary strand of the DNA or the sister chromatid as a template to recover the original information. Without access to a template, cells use an error-prone recovery mechanism known as translesion synthesis as a last resort.

Damage to DNA alters the spatial configuration of the helix, and such alterations can be detected by the cell. Once damage is localized, specific DNA repair molecules bind at or near the site of damage, inducing other molecules to bind and form a complex that enables the actual repair to take place.

Cells are known to eliminate three types of damage to their DNA by chemically reversing it. These mechanisms do not require a template, since the types of damage they counteract can occur in only one of the four bases. Such direct reversal mechanisms are specific to the type of damage incurred and do not involve breakage of the phosphodiester backbone. The formation of pyrimidine dimers upon irradiation with UV light results in an abnormal covalent bond between adjacent pyrimidine bases. The photoreactivation process directly reverses this damage by the action of the enzyme photolyase, whose activation is obligately dependent on energy absorbed from blue/UV light (300–500 nm wavelength) to promote catalysis.[14] Photolyase, an old enzyme present in bacteria, fungi, and most animals, no longer functions in humans,[15] who instead use nucleotide excision repair to repair damage from UV irradiation. Another type of damage, methylation of guanine bases, is directly reversed by the protein methyl guanine methyl transferase (MGMT), the bacterial equivalent of which is called ogt. This is an expensive process because each MGMT molecule can be used only once; that is, the reaction is stoichiometric rather than catalytic.[16] A generalized response to methylating agents in bacteria is known as the adaptive response and confers a level of resistance to alkylating agents upon sustained exposure by upregulation of alkylation repair enzymes.[17] The third type of DNA damage reversed by cells is certain methylation of the bases cytosine and adenine.

When only one of the two strands of a double helix has a defect, the other strand can be used as a template to guide the correction of the damaged strand. In order to repair damage to one of the two paired molecules of DNA, there exist a number of excision repair mechanisms that remove the damaged nucleotide and replace it with an undamaged nucleotide complementary to that found in the undamaged DNA strand.[16]

Double-strand breaks, in which both strands in the double helix are severed, are particularly hazardous to the cell because they can lead to genome rearrangements. Three mechanisms exist to repair double-strand breaks (DSBs): non-homologous end joining (NHEJ), microhomology-mediated end joining (MMEJ), and homologous recombination.[16] PVN Acharya noted that double-strand breaks and a “cross-linkage joining both strands at the same point is irreparable because neither strand can then serve as a template for repair. The cell will die in the next mitosis or in some rare instances, mutate.”[2][3]

In NHEJ, DNA Ligase IV, a specialized DNA ligase that forms a complex with the cofactor XRCC4, directly joins the two ends.[21] To guide accurate repair, NHEJ relies on short homologous sequences called microhomologies present on the single-stranded tails of the DNA ends to be joined. If these overhangs are compatible, repair is usually accurate.[22][23][24][25] NHEJ can also introduce mutations during repair. Loss of damaged nucleotides at the break site can lead to deletions, and joining of nonmatching termini forms insertions or translocations. NHEJ is especially important before the cell has replicated its DNA, since there is no template available for repair by homologous recombination. There are “backup” NHEJ pathways in higher eukaryotes.[26] Besides its role as a genome caretaker, NHEJ is required for joining hairpin-capped double-strand breaks induced during V(D)J recombination, the process that generates diversity in B-cell and T-cell receptors in the vertebrate immune system.[27]

MMEJ starts with short-range end resection by the MRE11 nuclease on either side of a double-strand break to reveal microhomology regions.[28] In subsequent steps,[29] PARP1 is required and may act at an early step in MMEJ. Pairing of the microhomology regions is followed by recruitment of flap structure-specific endonuclease 1 (FEN1) to remove overhanging flaps. This is followed by recruitment of XRCC1–LIG3 to the site to ligate the DNA ends, leading to an intact DNA.

DNA double-strand breaks in mammalian cells are primarily repaired by homologous recombination (HR) and non-homologous end joining (NHEJ).[30] In an in vitro system, MMEJ occurred in mammalian cells at levels of 10–20% of HR when both HR and NHEJ mechanisms were also available.[28] MMEJ is always accompanied by a deletion, so that MMEJ is a mutagenic pathway for DNA repair.[31]

Homologous recombination requires the presence of an identical or nearly identical sequence to be used as a template for repair of the break. The enzymatic machinery responsible for this repair process is nearly identical to the machinery responsible for chromosomal crossover during meiosis. This pathway allows a damaged chromosome to be repaired using a sister chromatid (available in G2 after DNA replication) or a homologous chromosome as a template. DSBs caused by the replication machinery attempting to synthesize across a single-strand break or unrepaired lesion cause collapse of the replication fork and are typically repaired by recombination.

Topoisomerases introduce both single- and double-strand breaks in the course of changing the DNA’s state of supercoiling, which is especially common in regions near an open replication fork. Such breaks are not considered DNA damage because they are a natural intermediate in the topoisomerase biochemical mechanism and are immediately repaired by the enzymes that created them.

A team of French researchers bombarded Deinococcus radiodurans with ionizing radiation to study the mechanism of double-strand break DNA repair in that bacterium. At least two copies of the genome, with random DNA breaks, can form DNA fragments through annealing. Partially overlapping fragments are then used for synthesis of homologous regions through a moving D-loop that can continue extension until the fragments find complementary partner strands. In the final step, there is crossover by means of RecA-dependent homologous recombination.[32]

Translesion synthesis (TLS) is a DNA damage tolerance process that allows the DNA replication machinery to replicate past DNA lesions such as thymine dimers or AP sites.[33] It involves switching out regular DNA polymerases for specialized translesion polymerases (e.g. DNA polymerase IV or V, from the Y polymerase family), often with larger active sites that can facilitate the insertion of bases opposite damaged nucleotides. The polymerase switching is thought to be mediated by, among other factors, the post-translational modification of the replication processivity factor PCNA. Translesion synthesis polymerases often have low fidelity (a high propensity to insert wrong bases) on undamaged templates relative to regular polymerases. However, many are extremely efficient at inserting correct bases opposite specific types of damage. For example, Pol η mediates error-free bypass of lesions induced by UV irradiation, whereas Pol ι introduces mutations at these sites. Pol η is known to add the first adenine across the T^T photodimer using Watson-Crick base pairing, and the second adenine is added in its syn conformation using Hoogsteen base pairing. From a cellular perspective, risking the introduction of point mutations during translesion synthesis may be preferable to resorting to more drastic mechanisms of DNA repair, which may cause gross chromosomal aberrations or cell death. In short, the process involves specialized polymerases either bypassing or repairing lesions at locations of stalled DNA replication. For example, human DNA polymerase η can bypass complex DNA lesions such as the guanine-thymine intra-strand crosslink G[8,5-Me]T, although it can cause targeted and semi-targeted mutations.[34] Paromita Raychaudhury and Ashis Basu[35] studied the toxicity and mutagenesis of the same lesion in Escherichia coli by replicating a G[8,5-Me]T-modified plasmid in E. coli strains with specific DNA polymerase knockouts. Viability was very low in a strain lacking pol II, pol IV, and pol V, the three SOS-inducible DNA polymerases, indicating that translesion synthesis is conducted primarily by these specialized DNA polymerases. A bypass platform is provided to these polymerases by proliferating cell nuclear antigen (PCNA). Under normal circumstances, PCNA bound to polymerases replicates the DNA. At a site of a lesion, PCNA is ubiquitinated, or modified, by the RAD6/RAD18 proteins to provide a platform for the specialized polymerases to bypass the lesion and resume DNA replication.[36][37] After translesion synthesis, extension is required. This extension can be carried out by a replicative polymerase if the TLS is error-free, as in the case of Pol η; if TLS results in a mismatch, a specialized polymerase, Pol ζ, is needed to extend it. Pol ζ is unique in that it can extend terminal mismatches, whereas more processive polymerases cannot. So when a lesion is encountered, the replication fork stalls, PCNA switches from a processive polymerase to a TLS polymerase such as Pol ι to fix the lesion, then PCNA may switch to Pol ζ to extend the mismatch, and finally PCNA switches back to the processive polymerase to continue replication.
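
The trade-off described above, accepting a possible point mutation in exchange for completing replication, can be sketched as a toy simulation of polymerase switching at a lesion (a minimal illustration; the sequence, the lesion marker 'X', and the error probability are invented for this example and do not correspond to measured polymerase error rates):

```python
# Toy sketch of translesion synthesis: high-fidelity copying proceeds until a
# lesion ('X') is reached, a TLS step inserts a base opposite it (possibly the
# wrong one), and replication then resumes. Error rate is illustrative only.
import random

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def replicate(template: str, tls_error_prob: float = 0.3, seed: int = 1) -> str:
    random.seed(seed)
    new_strand = []
    for base in template:
        if base in COMPLEMENT:
            new_strand.append(COMPLEMENT[base])          # high-fidelity copying
        else:
            # lesion: TLS polymerase guesses; here we assume the damaged base was a T
            if random.random() < tls_error_prob:
                new_strand.append(random.choice("TGC"))  # point mutation risk
            else:
                new_strand.append("A")                   # error-free bypass
    return "".join(new_strand)

print(replicate("ACGTXGCA"))  # bypass completes; the base opposite X may be wrong
```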

Cells exposed to ionizing radiation, ultraviolet light or chemicals are prone to acquire multiple sites of bulky DNA lesions and double-strand breaks. Moreover, DNA damaging agents can damage other biomolecules such as proteins, carbohydrates, lipids, and RNA. The accumulation of damage, specifically double-strand breaks or adducts that stall the replication forks, is among the known stimulation signals for a global response to DNA damage.[38] The global response to damage is directed toward the cells' own preservation and triggers multiple pathways of macromolecular repair, lesion bypass, tolerance, or apoptosis. The common features of the global response are induction of multiple genes, cell cycle arrest, and inhibition of cell division.

After DNA damage, cell cycle checkpoints are activated. Checkpoint activation pauses the cell cycle and gives the cell time to repair the damage before continuing to divide. DNA damage checkpoints occur at the G1/S and G2/M boundaries. An intra-S checkpoint also exists. Checkpoint activation is controlled by two master kinases, ATM and ATR. ATM responds to DNA double-strand breaks and disruptions in chromatin structure,[39] whereas ATR primarily responds to stalled replication forks. These kinases phosphorylate downstream targets in a signal transduction cascade, eventually leading to cell cycle arrest. A class of checkpoint mediator proteins including BRCA1, MDC1, and 53BP1 has also been identified.[40] These proteins seem to be required for transmitting the checkpoint activation signal to downstream proteins.
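
As a rough schematic of the signaling logic just described (a deliberately simplified sketch; real checkpoint control integrates many more inputs, feedback loops, and mediator proteins such as BRCA1, MDC1, and 53BP1), the routing of damage signals through ATM or ATR toward cell cycle arrest might be caricatured as:

```python
# Minimal sketch of DNA damage checkpoint routing: double-strand breaks and
# chromatin disruption signal mainly through ATM, stalled replication forks
# mainly through ATR; both converge on cell cycle arrest. Greatly simplified.

def checkpoint_response(damage_type: str) -> str:
    routes = {
        "double_strand_break": "ATM",
        "chromatin_disruption": "ATM",
        "stalled_replication_fork": "ATR",
    }
    kinase = routes.get(damage_type)
    if kinase is None:
        return "no checkpoint activation"
    # mediator proteins relay the signal to downstream targets in a cascade
    return f"{kinase} activated -> signal cascade -> cell cycle arrest"

print(checkpoint_response("double_strand_break"))
print(checkpoint_response("stalled_replication_fork"))
```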

The DNA damage checkpoint is a signal transduction pathway that blocks cell cycle progression in G1, G2 and metaphase and slows the rate of S phase progression when DNA is damaged. It leads to a pause in the cell cycle, allowing the cell time to repair the damage before continuing to divide.

Checkpoint proteins can be separated into four groups: phosphatidylinositol 3-kinase (PI3K)-like protein kinases, the proliferating cell nuclear antigen (PCNA)-like group, two serine/threonine (S/T) kinases, and their adaptors. Central to all DNA damage-induced checkpoint responses is a pair of large protein kinases belonging to the first group, the PI3K-like protein kinases ATM (ataxia telangiectasia mutated) and ATR (ataxia telangiectasia and Rad3-related), whose sequences and functions have been well conserved in evolution. All DNA damage responses require either ATM or ATR because they have the ability to bind to the chromosomes at the site of DNA damage, together with accessory proteins that are platforms on which DNA damage response components and DNA repair complexes can be assembled.

An important downstream target of ATM and ATR is p53, as it is required for inducing apoptosis following DNA damage.[41] The cyclin-dependent kinase inhibitor p21 is induced by both p53-dependent and p53-independent mechanisms and can arrest the cell cycle at the G1/S and G2/M checkpoints by deactivating cyclin/cyclin-dependent kinase complexes.[42]

The SOS response is a set of changes in gene expression in Escherichia coli and other bacteria in response to extensive DNA damage. The prokaryotic SOS system is regulated by two key proteins: LexA and RecA. The LexA homodimer is a transcriptional repressor that binds to operator sequences commonly referred to as SOS boxes. In Escherichia coli it is known that LexA regulates transcription of approximately 48 genes including the lexA and recA genes.[43] The SOS response is known to be widespread in the Bacteria domain, but it is mostly absent in some bacterial phyla, such as the Spirochetes.[44] The most common cellular signals activating the SOS response are regions of single-stranded DNA (ssDNA), arising from stalled replication forks or double-strand breaks, which are processed by DNA helicase to separate the two DNA strands.[38] In the initiation step, RecA protein binds to ssDNA in an ATP hydrolysis driven reaction, creating RecA–ssDNA filaments. RecA–ssDNA filaments activate the autoprotease activity of LexA, which ultimately leads to cleavage of the LexA dimer and subsequent LexA degradation. The loss of the LexA repressor induces transcription of the SOS genes and allows for further signal induction, inhibition of cell division and an increase in levels of proteins responsible for damage processing.

In Escherichia coli, SOS boxes are 20-nucleotide-long sequences near promoters with palindromic structure and a high degree of sequence conservation. In other classes and phyla, the sequence of SOS boxes varies considerably, with different length and composition, but it is always highly conserved and one of the strongest short signals in the genome.[44] The high information content of SOS boxes permits differential binding of LexA to different promoters and allows for timing of the SOS response. The lesion repair genes are induced at the beginning of the SOS response. The error-prone translesion polymerases, for example, UmuCD'2 (also called DNA polymerase V), are induced later on as a last resort.[45] Once the DNA damage is repaired or bypassed using polymerases or through recombination, the amount of single-stranded DNA in cells decreases. Lowering the amount of RecA filaments decreases the cleavage activity of the LexA homodimer, which then binds to the SOS boxes near promoters and restores normal gene expression.
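
The "information content" invoked here to explain differential LexA binding can be made concrete with a small calculation over a toy motif alignment (a minimal sketch; the aligned sequences below are illustrative stand-ins patterned on a palindromic consensus, not experimentally determined SOS boxes, and equal background base frequencies are assumed):

```python
# Toy calculation of the information content (in bits) of a sequence motif,
# the quantity invoked above to explain graded LexA binding to SOS boxes.
# The aligned sequences are invented placeholders, not real SOS boxes.
from math import log2
from collections import Counter

def information_content(aligned_sites):
    """Sum over positions of 2 - H(position), assuming uniform background base frequencies."""
    total_bits = 0.0
    for column in zip(*aligned_sites):
        counts = Counter(column)
        n = len(column)
        entropy = -sum((c / n) * log2(c / n) for c in counts.values())
        total_bits += 2.0 - entropy   # 2 bits = maximum per DNA position
    return total_bits

toy_sites = ["TACTGTATATATATACAGTA",
             "TACTGTATGTATATACAGTA",
             "TACTGGATATATATACAGTA"]
print(round(information_content(toy_sites), 2))  # higher = more conserved motif
```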

Eukaryotic cells exposed to DNA damaging agents also activate important defensive pathways by inducing multiple proteins involved in DNA repair, cell cycle checkpoint control, and protein trafficking and degradation. This genome-wide transcriptional response is complex and tightly regulated, allowing a coordinated global response to damage. Exposure of the yeast Saccharomyces cerevisiae to DNA damaging agents results in overlapping but distinct transcriptional profiles. Similarities to the environmental shock response indicate that a general global stress-response pathway exists at the level of transcriptional activation. In contrast, different human cell types respond to damage differently, indicating an absence of a common global response. The probable explanation for this difference between yeast and human cells lies in the heterogeneity of mammalian cells: in an animal, different types of cells are distributed among different organs and have evolved different sensitivities to DNA damage.[46]

In general, the global response to DNA damage involves the expression of multiple genes responsible for postreplication repair, homologous recombination, nucleotide excision repair, the DNA damage checkpoint, global transcriptional activation, control of mRNA decay, and many other functions. A large amount of damage to a cell leaves it with an important decision: undergo apoptosis and die, or survive at the cost of living with a modified genome. An increase in tolerance to damage can lead to an increased rate of survival that will allow a greater accumulation of mutations. Yeast Rev1 and human polymerase η are members of the Y family of translesion DNA polymerases present during the global response to DNA damage and are responsible for enhanced mutagenesis during this response in eukaryotes.[38]

DNA repair rate is an important determinant of cell pathology

Experimental animals with genetic deficiencies in DNA repair often show decreased life span and increased cancer incidence.[13] For example, mice deficient in the dominant NHEJ pathway and in telomere maintenance mechanisms get lymphoma and infections more often and, as a consequence, have shorter lifespans than wild-type mice.[47] In a similar manner, mice deficient in a key repair and transcription protein that unwinds DNA helices have premature onset of aging-related diseases and consequent shortening of lifespan.[48] However, not every DNA repair deficiency creates exactly the predicted effects; mice deficient in the NER pathway exhibited shortened life span without correspondingly higher rates of mutation.[49]

If the rate of DNA damage exceeds the capacity of the cell to repair it, the accumulation of errors can overwhelm the cell and result in early senescence, apoptosis, or cancer. Inherited diseases associated with faulty DNA repair functioning result in premature aging,[13] increased sensitivity to carcinogens, and correspondingly increased cancer risk (see below). On the other hand, organisms with enhanced DNA repair systems, such as Deinococcus radiodurans, the most radiation-resistant known organism, exhibit remarkable resistance to the double-strand break-inducing effects of radioactivity, likely due to enhanced efficiency of DNA repair and especially NHEJ.[50]

Most life span influencing genes affect the rate of DNA damage

A number of individual genes have been identified as influencing variations in life span within a population of organisms. The effects of these genes are strongly dependent on the environment, in particular on the organism's diet. Caloric restriction reproducibly results in extended lifespan in a variety of organisms, likely via nutrient-sensing pathways and decreased metabolic rate. The molecular mechanisms by which such restriction results in lengthened lifespan are as yet unclear (see[51] for some discussion); however, the behavior of many genes known to be involved in DNA repair is altered under conditions of caloric restriction.

For example, increasing the gene dosage of the gene SIR-2, which regulates DNA packaging in the nematode worm Caenorhabditis elegans, can significantly extend lifespan.[52] The mammalian homolog of SIR-2 is known to induce downstream DNA repair factors involved in NHEJ, an activity that is especially promoted under conditions of caloric restriction.[53] Caloric restriction has been closely linked to the rate of base excision repair in the nuclear DNA of rodents,[54] although similar effects have not been observed in mitochondrial DNA.[55]

The C. elegans gene AGE-1, an upstream effector of DNA repair pathways, confers dramatically extended life span under free-feeding conditions but leads to a decrease in reproductive fitness under conditions of caloric restriction.[56] This observation supports the pleiotropy theory of the biological origins of aging, which suggests that genes conferring a large survival advantage early in life will be selected for even if they carry a corresponding disadvantage late in life.

Defects in the NER mechanism are responsible for several genetic disorders, including xeroderma pigmentosum, Cockayne syndrome, and trichothiodystrophy.

Mental retardation often accompanies the latter two disorders, suggesting increased vulnerability of developmental neurons.

Other DNA repair disorders include Werner syndrome (premature aging and retarded growth), Bloom syndrome (sunlight hypersensitivity and a high incidence of malignancies, especially leukemias), and ataxia telangiectasia (sensitivity to ionizing radiation and some chemical agents).

All of the above diseases are often called “segmental progerias” (“accelerated aging diseases”) because their victims appear elderly and suffer from aging-related diseases at an abnormally young age, while not manifesting all the symptoms of old age.

Other diseases associated with reduced DNA repair function include Fanconi anemia, hereditary breast cancer and hereditary colon cancer.

Because of inherent limitations in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer.[57][58] There are at least 34 inherited human DNA repair gene mutations that increase cancer risk. Many of these mutations cause DNA repair to be less effective than normal. In particular, hereditary nonpolyposis colorectal cancer (HNPCC) is strongly associated with specific mutations in the DNA mismatch repair pathway. BRCA1 and BRCA2, two famous genes whose mutations confer a hugely increased risk of breast cancer on carriers, are both associated with a large number of DNA repair pathways, especially NHEJ and homologous recombination.

Cancer therapy procedures such as chemotherapy and radiotherapy work by overwhelming the capacity of the cell to repair DNA damage, resulting in cell death. Cells that divide most rapidly, most typically cancer cells, are preferentially affected. The side effect is that other non-cancerous but rapidly dividing cells, such as progenitor cells in the gut, skin, and hematopoietic system, are also affected. Modern cancer treatments attempt to localize the DNA damage to cells and tissues only associated with cancer, either by physical means (concentrating the therapeutic agent in the region of the tumor) or by biochemical means (exploiting a feature unique to cancer cells in the body).

Classically, cancer has been viewed as a set of diseases that are driven by progressive genetic abnormalities that include mutations in tumour-suppressor genes and oncogenes, and chromosomal aberrations. However, it has become apparent that cancer is also driven by epigenetic alterations.[59]

Epigenetic alterations refer to functionally relevant modifications to the genome that do not involve a change in the nucleotide sequence. Examples of such modifications are changes in DNA methylation (hypermethylation and hypomethylation) and histone modification,[60] changes in chromosomal architecture (caused by inappropriate expression of proteins such as HMGA2 or HMGA1)[61] and changes caused by microRNAs. Each of these epigenetic alterations serves to regulate gene expression without altering the underlying DNA sequence. These changes usually remain through cell divisions, last for multiple cell generations, and can be considered to be epimutations (equivalent to mutations).

While large numbers of epigenetic alterations are found in cancers, the epigenetic alterations in DNA repair genes, causing reduced expression of DNA repair proteins, appear to be particularly important. Such alterations are thought to occur early in progression to cancer and to be a likely cause of the genetic instability characteristic of cancers.[62][63][64][65]

Reduced expression of DNA repair genes causes deficient DNA repair. When DNA repair is deficient, DNA damages remain in cells at a higher than usual level, and these excess damages cause increased frequencies of mutation or epimutation. Mutation rates increase substantially in cells defective in DNA mismatch repair[66][67] or in homologous recombinational repair (HRR).[68] Chromosomal rearrangements and aneuploidy also increase in HRR-defective cells.[69]

Higher levels of DNA damage not only cause increased mutation, but also cause increased epimutation. During repair of DNA double strand breaks, or repair of other DNA damages, incompletely cleared sites of repair can cause epigenetic gene silencing.[70][71]

Deficient expression of DNA repair proteins due to an inherited mutation can cause increased risk of cancer. Individuals with an inherited impairment in any of 34 DNA repair genes (see article DNA repair-deficiency disorder) have an increased risk of cancer, with some defects causing up to a 100% lifetime chance of cancer (e.g. p53 mutations).[72] However, such germline mutations (which cause highly penetrant cancer syndromes) are the cause of only about 1 percent of cancers.[73]

Deficiencies in DNA repair enzymes are occasionally caused by a newly arising somatic mutation in a DNA repair gene, but are much more frequently caused by epigenetic alterations that reduce or silence expression of DNA repair genes. For example, when 113 colorectal cancers were examined in sequence, only four had a missense mutation in the DNA repair gene MGMT, while the majority had reduced MGMT expression due to methylation of the MGMT promoter region (an epigenetic alteration).[74] Five different studies found that between 40% and 90% of colorectal cancers have reduced MGMT expression due to methylation of the MGMT promoter region.[75][76][77][78][79]

Similarly, out of 119 cases of mismatch repair-deficient colorectal cancers that lacked DNA repair gene PMS2 expression, PMS2 was deficient in 6 due to mutations in the PMS2 gene, while in 103 cases PMS2 expression was deficient because its pairing partner MLH1 was repressed due to promoter methylation (PMS2 protein is unstable in the absence of MLH1).[80] In the other 10 cases, loss of PMS2 expression was likely due to epigenetic overexpression of the microRNA, miR-155, which down-regulates MLH1.[81]

In further examples (tabulated in Table 4 of this reference[82]), epigenetic defects were found at frequencies of between 13% and 100% for the DNA repair genes BRCA1, WRN, FANCB, FANCF, MGMT, MLH1, MSH2, MSH4, ERCC1, XPF, NEIL1 and ATM. These epigenetic defects occurred in various cancers (e.g. breast, ovarian, colorectal and head and neck). Two or three deficiencies in the expression of ERCC1, XPF or PMS2 occur simultaneously in the majority of the 49 colon cancers evaluated by Facista et al.[83]

The chart in this section shows some frequent DNA damaging agents, examples of DNA lesions they cause, and the pathways that deal with these DNA damages. At least 169 enzymes are either directly employed in DNA repair or influence DNA repair processes.[84] Of these, 83 are directly employed in repairing the 5 types of DNA damages illustrated in the chart.

Some of the better studied genes central to these repair processes are shown in the chart. The gene designations shown in red, gray or cyan indicate genes frequently epigenetically altered in various types of cancers. Wikipedia articles on each of the genes highlighted in red, gray or cyan describe the epigenetic alteration(s) and the cancer(s) in which these epimutations are found. Two review articles[82][85] and two broad experimental survey articles[86][87] also document most of these epigenetic DNA repair deficiencies in cancers.

Red-highlighted genes are frequently reduced or silenced by epigenetic mechanisms in various cancers. When these genes have low or absent expression, DNA damages can accumulate. Replication errors past these damages (see translesion synthesis) can lead to increased mutations and, ultimately, cancer. Epigenetic repression of DNA repair genes in accurate DNA repair pathways appears to be central to carcinogenesis.

The two gray-highlighted genes, RAD51 and BRCA2, are required for homologous recombinational repair. They are sometimes epigenetically over-expressed and sometimes under-expressed in certain cancers. As indicated in the Wikipedia articles on RAD51 and BRCA2, such cancers ordinarily have epigenetic deficiencies in other DNA repair genes. These repair deficiencies would likely cause increased unrepaired DNA damages. The over-expression of RAD51 and BRCA2 seen in these cancers may reflect selective pressure for compensatory RAD51 or BRCA2 over-expression and increased homologous recombinational repair to at least partially deal with such excess DNA damages. In those cases where RAD51 or BRCA2 are under-expressed, this would itself lead to increased unrepaired DNA damages. Replication errors past these damages (see translesion synthesis) could cause increased mutations and cancer, so that under-expression of RAD51 or BRCA2 would be carcinogenic in itself.

Cyan-highlighted genes are in the microhomology-mediated end joining (MMEJ) pathway and are up-regulated in cancer. MMEJ is an additional error-prone, inaccurate repair pathway for double-strand breaks. In MMEJ repair of a double-strand break, a homology of 5–25 complementary base pairs between the two paired strands is sufficient to align the strands, but mismatched ends (flaps) are usually present. MMEJ removes the extra nucleotides (flaps) where strands are joined, and then ligates the strands to create an intact DNA double helix. MMEJ almost always involves at least a small deletion, so that it is a mutagenic pathway.[88] FEN1, the flap endonuclease in MMEJ, is epigenetically increased by promoter hypomethylation and is over-expressed in the majority of cancers of the breast,[89] prostate,[90] stomach,[91][92] neuroblastomas,[93] pancreas,[94] and lung.[95] PARP1 is also over-expressed when its promoter region ETS site is epigenetically hypomethylated, and this contributes to progression to endometrial cancer,[96] BRCA-mutated ovarian cancer,[97] and BRCA-mutated serous ovarian cancer.[98] Other genes in the MMEJ pathway are also over-expressed in a number of cancers (see MMEJ for summary), and are also shown in cyan.
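
The alignment step described above, locating a 5–25 base-pair microhomology shared by the two resected ends, can be illustrated with a simple string search (a simplified sketch; the sequences are invented, and real MMEJ operates by annealing of complementary single-stranded tails rather than a plain substring match):

```python
# Toy illustration of MMEJ-style microhomology alignment: find the longest
# sequence shared by the two sides of a break, then join through it, which
# deletes the intervening bases (flaps). Deliberately simplified.

def longest_microhomology(left_tail: str, right_tail: str, min_len: int = 5) -> str:
    """Return the longest substring shared by both tails, if at least min_len long."""
    best = ""
    for i in range(len(left_tail)):
        for j in range(i + min_len, len(left_tail) + 1):
            candidate = left_tail[i:j]
            if candidate in right_tail and len(candidate) > len(best):
                best = candidate
    return best

left = "GGATCCTTAGCGTACGT"    # sequence on one side of the break (illustrative)
right = "TTAGCGTAAGGCCTTGA"   # sequence on the other side of the break (illustrative)
print(longest_microhomology(left, right))  # shared microhomology used to align the ends
```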

The basic processes of DNA repair are highly conserved among both prokaryotes and eukaryotes, and even among bacteriophages (viruses that infect bacteria); however, more complex organisms with more complex genomes have correspondingly more complex repair mechanisms.[99] The ability of a large number of protein structural motifs to catalyze relevant chemical reactions has played a significant role in the elaboration of repair mechanisms during evolution. For an extremely detailed review of hypotheses relating to the evolution of DNA repair, see the cited review.[100]

The fossil record indicates that single-cell life began to proliferate on the planet at some point during the Precambrian period, although exactly when recognizably modern life first emerged is unclear. Nucleic acids became the sole and universal means of encoding genetic information, requiring DNA repair mechanisms that in their basic form have been inherited by all extant life forms from their common ancestor. The emergence of Earth’s oxygen-rich atmosphere (known as the “oxygen catastrophe”) due to photosynthetic organisms, as well as the presence of potentially damaging free radicals in the cell due to oxidative phosphorylation, necessitated the evolution of DNA repair mechanisms that act specifically to counter the types of damage induced by oxidative stress.

On some occasions, DNA damage is not repaired, or is repaired by an error-prone mechanism that results in a change from the original sequence. When this occurs, mutations may propagate into the genomes of the cell's progeny. Should such an event occur in a germ line cell that will eventually produce a gamete, the mutation has the potential to be passed on to the organism's offspring. The rate of evolution in a particular species (or in a particular gene) is a function of the rate of mutation. As a consequence, the rate and accuracy of DNA repair mechanisms have an influence over the process of evolutionary change.[101] Since the normal adaptation of populations of organisms to changing circumstances (for instance, the adaptation of the beaks of a population of finches to the changing presence of hard seeds or insects) proceeds by gene regulation and the recombination and selection of gene variants (alleles), and not by passing on irreparable DNA damages to the offspring,[102] DNA damage protection and repair do not influence the rate of adaptation by gene regulation and by recombination and selection of alleles. On the other hand, DNA damage repair and protection do influence the rate of accumulation of irreparable, advantageous, code-expanding, heritable mutations, and thus slow down the evolutionary mechanism for expansion of the genome of organisms with new functionalities. The tension between evolvability and mutation repair and protection needs further investigation.

A genome-editing technology based on clustered regularly interspaced short palindromic repeats, known as CRISPR-Cas9, was described in 2012. The technology allows anyone with molecular biology training to alter the genes of any species with precision.[103]

Read the original:
DNA repair – Wikipedia, the free encyclopedia
