Tag Archives: post

[Release] BF1 Internal Hack – unknowncheats.me

Posted: January 4, 2017 at 6:17 pm

Hey guys, I’m uploading my BF1 hack. It’s the best BF-series hack I’ve ever made, so enjoy it.

The download contains 2 files: the injector (huh, I’m generous…) and the hack. Put the injector and the hack together in the same directory, then open the injector and it’ll automatically inject the hack into BF1. Enter BF1 and press END.

The features are the following:

ESP Features:
- Use NUMPAD0 to show/hide the menu.
- Enable/disable ESP.
- ESP Distance: set the distance out to which ESP renders.
- Show Friends: enable this when you want to see your friends or teammates.
- Show Bones: enable this if you want to see the player bones.
- Show Names: enable this if you want to see the players’ names.
- Show HP: enable or disable the HP bar.
- Show Distance: enable or disable showing the distance to the enemy.
- Show FOV Circle: if you have the aimbot enabled and the enemy is inside your circle, the aimbot will aim at him.

Aimbot Features:
- Enable/disable Aimbot (Right Mouse Button and Left Alt to use the aimbot).
- Prioritize Distance: enabling this makes aiming depend on 2 things: the FOV and your distance from the enemy. (IT’S BETTER TO ENABLE THIS.)
- Max Distance: choose a maximum distance at which the aimbot will work.
- FOV: maximum aimbot FOV, to aim better.
- Smooth Factor: recommended to keep this a bit low, less than 0.1.
- Retarget Time: time the aimbot pauses when switching between targets.
- Bones: choose one of the 3 bones currently available to aim at.
- Random Bones: watch out with this. Use it only when your smooth factor is very low, because the aimbot will aim at a random bone each time; with a high smooth factor it’ll snap quickly from bone to bone.

Misc Features:
- No Sway/Recoil (can be risky)
- Instant Hit (can be risky)

Anti-Cheat Features:
- DX11 Screen Cleaner (just in case, although PB is disabled in this game)
- BitBlt Screen Cleaner (for FF)

To open the menu, press the “END” key, which is below the “HOME” one. Press F8 to close the hack.

Known Issues:
- I’d like to get feedback on this hack because I want to improve the aimbot a lot more.
- Sometimes the map randomly changes color, I guess due to some problems saving the DX buffer.
- Don’t rage if you don’t want to get banned. Play normally with cheats.

IMPORTANT NOTES (*):
- After you inject the hack, press the “END” key to enable it.
- The game should be fullscreen windowed (BORDERLESS).
- Download the Visual C++ 2015 x64 Redistributable from the official Windows site.

Credits:
- @RozenMaiden for this post.
- @Extropian for helping me with the DirectX SS hooks.
- @stevemk14ebr for his great PolyHook library.
- @Extasy Hosting for his ImGui style.
- @evolution536 for his great DX universal hook.
- @GuTu for helping me set up GetAABB and GetTransform in the correct position and for sharing some vehicle code.
- [emailprotected] for BB injection.

Downloads:
- DOWNLOAD (19/10/2016): CLICK
- DOWNLOAD, Trial and Enlister version (20/10/2016): CLICK
- DOWNLOAD (24/10/2016): CLICK
- DOWNLOAD v1.1 (15/11/2016): CLICK
- DOWNLOAD v1.2 (11/12/2016): CLICK
- DOWNLOAD v1.3 (19/12/2016): CLICK

Change Log 20/10/16:
- Added Enlister version compatibility.
- Added no recoil and sway.
- Added Instant Hit.

Change Log 24/10/16:
- Added ESP features: show health, FOV circle, names, and distance.
- The aimbot has been drastically improved: the FOV is now far more accurate, and it takes into account whether or not you are looking at an enemy.
- Prioritize Distance is fixed too and now takes into account the distance of the closest enemy.
- Added the Retarget Time feature, which makes you look more legit than before. You can choose a value from 1 to 1000 milliseconds; it defines how long the aimbot waits before targeting another enemy.
- Windows 7 users can now inject the hack into the game.

Change Log 19/12/16:
- Added ability to remove 2D boxes.

Do you want to donate? You can by clicking here!


International Law in the Age of Trump: A Post-Human Rights …

Posted: December 21, 2016 at 6:40 pm

The Trump presidency will have a significant impact on international law, including a potential withdrawal from or re-negotiation of the Paris Agreement on Climate Change and the Iran nuclear deal. Although those two examples would pit the United States against much of the rest of the world, in other respects Trump’s election is consistent with ongoing global changes. To take a well-known example, Trump’s opposition to NAFTA appears to align with world-wide populism and hostility to trade agreements, as illustrated by Brexit.

Trump’s election is also consistent with other trends in international law. As I argued before the election, we are in the midst of a world-wide decline in international human rights and a related rise in the power of China and Russia over the content of international law, a theme discussed last week by Anne Peters here. Liberal intervention on behalf of human rights, opposed by China and Russia, would almost certainly have received a boost from a Hillary Clinton administration. Although it is difficult to predict what direction the new administration will take, it is likely that the U.S. will expend little energy on promoting the international legal protection of human rights (putting aside here international humanitarian law, the law of armed conflict, and other related areas of international law).

We are, in other words, probably already in the post-human rights era of international law, meaning that the enforcement and expansion of human rights through binding international law will decline. Fortunately, thanks in part to the historic successes of the human rights movement, there are many other ways to advance the cause of human rights, including regional human rights institutions, soft international norms (such as the historic Helsinki Accords), and domestic or transnational political reform and activism. Promoting civil liberties and human rights at home and abroad should be an important objective in the coming years, all the more so with Trump as President, but perhaps not through the enforcement of binding international law.

The Trump administration should use the post-human rights era as an opportunity to promote a different international law agenda: a strong core of international law dedicated to protecting international peace and security. The pursuit of human rights by the West through international law has weakened other norms of international law. Kosovo is an illustration. President Clinton’s 1999 humanitarian intervention in Kosovo lacked the authorization of the U.N. Security Council and violated international law; the intervention ultimately led to the creation of the new state of Kosovo over the bitter opposition of Russia and Serbia. The Kosovo precedent was then used by Russia to support the right to self-determination for South Ossetia and Crimea. More broadly, doctrinal innovations like universal jurisdiction and the lifting of immunity for human rights violations can generate regional tensions and disagreements.

Quite simply, the West has lost its bid to promote human rights as politically neutral standards binding upon all nations as a matter of international law. That effort foundered most visibly on the shoals of selective, coercive enforcement, including in Iraq, but also in the use of force to effectuate regime change in Libya and the limited effectiveness of the Human Rights Council. A turn away from using international law to promote human rights, whether or not the first-best choice in an ideal world, creates an opportunity to strengthen other vitally important norms of international law.

Political science research (examples here and here) tells us that border and territorial disputes have historically been especially likely to lead to militarized armed conflict and to war. Indeed, the long peace may be as much a territorial peace as it is a democratic peace. Accordingly, a priority under the new administration should be to strengthen international legal rules that may reduce conflict over territory and borders, such as Article 2(4) of the U.N. Charter. Territorial conquests declined during the 20th century as the international rule limiting the use of force hardened. The norm began to emerge after World War I, as reflected in the Covenant of the League of Nations and in the mandate systems of the interwar period, which replaced the traditional system of simply awarding territory (including colonies) to the victorious states. The hopes of territorial conquest by (and the scope of territorial disagreements between) the Russian, Qing, Ottoman, Austro-Hungarian, and Japanese Empires at the beginning of the 20th century vividly illustrate how international law’s permissive posture toward violent territorial acquisition led to conflict and war. The prohibition on the use of force for territorial conquest was strengthened in the U.N. Charter and became the cornerstone of the post-World War II international legal order. Geopolitically, concern about territorial and border disputes today means we need to remain focused on the South China Sea, Ukraine/Eastern Europe, and the Turkish/Syrian/greater Kurdistan border as especially potent threats to international peace and security (as well as to other U.S. interests).

Institutionally, we should seek to return in some respects to the immediate post-World War II settlement, with the U.N. Security Council focused on protecting international peace and security. For better or for worse, recent global developments, including the deployment of Russian military power and Russia’s growing alliance with China, have put the Russian-Chinese-U.S. relationship at the center of global importance when it comes to international law and to international peace and security. The veto-wielding members of the United Nations Security Council may not be broadly representative of the world’s countries, but the growing importance of the relationships among those five countries gives the Security Council a renewed significance. It is an important forum for the advancement of U.S. medium- and long-term interests. Turning our back on the United Nations would be a mistake.

During the Trump Administration, the United States and the world will need to focus on protecting civil liberties, the rights of minorities, free speech, and other rights from violation by individuals’ own governments. Thanks in part to the international human rights movement and to generations of activists, today we have a variety of legal tools to help us do so. But the enforcement of binding norms of international law through the United Nations or foreign domestic courts may not always be an effective means of doing so, especially in light of today’s political realities. In a post-human rights era, binding norms of international law are often better used to pursue other objectives, such as the maintenance of international peace and security.


What is Cryptocurrency: Everything You Need To Know [Ultimate …

Posted: December 16, 2016 at 11:57 am

What is cryptocurrency: 21st-century unicorn or the money of the future?

This introduction explains the most important things about cryptocurrencies. After you’ve read it, you’ll know more about them than most other humans.

Today cryptocurrencies have become a global phenomenon known to most people. While still somewhat geeky and not understood by most people, banks, governments, and many companies are aware of their importance.

In 2016, you’ll have a hard time finding a major bank, a big accounting firm, a prominent software company, or a government that did not research cryptocurrencies, publish a paper about them, or start a so-called blockchain project.

“Virtual currencies, perhaps most notably Bitcoin, have captured the imagination of some, struck fear among others, and confused the heck out of the rest of us.” (Thomas Carper, US Senator)

But beyond the noise and the press releases, the overwhelming majority of people, even bankers, consultants, scientists, and developers, have very limited knowledge about cryptocurrencies. They often fail to understand even the basic concepts.

So let’s walk through the whole story. What are cryptocurrencies?

Where did cryptocurrency originate?

Why should you learn about cryptocurrency?

And what do you need to know about cryptocurrency?

Few people know it, but cryptocurrencies emerged as a side product of another invention. Satoshi Nakamoto, the unknown inventor of Bitcoin, the first and still most important cryptocurrency, never intended to invent a currency.

In his announcement of Bitcoin in late 2008, Satoshi said he had developed “A Peer-to-Peer Electronic Cash System.”

His goal was to invent something many people had failed to create before: digital cash.

The single most important part of Satoshi’s invention was that he found a way to build a decentralized digital cash system. In the nineties, there were many attempts to create digital money, but they all failed.

After seeing all the centralized attempts fail, Satoshi tried to build a digital cash system without a central entity. Like a Peer-to-Peer network for file sharing.

This decision became the birth of cryptocurrency: the missing piece Satoshi found to realize digital cash. The reason why is a bit technical and complex, but if you get it, you’ll know more about cryptocurrencies than most people do. So, let’s try to make it as easy as possible:

To realize digital cash you need a payment network with accounts, balances, and transactions. That’s easy to understand. One major problem every payment network has to solve is preventing so-called double spending: one entity spending the same amount twice. Usually, this is done by a central server that keeps a record of the balances.

In a decentralized network, you don’t have this server. So you need every single entity of the network to do this job. Every peer in the network needs to have a list of all transactions to check whether future transactions are valid or an attempt to double spend.
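To make that concrete, here is a minimal sketch, in plain Python with made-up names, of the balance check each peer can run against its own copy of the transaction history:

```python
# Toy ledger: balances are entries in a dict, and a transfer is applied
# only if the sender actually has the funds - the double-spend check.
class Ledger:
    def __init__(self):
        self.balances = {}  # account name -> amount

    def apply(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("invalid transaction: attempted double spend")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = Ledger()
ledger.balances["Bob"] = 10
ledger.apply("Bob", "Alice", 7)      # fine: Bob has 10
try:
    ledger.apply("Bob", "Carol", 7)  # rejected: Bob only has 3 left
except ValueError as err:
    print(err)
```

The hard part, as the next paragraphs explain, is not this check itself but getting every peer to agree on the same history of transactions.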

But how can these entities keep a consensus about these records?

If the peers of the network disagree about even one single, minor balance, everything is broken. They need an absolute consensus. Usually, you would, again, use a central authority to declare the correct state of balances. But how can you achieve consensus without a central authority?

Nobody knew how until Satoshi emerged out of nowhere. In fact, nobody believed it was even possible.

Satoshi proved it was. His major innovation was to achieve consensus without a central authority. Cryptocurrencies are a part of this solution: the part that made the solution thrilling and fascinating and helped it roll over the world.


If you take away all the noise around cryptocurrencies and reduce them to a simple definition, you find them to be just limited entries in a database no one can change without fulfilling specific conditions. This may seem ordinary, but, believe it or not, this is exactly how you can define a currency.

Take the money in your bank account: what is it more than entries in a database that can only be changed under specific conditions? You can even take physical coins and notes: what are they other than limited entries in a public physical database that can only be changed if you match the condition that you physically own the coins and notes? Money is all about a verified entry in some kind of database of accounts, balances, and transactions.

How miners create coins and confirm transactions

Let’s have a look at the mechanism ruling the databases of cryptocurrencies. A cryptocurrency like Bitcoin consists of a network of peers. Every peer has a record of the complete history of all transactions and thus of the balance of every account.

A transaction is a file that says, “Bob gives X Bitcoin to Alice,” and is signed by Bob’s private key. It’s basic public-key cryptography, nothing special at all. Once signed, a transaction is broadcast in the network, sent from one peer to every other peer. This is basic p2p technology. Nothing special at all, again.
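As a rough illustration of that signing step, here is a sketch using the third-party Python `ecdsa` package (my choice of library, not something from the original article; real Bitcoin transactions are binary structures rather than text, but the signature idea is the same):

```python
# pip install ecdsa
from ecdsa import SigningKey, SECP256k1, BadSignatureError

sk = SigningKey.generate(curve=SECP256k1)  # Bob's private key
vk = sk.get_verifying_key()                # Bob's public key, shared with peers

tx = b"Bob gives 1 BTC to Alice"
signature = sk.sign(tx)                    # only Bob can produce this

vk.verify(signature, tx)                   # any peer can check it: True
try:
    vk.verify(signature, b"Bob gives 99 BTC to Alice")
except BadSignatureError:
    print("tampered transaction rejected")
```

The point is that every peer can verify who authorized a transfer without trusting anyone: the signature binds the message to the key holder.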

The transaction is known almost immediately by the whole network. But only after a specific amount of time does it get confirmed.

Confirmation is a critical concept in cryptocurrencies. You could say that cryptocurrencies are all about confirmation.

As long as a transaction is unconfirmed, it is pending and can be forged. When a transaction is confirmed, it is set in stone. It is no longer forgeable, it can’t be reversed, it is part of an immutable record of historical transactions: the so-called blockchain.

Only miners can confirm transactions. This is their job in a cryptocurrency network. They take transactions, stamp them as legit, and spread them in the network. After a transaction is confirmed by a miner, every node has to add it to its database. It has become part of the blockchain.

For this job, the miners get rewarded with a token of the cryptocurrency, for example with Bitcoins. Since the miner’s activity is the single most important part of the cryptocurrency system, we should pause for a moment and take a deeper look at it.

In principle, everybody can be a miner. Since a decentralized network has no authority to delegate this task, a cryptocurrency needs some kind of mechanism to prevent one ruling party from abusing it. Imagine someone creates thousands of peers and spreads forged transactions. The system would break immediately.

So, Satoshi set the rule that the miners need to invest some work of their computers to qualify for this task. In fact, they have to find a hash, a product of a cryptographic function, that connects the new block with its predecessor. This is called Proof-of-Work. In Bitcoin, it is based on the SHA-256 hash algorithm.


You don’t need to understand the details of SHA-256. It’s only important to know that it can be the basis of a cryptologic puzzle the miners compete to solve. After finding a solution, a miner can build a block and add it to the blockchain. As an incentive, he has the right to add a so-called coinbase transaction that gives him a specific number of Bitcoins. This is the only way to create valid Bitcoins.

Bitcoins can only be created if miners solve a cryptographic puzzle. Since the difficulty of this puzzle increases with the amount of computing power all miners together invest, there is only a specific number of cryptocurrency tokens that can be created in a given amount of time. This is part of the consensus no peer in the network can break.
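To give a feel for the puzzle, here is a deliberately simplified Python sketch of Proof-of-Work. The difficulty rule here (leading zero hex digits) is a toy assumption; actual Bitcoin mining double-hashes an 80-byte block header against a numeric target, but the principle is the same:

```python
import hashlib

def mine(block_header: bytes, difficulty_prefix: str = "0000") -> int:
    """Try nonces until SHA-256(header + nonce) starts with enough zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + str(nonce).encode()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce  # proof that work was done
        nonce += 1

header = b"prev_block_hash|transactions_root|timestamp"  # stand-in header
nonce = mine(header)
print(nonce, hashlib.sha256(header + str(nonce).encode()).hexdigest())
```

Requiring one more leading zero digit makes the search roughly 16 times longer on average, which is how difficulty can scale with the network’s total computing power.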

If you really think about it, Bitcoin, as a decentralized network of peers that keep a consensus about accounts and balances, is more of a currency than the numbers you see in your bank account. What are those numbers more than entries in a database, a database that can be changed by people you don’t see and by rules you don’t know?

“It is that narrative of human development under which we now have other fights to fight, and I would say in the realm of Bitcoin it is mainly the separation of money and state.” (Erik Voorhees, cryptocurrency entrepreneur)

Basically, cryptocurrencies are entries about tokens in decentralized consensus databases. They are called CRYPTOcurrencies because the consensus-keeping process is secured by strong cryptography. Cryptocurrencies are built on cryptography. They are not secured by people or by trust, but by math. It is more probable that an asteroid falls on your house than that a Bitcoin address is compromised.

To describe the properties of cryptocurrencies, we need to distinguish between transactional and monetary properties. While most cryptocurrencies share a common set of properties, these are not carved in stone.

Transactional properties:

1.) Irreversible: After confirmation, a transaction can’t be reversed. By nobody. And nobody means nobody. Not you, not your bank, not the president of the United States, not Satoshi, not your miner. Nobody. If you send money, you send it. Period. No one can help you if you sent your funds to a scammer or if a hacker stole them from your computer. There is no safety net.

2.) Pseudonymous: Neither transactions nor accounts are connected to real-world identities. You receive Bitcoins on so-called addresses, which are seemingly random chains of around 30 characters (see the sketch after this list for where those strings come from). While it is usually possible to analyze the transaction flow, it is not necessarily possible to connect the real-world identity of users with those addresses.

3.) Fast and global: Transactions are propagated nearly instantly in the network and are confirmed in a couple of minutes. Since they happen in a global network of computers, they are completely indifferent to your physical location. It doesn’t matter if I send Bitcoin to my neighbour or to someone on the other side of the world.

4.) Secure: Cryptocurrency funds are locked in a public-key cryptography system. Only the owner of the private key can send cryptocurrency. Strong cryptography and the magic of big numbers make it practically impossible to break this scheme. A Bitcoin address is more secure than Fort Knox.

5.) Permissionless: You don’t have to ask anybody to use cryptocurrency. It’s just software that everybody can download for free. After you’ve installed it, you can receive and send Bitcoins or other cryptocurrencies. No one can prevent you. There is no gatekeeper.
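For the curious, here is a rough sketch of how a Bitcoin-style address (the roughly 30-character string from property 2 above) can be derived from a public key. This is my illustration of the classic pay-to-public-key-hash scheme, not something the article spells out; it assumes the third-party `base58` package and an OpenSSL build that exposes RIPEMD-160:

```python
import hashlib
import base58  # pip install base58

def address_from_pubkey(pubkey: bytes) -> str:
    sha = hashlib.sha256(pubkey).digest()
    ripe = hashlib.new("ripemd160", sha).digest()   # 20-byte public key hash
    versioned = b"\x00" + ripe                      # 0x00 = mainnet prefix
    check = hashlib.sha256(hashlib.sha256(versioned).digest()).digest()[:4]
    return base58.b58encode(versioned + check).decode()

# A dummy 33-byte compressed public key, purely for illustration:
print(address_from_pubkey(b"\x02" + b"\x11" * 32))
```

The double hash plus checksum is why addresses look random yet reject most typos, and why an address reveals nothing about its owner’s identity.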

Monetary properties:

1.) Controlled supply: Most cryptocurrencies limit the supply of their tokens. In Bitcoin, the rate of new supply decreases over time, and issuance will reach its final total somewhere around 2140. All cryptocurrencies control the supply of the token by a schedule written in the code. This means the monetary supply of a cryptocurrency at every given moment in the future can roughly be calculated today (see the sketch after this list). There is no surprise.

2.) No debt but bearer: The fiat money in your bank account is created by debt, and the numbers you see on your ledger represent nothing but debts. It’s a system of IOUs. Cryptocurrencies don’t represent debts. They just represent themselves. They are money as hard as coins of gold.
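As an example of such a schedule written in code, Bitcoin’s block subsidy starts at 50 BTC and halves every 210,000 blocks; a few lines of Python reproduce the famous just-under-21-million total:

```python
subsidy = 50 * 100_000_000  # initial block reward in satoshis (1 BTC = 10^8)
total = 0
while subsidy > 0:
    total += 210_000 * subsidy  # blocks per halving period times the reward
    subsidy //= 2               # the "halving" written into the code
print(total / 1e8)  # 20999999.9769 BTC: the cap follows from the schedule
```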

To understand the revolutionary impact of cryptocurrencies you need to consider both sets of properties. Bitcoin as a permissionless, irreversible, and pseudonymous means of payment is an attack on the control of banks and governments over the monetary transactions of their citizens. You can’t hinder someone from using Bitcoin, you can’t prohibit someone from accepting a payment, you can’t undo a transaction.

As money with a limited, controlled supply that is not changeable by a government, a bank, or any other central institution, cryptocurrencies attack the scope of monetary policy. They take away the control central banks exert over inflation or deflation by manipulating the monetary supply.

“While it’s still fairly new and unstable relative to the gold standard, cryptocurrency is definitely gaining traction and will most certainly have more normalized uses in the next few years. Right now, in particular, it’s increasing in popularity with the post-election market uncertainty. The key will be in making it easy for large-scale adoption (as with anything involving crypto) including developing safeguards and protections for buyers/investors. I expect that within two years, we’ll be in a place where people can shove their money under the virtual mattress through cryptocurrency, and they’ll know that wherever they go, that money will be there.” (Sarah Granger, author and speaker)

Mostly due to these revolutionary properties, cryptocurrencies have become a success their inventor, Satoshi Nakamoto, didn’t dare to dream of. While every other attempt to create a digital cash system failed to attract a critical mass of users, Bitcoin had something that provoked enthusiasm and fascination. Sometimes it feels more like religion than technology.

Cryptocurrencies are digital gold: sound money that is secure from political influence, money that promises to preserve and increase its value over time. Cryptocurrencies are also a fast and comfortable means of payment with a worldwide scope, and they are private and anonymous enough to serve as a means of payment for black markets and any other outlawed economic activity.


Ron Paul Lashes Out At WaPo’s Witch Hunt: "Expect Such …

Posted: December 2, 2016 at 12:20 pm

Washington Post Peddles Tarring of Ron Paul Institute as Russian Propaganda, via The Ron Paul Institute for Peace & Prosperity.

The Washington Post has a history of misrepresenting Ron Paul’s views. Last year the supposed newspaper of record ran a feature article by David A. Fahrenthold in which Fahrenthold grossly mischaracterized Paul as an advocate for calamity, oppression, and poverty, the opposite of the goals Paul routinely expresses and, indeed, expressed clearly in a speech at the event on which Fahrenthold’s article purported to report. Such fraudulent attacks on the prominent advocate for liberty and a noninterventionist foreign policy fall in line with the newspaper’s agenda. As Future of Freedom Foundation President Jacob G. Hornberger put it in a February editorial, the Post’s agenda is guided by the interventionist mindset that undergirds the mainstream media.

On Thursday, the Post published a new article by Craig Timberg complaining of a flood of so-called fake news supported by a sophisticated Russian propaganda campaign that created and spread misleading articles online with the goal of punishing Democrat Hillary Clinton, helping Republican Donald Trump, and undermining faith in American democracy. To advance this conclusion, Timberg points to PropOrNot, an organization of anonymous individuals formed this year, as having identified more than 200 websites as routine peddlers of Russian propaganda during the election season. Look at the PropOrNot list: there is the Ron Paul Institute for Peace and Prosperity’s (RPI) website, RonPaulInstitute.org, listed among websites termed Russian propaganda outlets.

What you will not find on the PropOrNot website is any particularized analysis of why the RPI website, or any website for that matter, is included on the list. Instead, you will see only sweeping generalizations from an anonymous organization. Even the very popular website drudgereport.com makes the list. While the listed websites span the gamut of political ideas, they tend to share in common an independence from the mainstream media.

Timberg’s article can be seen as yet another big media attempt to shift the blame for Democratic presidential nominee Hillary Clinton’s loss of the presidential election away from Clinton, her campaign, and the Democratic National Committee (DNC) that undermined Sen. Bernie Sanders’ (I-VT) challenge to Clinton in the Democratic primary.

The article may also be seen as another step in the effort to deter people from looking to alternative sources of information by labeling those information sources as traitorous or near-traitorous.

At the same time, the article may be seen as playing a role in the ongoing push to increase tensions between the United States and Russia, a result that benefits people, including those involved in the military-industrial complex, who profit from the growth of US national security activity in America and overseas.

This is not the first time Ron Paul and his institute have been attacked for sounding pro-Russian or anti-American. Such attacks have been advanced even by self-proclaimed libertarians.

Expect that such attacks will continue. They are an effort to tar Paul and his institute so people will close themselves off from the information Paul and RPI provide each day in furtherance of the institute’s mission to continue and expand Paul’s lifetime of public advocacy for a peaceful foreign policy and the protection of civil liberties at home. While peace and liberty will benefit most people, powerful interests seek to prevent the realization of these objectives. Indeed, expect attacks against RPI to escalate as the institute continues to reach growing numbers of people with its educational effort.


Word Games: What the NSA Means by Targeted Surveillance …

Posted: November 29, 2016 at 1:22 am

We all know that the NSA uses word games to hide and downplay its activities. Words like “collect,” “conversations,” “communications,” and even “surveillance” have suffered tortured definitions that create confusion rather than clarity.

There’s another one to watch: “targeted” vs. “mass” surveillance.

Since 2008, the NSA has seized tens of billions of Internet communications. It uses the Upstream and PRISM programs, which the government claims are authorized under Section 702 of the FISA Amendments Act, to collect hundreds of millions of those communications each year. The scope is breathtaking, including the ongoing seizure and searching of communications flowing through key Internet backbone junctures,[1] the searching of communications held by service providers like Google and Facebook, and, according to the government’s own investigators, the retention of significantly more than 250 million Internet communications per year.[2]

Yet somehow, the NSA and its defenders still try to pass 702 surveillance off as “targeted surveillance,” asserting that it is incorrect when EFF and many others call it “mass surveillance.”

Our answer: if “mass surveillance” includes the collection of the content of hundreds of millions of communications annually and the real-time search of billions more, then the PRISM and Upstream programs under Section 702 fully satisfy that definition.

This word game is important because Section 702 is set to expire in December 2017. EFF and our colleagues who banded together to stop the Section 215 telephone records surveillance are gathering our strength for this next step in reining in the NSA. At the same time, the government spin doctors are trying to avoid careful examination by convincing Congress and the American people that this is just “targeted” surveillance and doesn’t impact innocent people.

PRISM and Upstream surveillance are two types of surveillance that the government admits that it conducts under Section 702 of the FISA Amendments Act, passed in 2008. Each kind of surveillance gives the U.S. government access to vast quantities of Internet communications.[3]

Upstream gives the NSA access to communications flowing through the fiber-optic Internet backbone cables within the United States.[4] This happens because the NSA, with the help of telecommunications companies like AT&T, makes wholesale copies of the communications streams passing through certain fiber-optic backbone cables. Upstream is at issue in EFF’s Jewel v. NSA case.

PRISM gives the government access to communications in the possession of third-party Internet service providers, such as Google, Yahoo, or Facebook. Less is known about how PRISM actually works, something Congress should shine some light on between now and December 2017.[5]

Note that those two programs existed prior to 2008; they were just done under a shifting set of legal theories and authorities.[6] EFF has had evidence of the Upstream program from whistleblower Mark Klein since 2006, and we have been suing to stop it ever since.

Despite government claims to the contrary, here’s why PRISM and Upstream are “mass surveillance”:

(1) Breadth of acquisition: First, the scope of collection under both PRISM and Upstream surveillance is exceedingly broad. The NSA acquires hundreds of millions, if not billions, of communications under these programs annually.[7] Although, in the U.S. government’s view, the programs are nominally “targeted,” that targeting sweeps so broadly that the communications of innocent third parties are inevitably and intentionally vacuumed up in the process. For example, a review of a “large cache of intercepted conversations” provided by Edward Snowden and analyzed by the Washington Post revealed that 9 out of 10 account holders “were not the intended surveillance targets but were caught in a net the agency had cast for somebody else.”[8] The material reviewed by the Post consisted of 160,000 intercepted e-mail and instant message conversations, 7,900 documents (including “medical records sent from one family member to another, resumes from job hunters and academic transcripts of schoolchildren”), and more than 5,000 private photos.[9] In all, the cache revealed the “daily lives of more than 10,000 account holders who were not targeted [but were] catalogued and recorded nevertheless.”[10] The Post estimated that, at the U.S. government’s annual rate of “targeting,” collection under Section 702 would encompass more than 900,000 user accounts annually. By any definition, this is “mass surveillance.”

(2) Indiscriminate full-content searching. Second, in the course of accomplishing its so-called “targeted” Upstream surveillance, the U.S. government, in part through its agent AT&T, indiscriminately searches the contents of billions of Internet communications as they flow through the nation’s domestic, fiber-optic Internet backbone. This type of surveillance, known as “about surveillance,” involves the NSA’s retention of communications that are neither to nor from a target of surveillance; rather, it authorizes the NSA to obtain any communications “about” the target.[11] Even if the acquisition of communications containing information “about” a surveillance target could, somehow, still be considered “targeted,” the method for accomplishing that surveillance cannot be: “about” surveillance entails a content search of all, or substantially all, international Internet communications transiting the United States.[12] Again, by any definition, Upstream surveillance is “mass surveillance.” For PRISM, while less is known, it seems the government is able to search through, or require companies like Google and Facebook to search through, all the customer data stored by the corporations for communications to or from its targets.

To accomplish Upstream surveillance, the NSA copies (or has its agents like AT&T copy) Internet traffic as it flows through the fiber-optic backbone. This copying, even if the messages are only retained briefly, matters under the law. Under U.S. constitutional law, when the federal government “meaningfully interferes” with an individual’s protected communications, those communications have been “seized” for purposes of the U.S. Constitution’s Fourth Amendment. Thus, when the U.S. government copies (or has copied) communications wholesale and diverts them for searching, it has “seized” those communications under the Fourth Amendment.

Similarly, U.S. wiretapping law triggers a wiretap at the point of “interception by a device,” which occurs when the Upstream mechanisms gain access to our communications.[13]

Why does the government insist that it’s “targeted”? For Upstream, it may be because the initial collection and searching of the communications, done by service providers like AT&T on the government’s behalf, is really, really fast and much of the information initially collected is then quickly disposed of. In this way the Upstream collection is unlike the telephone records collection, where the NSA kept all of the records it seized for years. Yet this difference should not change the conclusion that the surveillance is “mass surveillance.” First, all communications flowing through the collection points upstream are seized and searched, including content and metadata. Second, as noted above, the amount of information retained, over 250 million Internet communications per year, is astonishing.

Thus, regardless of the time spent, the seizure and search are comprehensive and invasive. Using advanced computers, the NSA and its agents can do a full-text content search, within the blink of an eye, through billions, if not trillions, of your communications, including emails, social media, and web searches. And, as demonstrated above, the government retains a huge amount of the communications, far more about innocent people than about its targets, so even based on what is retained the surveillance is better described as “mass” rather than “targeted.”

So it is completely correct to characterize Section 702 as mass surveillance. That characterization stems from the confluence of: (1) the method the NSA employs to accomplish its surveillance, particularly Upstream, and (2) the breadth of that surveillance.

Next time you see the government or its supporters claim that PRISM and Upstream are “targeted” surveillance programs, you’ll know better.

[1] See, e.g., Charlie Savage, NSA Said to Search Content of Messages to and From U.S., N.Y. Times (Aug. 8, 2013) (“The National Security Agency is searching the contents of vast amounts of Americans’ e-mail and text communications into and out of the country[.]”). This article describes an NSA practice known as “about” surveillance, a practice that involves searching the contents of communications as they flow through the nation’s fiber-optic Internet backbone.

[2] FISA Court Opinion by Judge Bates entitled [Caption Redacted], at 29 (“NSA acquires more than two hundred fifty million Internet communications each year pursuant to Section 702”), https://www.eff.org/document/october-3-2011-fisc-opinion-holding-nsa-surveillance-unconstitutional (hereinafter, Bates Opinion). According to the PCLOB report, the current number is significantly higher than 250 million communications. PCLOB Report on 702 at 116.

[3] Bates Opinion at 29; PCLOB at 116.

[6] First, the Bush Administration relied solely on broad claims of Executive power, grounded in secret legal interpretations written by the Department of Justice. Many of those interpretations were subsequently abandoned by later Bush Administration officials. Beginning in 2006, DOJ was able to turn to the Foreign Intelligence Surveillance Court to sign off on its surveillance programs. In 2007, Congress finally stepped into the game, passing the Protect America Act, which, a year later, was substantially overhauled and passed again as the FISA Amendments Act. While neither of those statutes mentions the breadth of the surveillance, and it was not discussed publicly during the Congressional processes, both have been cited by the government as authorizing it.

[11] Bates Opinion at 15.

[12] PCLOB report at 119-120.

[13] See 18 U.S.C. § 2511(1)(a); U.S. v. Councilman, 418 F.3d 67, 70-71, 79 (1st Cir. 2005) (en banc).


2 senior officials ask for head of NSA to be replaced …

Posted: November 25, 2016 at 10:09 am

The recommendation by Defense Secretary Ash Carter and Director of National Intelligence James Clapper was made last month, according to The Washington Post, which first reported the recommendation.

The replacement of such a senior person would be unprecedented at a time when the US intelligence community has repeatedly warned about the threat of cyberattacks.

A major reason for their recommendation is the belief that Rogers was not working fast enough on a critical reorganization to address the cyber threat. The Obama administration has wanted to keep the NSA dealing with signals intelligence as a civilian-led agency, with a separate Cyber Command remaining under the military, the official told CNN.

Right now, one man, Rogers, heads both. He took over as head of the NSA and Cyber Command in April 2014.

The official said the initial plan was to announce the reorganization and that given the shift of personnel, Rogers would be thanked for his service and then move on.

Another issue — but not the sole driving factor in removing Rogers, according to the source — is a continuing concern about security.

Harold Martin, a former contractor for Booz Allen who was working at the NSA, has been charged and is being held without bail after allegedly stealing a large amount of classified information. Prosecutors allege he stole the names of “numerous” covert US agents. He was arrested in August after federal authorities uncovered what they have described as mountains of highly classified intelligence in his car, home and shed, which they said had been accumulated over many years.

Martin’s motivation remains unclear, and federal authorities have not alleged that he gave or sold the information to anyone.

Separately, this comes as Rogers is one of those under consideration by President-elect Donald Trump to be the next director of national intelligence, CNN has previously reported. Rogers went on a private trip on Thursday to meet with Trump, a trip that took many administration officials by surprise.

Some officials also have complained about Rogers’ leadership style, according to the Post.

The Pentagon declined to comment, as did a spokesman for the director of national intelligence. The NSA did not return a request for comment.

The idea for dividing NSA’s efforts has been in the works for a while.

“So we had them both in the same location and able to work with one another. That has worked very well, but it’s not necessarily going to — the right approach to those missions overall in the long run. And we need to look at that and it’s not just a matter of NSA and CYBERCOM,” Carter told a tech industry group in September.

CNN’s Jim Sciutto contributed to this report.


A Post-Human World Is Coming. Design Has Never Mattered …

Posted: November 21, 2016 at 10:55 am


Futurist experts have estimated that by the year 2030 computers in the price range of inexpensive laptops will have a computational power that is equivalent to human intelligence. The implications of this change will be dramatic and revolutionary, presenting significant opportunities and challenges to designers. Already machines can process spoken language, recognize human faces, detect our emotions, and target us with highly personalized media content. While technology has tremendous potential to empower humans, soon it will also be used to make them thoroughly obsolete in the workplace, whether by replacing, displacing, or surveilling them. More than ever designers need to look beyond human intelligence and consider the effects of their practice on the world and on what it means to be human.

The question of how to design a secure human future is complicated by the uncertainties of predicting that future. As it is practiced today, design is strategically positioned to improve the usefulness and quality of human interactions with technology. Like all human endeavors, however, the practice of design risks marginalization if it is unable to evolve. When envisioning the future of design, our social and psychological frames of reference unavoidably and unconsciously bias our interpretation of the world. People systematically underestimate exponential trends such as Moore’s law, for example, which tells us that in 10 years we will have 32 times more total computing power than today (doubling every two years gives 2^5 = 32 over a decade). Indeed, as computer scientist Ray Kurzweil observes, “We won’t experience 100 years of technological advances in the 21st century; we will witness on the order of 20,000 years of progress (again when measured by today’s rate of progress), or about 1,000 times greater than what was achieved in the 20th century.”

Design-oriented research provides a possible means to anticipate and guide rapid changes, as design, predicated as it is on envisioning alternatives through “collective imagining,” is inherently more future-oriented than other fields. It therefore seems reasonable to ask how technology-design efforts might focus more effectively on enabling human-oriented systems that extend beyond design for humanity. In other words, is it possible to design intelligent systems that safely design themselves?

Imagine a future scenario in which extremely powerful computerized minds are simulated and shared across autonomous virtual or robotic bodies. Given the malleable nature of such super-intelligences (they won’t be limited by the hardwiring of DNA information), one can reasonably assume that they will be free of the limitations of a single material body, or the experience of a single lifetime, allowing them to tinker with their own genetic code, integrate survival knowledge directly from the learnings of others, and develop a radical new form of digital evolution that modifies itself through nearly instantaneous exponential cycles of imitation and learning, and passes on its adaptations to successive generations of “self.”

In such a post-human future, the simulation of alternative histories and futures could be used as a strategic evolutionary tool, allowing imaginary scenarios to be inhabited and played out before individuals or populations commit to actual change. Not only would the lineage of such beings be perpetually enhanced by automation, leading to radical new forms of social relationships and values, but the systems that realize or govern those values would likely become the instinctual mechanism of a synchronized and sentient “techno-cultural mind.”

Bringing such speculative and hypothetical scenarios into cultural awareness is one way that designers can evaluate possibilities and determine how best to proceed. What should designers do to prepare for such futures? What methods should be applied to their research and training?

Today’s interaction designers shape human behavior through investigative research, systemic thinking, creative prototyping, and rapid iteration. Can these same methods be used to address the multitude of longer-term social and ethical issues that designers create? Do previous inventions, such as the internal combustion engine or nuclear power, provide relevant historical lessons to learn from? If little else, reflecting on super-intelligence through the lens of nuclear proliferation and global warming throws light on the existential consequences of poor design. It becomes clear that while systemic thinking and holistic research are useful methods for addressing existential risks, creative prototyping or rapid iteration with nuclear power or the environment as materials is probably unwise. Existential risks do not allow for a second chance to get it right. The only possible course of action when confronted with such challenges is to examine all possible future scenarios and use the best available subjective estimates of objective risk factors.

Simulations can also be leveraged to heighten designers’ awareness of trade-offs. Consider the consequences of contemporary interaction design, for example: intuitive interfaces, systemic experiences, and service economies. When current design methods are applied to designing future systems, each of these patterns can be extended through imagined simulations of post-human design. Intuitive human-computer interfaces become interfaces between post-humans; they become new ways of mediating interdependent personal and cultural values, new social and political systems. Systemic experiences become new kinds of emergent post-human perception and awareness. Service economies become the synapses of tomorrow’s underlying system of techno-cultural values, new moral codes.

The first major triumph of interaction design, the design of the intuitive interface, merged technology with aesthetics. Designers adapted modernism’s static typography and industrial styling and learned to address human factors and usability concerns. Today agile software practices and design thinking ensure the intuitive mediation of human and machine learning. We adapt to the design limitations of technological systems, and they adapt in return based on how we behave. This interplay is embodied by the design of the interface itself, between perception and action, affordance and feedback. As the adaptive intelligence of computer systems grows over time, design practices that emphasize the human aspects of interface design will extend beyond the one-sided human perspective of machine usability toward a reciprocal relationship that values intelligent systems as partners. In light of the rapid evolution of these new forms of artificial and synergetic life, the quality and safety of their mental and physical experiences may ultimately deserve equal if not greater consideration than ours.

Interaction design can also define interconnected networks of interface touch-points and shape them into choose-your-own-adventures of human experience. We live in a world of increasingly seamless integration between Wi-Fi networks and thin clients, between phones, homes, watches, and cars. In the near future, crowdsourcing systems coupled with increasingly pervasive connectivity services and wearable computer interfaces will generate massive stockpiles of data that catalog human behavior to feed increasingly intuitive learning machines. Just as human-centered design crafts structure and experience to shape intuition, post-human-centered design will teach intelligent machine systems to design the hierarchies and compositions of human behavior. New systems will flourish as fluent extensions of our digital selves, facilitating seamless mobility throughout systems of virtual identity and the governance of shared thoughts and emotions.

Applying interaction design to post-human experience requires designers to think holistically beyond the interface, to the protocols and exchanges that unify human and machine minds. Truly systemic post-human-centered designers recognize that such interfaces will ultimately manifest in the psychological fabric of post-human society at much deeper levels of meaning and value. Just as today’s physical products have slid from ownership to on-demand digital services, our very conception of these services will become the new product. In the short term, advances in wearable and ubiquitous computing technology will render our inner dimensions of motivation and self-perception tangible as explicit and actionable cues. Ultimately such manifestations will be totally absorbed by the invisible hand of post-human cognition and emerge as new forms of social and self-engineering. Design interventions at this level will deeply control the post-human psyche, building on research methodologies of experience economics designed for the strategic realization of social and cognitive value. Can a market demand be designed for goodwill toward humans at this stage, or does the long tail of identity realization preclude it? Will we live in a utopian world of socialized techno-egalitarian fulfillment and love or become a eugenic cult of celebrity self-actualization?

It seems unlikely that humans will stem their fascination with technology or stop applying it to improve themselves and their immediate material condition. Tomorrow’s generation faces an explosion of wireless networks, ubiquitous computing, context-aware systems, intelligent machines, smart cars, robots, and strategic modifications to the human genome. While the precise form these changes will take is unclear, recent history suggests that they are likely to be welcomed at first and progressively advanced. It appears reasonable that human intelligence will become obsolete, economic wealth will reside primarily in the hands of super-intelligent machines, and our ability to survive will lie beyond our direct control. Adapting to cope with these changes, without alienating the new forms of intelligence that emerge, requires transcending the limitations of human-centered design. Instead, a new breed of post-human-centered designer is needed to maximize the potential of post-evolutionary life.

This essay was adapted with permission from Digital Design Theory (Princeton Architectural Press, 2016) edited by Helen Armstrong.



High Seas Fleet – Wikipedia

Posted: November 12, 2016 at 5:27 pm

The High Seas Fleet (Hochseeflotte) was the battle fleet of the German Imperial Navy and saw action during the First World War. The formation was created in February 1907, when the Home Fleet (Heimatflotte) was renamed as the High Seas Fleet. Admiral Alfred von Tirpitz was the architect of the fleet; he envisioned a force powerful enough to challenge the Royal Navy’s predominance. Kaiser Wilhelm II, the German Emperor, championed the fleet as the instrument by which he would seize overseas possessions and make Germany a global power. By concentrating a powerful battle fleet in the North Sea while the Royal Navy was required to disperse its forces around the British Empire, Tirpitz believed Germany could achieve a balance of force that could seriously damage British naval hegemony. This was the heart of Tirpitz’s “Risk Theory,” which held that Britain would not challenge Germany if the latter’s fleet posed such a significant threat to its own.

The primary component of the Fleet was its battleships, typically organized in eight-ship squadrons, though it also contained various other formations, including the I Scouting Group. At its creation in 1907, the High Seas Fleet consisted of two squadrons of battleships, and by 1914, a third squadron had been added. The dreadnought revolution in 1906 greatly affected the composition of the fleet; the twenty-four pre-dreadnoughts in the fleet were rendered obsolete and required replacement. Enough dreadnoughts for two full squadrons were completed by the outbreak of war in mid-1914; the eight most modern pre-dreadnoughts were used to constitute a third squadron. Two additional squadrons of older vessels were mobilized at the onset of hostilities, though by the end of the conflict, these formations were disbanded.

The fleet conducted a series of sorties into the North Sea during the war designed to lure out an isolated portion of the numerically superior British Grand Fleet. These operations frequently used the fast battlecruisers of the I Scouting Group to raid the British coast as the bait for the Royal Navy. These operations culminated in the Battle of Jutland, on 31 May–1 June 1916, where the High Seas Fleet confronted the whole of the Grand Fleet. The battle was inconclusive, but the British won strategically, as it convinced Admiral Reinhard Scheer, the German fleet commander, that even a highly favorable outcome to a fleet action would not secure German victory in the war. Scheer and other leading admirals therefore advised the Kaiser to order a resumption of the unrestricted submarine warfare campaign. The primary responsibility of the High Seas Fleet in 1917 and 1918 was to secure the German naval bases in the North Sea for U-boat operations. Nevertheless, the fleet continued to conduct sorties into the North Sea and detached units for special operations in the Baltic Sea against the Russian Baltic Fleet. Following the German defeat in November 1918, the Allies interned the bulk of the High Seas Fleet in Scapa Flow, where it was ultimately scuttled by its crews in June 1919, days before the belligerents signed the Treaty of Versailles.

In 1898, Admiral Alfred von Tirpitz became the State Secretary for the Imperial Navy Office (Reichsmarineamt, RMA);[1] Tirpitz was an ardent supporter of naval expansion. During a speech in support of the First Naval Law on 6 December 1897, Tirpitz stated that the navy was “a question of survival” for Germany.[2] He also viewed Great Britain, with its powerful Royal Navy, as the primary threat to Germany. In a discussion with the Kaiser during his first month in his post as State Secretary, he stated that “for Germany the most dangerous naval enemy at present is England.”[3] Tirpitz theorized that an attacking fleet would require a 33 percent advantage in strength to achieve victory, and so decided that a 2:3 ratio would be required for the German navy. For a final total of 60 German battleships, Britain would be required to build 90 to meet the 2:3 ratio envisioned by Tirpitz.[3]

The Royal Navy had heretofore adhered to the so-called "two-power standard," first formulated in the Naval Defence Act of 1889, which required a larger fleet than those of the next two largest naval powers combined.[4] The crux of Tirpitz's "risk theory" was that by building a fleet to the 2:3 ratio, Germany would be strong enough that even in the event of a British naval victory, the Royal Navy would incur damage so serious as to allow the third-ranked naval power to rise to preeminence. Implicit in Tirpitz's theory was the assumption that the British would adopt an offensive strategy that would allow the Germans to use mines and submarines to even the numerical odds before fighting a decisive battle between Heligoland and the Thames. Tirpitz in fact believed Germany would emerge victorious from a naval struggle with Britain, as he believed Germany to possess superior ships manned by better-trained crews, employing more effective tactics, and led by more capable officers.[3]

In his first program, Tirpitz envisioned a fleet of nineteen battleships, divided into two eight-ship squadrons, one ship as a flagship, and two in reserve. The squadrons were further divided into four-ship divisions. This would be supported by the eight Siegfried- and Odin-class coastal defense ships, six large and eighteen small cruisers, and twelve divisions of torpedo boats, all assigned to the Home Fleet (Heimatflotte).[5] This fleet was secured by the First Naval Law, which passed in the Reichstag on 28 March 1898.[6] Construction of the fleet was to be completed by 1 April 1904. Rising international tensions, particularly as a result of the outbreak of the Boer War in South Africa and the Boxer Rebellion in China, allowed Tirpitz to push through an expanded fleet plan in 1900. The Second Naval Law was passed on 14 June 1900; it doubled the size of the fleet to 38 battleships and 20 large and 38 small cruisers. Tirpitz planned an even larger fleet. As early as September 1899, he had informed the Kaiser that he sought at least 45 battleships, and potentially might secure a third double-squadron, for a total strength of 48 battleships.[7]

During the initial period of German naval expansion, Britain did not feel particularly threatened.[6] The Lords of the Admiralty felt that the fleet outlined by the Second Naval Law was not a significantly more dangerous threat than that set by the First Naval Law; they believed it was more important to focus on the practical situation rather than speculate on future programs that might easily be reduced or cut entirely. Segments of the British public, however, quickly seized on the perceived threat posed by the German construction programs.[8] Despite this dismissive reaction, the Admiralty resolved to surpass German battleship construction. Admiral John Fisher, who became the First Sea Lord and head of the Admiralty in 1904, introduced sweeping reforms in large part to counter the growing threat posed by the expanding German fleet. Training programs were modernized, old and obsolete vessels were discarded, and the scattered squadrons of battleships were consolidated into four main fleets, three of which were based in Europe. Britain also made a series of diplomatic arrangements, including an alliance with Japan that allowed a greater concentration of British battleships in the North Sea.[9]

Fisher’s reforms caused serious problems for Tirpitz’s plans; he counted on a dispersal of British naval forces early in a conflict that would allow Germany’s smaller but more concentrated fleet to achieve a local superiority. Tirpitz could also no longer depend on the higher level of training in both the German officer corps and the enlisted ranks, nor the superiority of the more modern and homogenized German squadrons over the heterogeneous British fleet. In 1904, Britain signed the Entente cordiale with France, Britain’s primary naval rival. The destruction of two Russian fleets during the Russo-Japanese War in 1905 further strengthened Britain’s position, as it removed the second of her two traditional naval rivals.[10] These developments allowed Britain to discard the “two power standard” and focus solely on out-building Germany. In October 1906, Admiral Fisher stated “our only probable enemy is Germany. Germany keeps her whole Fleet always concentrated within a few hours of England. We must therefore keep a Fleet twice as powerful concentrated within a few hours of Germany.”[11]

The most damaging blow to Tirpitz's plan came with the launch of HMS Dreadnought in February 1906. The new battleship, armed with a main battery of ten 12-inch (30 cm) guns, was considerably more powerful than any battleship afloat. Ships capable of battle with Dreadnought would need to be significantly larger than the old pre-dreadnoughts, which increased their cost and necessitated expensive dredging of canals and harbors to accommodate them. The German naval budget was already stretched thin; without new funding, Tirpitz would have to abandon his challenge to Britain.[12] As a result, Tirpitz went before the Reichstag in May 1906 with a request for additional funding. The First Amendment to the Second Naval Law was passed on 19 May and appropriated funding for the new battleships, as well as for the dredging required by their increased size.[6]

The Reichstag passed a second amendment to the Naval Law in March 1908 to provide an additional billion marks to cope with the growing cost of the latest battleships. The law also reduced the service life of all battleships from 25 to 20 years, which allowed Tirpitz to push for the replacement of older vessels earlier. A third and final amendment, passed in May 1912, represented a compromise between Tirpitz and moderates in parliament. The amendment authorized three new battleships and two light cruisers, and called for the High Seas Fleet to be equipped with three squadrons of eight battleships each, one squadron of eight battlecruisers, and eighteen light cruisers. Two eight-ship squadrons would be placed in reserve, along with two armored and twelve light cruisers.[13] By the outbreak of war in August 1914, only one eight-ship squadron of dreadnoughts, the I Battle Squadron, had been assembled, with the Nassau- and Helgoland-class battleships. The second squadron of dreadnoughts, the III Battle Squadron, which included four of the Kaiser-class battleships, was only completed when the four König-class battleships entered service in early 1915.[14] As a result, the third squadron, the II Battle Squadron, remained composed of pre-dreadnoughts through 1916.[15]

Before the 1912 naval law was passed, Britain and Germany attempted to reach a compromise with the Haldane Mission, led by the British War Minister Richard Haldane. The arms reduction mission ended in failure, however, and the 1912 law was announced shortly thereafter. The Germans were aware that, as early as 1911, the Royal Navy had abandoned the idea of a decisive battle with the German fleet in favor of a distant blockade at the entrances to the North Sea, which the British could easily control thanks to their geographical position. There emerged the distinct possibility that the German fleet would be unable to force a battle on its own terms, which would render it militarily useless. When the war came in 1914, the British did in fact adopt this strategy. Coupled with the restrictive orders of the Kaiser, who preferred to keep the fleet intact to be used as a bargaining chip in the peace settlements, this markedly reduced the ability of the High Seas Fleet to affect the military situation.[16]

The German Navy’s pre-war planning held that the British would be compelled to mount either a direct attack on the German coast to defeat the High Seas Fleet, or to put in place a close blockade. Either course of action would permit the Germans to whittle away at the numerical superiority of the Grand Fleet with submarines and torpedo boats. Once a rough equality of forces could be achieved, the High Seas Fleet would be able to attack and destroy the British fleet.[17] Implicit in Tirpitz’s strategy was the assumption that German vessels were better-designed, had better-trained crews, and would be employed with superior tactics. In addition, Tirpitz assumed that Britain would not be able to concentrate its fleet in the North Sea, owing to the demands of its global empire. At the start of a conflict between the two powers, the Germans would therefore be able to attack the Royal Navy with local superiority.[18]

The British, however, did not accommodate Tirpitz’s projections; from his appointment as the First Sea Lord in 1904, Fisher began a major reorganization of the Royal Navy. He concentrated British battleship strength in home waters, launched the Dreadnought revolution, and introduced rigorous training for the fleet personnel.[19] In 1912, the British concluded a joint defense agreement with France that allowed the British to concentrate in the North Sea while the French defended the Mediterranean.[20] Worse still, the British began developing the strategy of the distant blockade of Germany starting in 1904;[21] this removed the ability of German light craft to reduce Britain’s superiority in numbers and essentially invalidated German naval planning before the start of World War I.[22]

The primary base for the High Seas Fleet in the North Sea was Wilhelmshaven on the western side of the Jade Bight; the port of Cuxhaven, located on the mouth of the Elbe, was also a major base in the North Sea. The island of Heligoland provided a fortified forward position in the German Bight.[23] Kiel was the most important base in the Baltic; it supported the forward bases at Pillau and Danzig.[24] The Kaiser Wilhelm Canal through Schleswig-Holstein connected the Baltic and North Seas and allowed the German Navy to quickly shift naval forces between the two seas.[25] In peacetime, all ships on active duty in the High Seas Fleet were stationed in Wilhelmshaven, Kiel, or Danzig.[26] Germany possessed only one major overseas base, at Kiautschou in China,[27] where the East Asia Squadron was stationed.[28]

Steam ships of the period, which burned coal to fire their boilers, were naturally tied to coaling stations in friendly ports. The German Navy lacked sufficient overseas bases for sustained operations, even for single ships operating as commerce raiders.[29] The Navy experimented with a device to transfer coal from colliers to warships while underway in 1907, though the practice was not put into general use.[30] Nevertheless, German capital ships had a cruising range of at least 4,000 nmi (7,400 km; 4,600 mi),[31] more than enough to operate in the Atlantic Ocean.[Note 1]

In 1897, the year Tirpitz came to his position as State Secretary of the Navy Office, the Imperial Navy consisted of a total of around 26,000 officers, petty officers, and enlisted men of various ranks, branches, and positions. By the outbreak of war in 1914, this had increased significantly to about 80,000 officers, petty officers, and men.[35] Capital ships were typically commanded by a Kapitän zur See (Captain at Sea) or Korvettenkapitän (corvette captain).[26] Each of these ships typically had a total crew in excess of 1,000 officers and men;[31] the light cruisers that screened for the fleet had crew sizes between 300 and 550.[36] The fleet torpedo boats had crews of about 80 to 100 officers and men, though some later classes approached 200.[37]

In early 1907, enough battleships of the Braunschweig and Deutschland classes had been constructed to allow for the creation of a second full squadron.[38] On 16 February 1907,[39] Kaiser Wilhelm renamed the Home Fleet the High Seas Fleet. Admiral Prince Heinrich of Prussia, Wilhelm II's brother, became the first commander of the High Seas Fleet; his flagship was SMS Deutschland.[38] While on a peacetime footing, the fleet conducted a routine pattern of training exercises throughout the year, with individual ships, with squadrons, and with the combined fleet. The entire fleet conducted several cruises into the Atlantic Ocean and the Baltic Sea.[40] Prince Heinrich was replaced in late 1909 by Vice Admiral Henning von Holtzendorff, who served until April 1913. Vice Admiral Friedrich von Ingenohl, who would command the High Seas Fleet in the first months of World War I, took command following the departure of Vice Admiral von Holtzendorff.[41] SMS Friedrich der Grosse replaced Deutschland as the fleet flagship on 2 March 1913.[42]

Despite the rising international tensions following the assassination of Archduke Franz Ferdinand on 28 June, the High Seas Fleet began its summer cruise to Norway on 13 July. During the last peacetime cruise of the Imperial Navy, the fleet conducted drills off Skagen before proceeding to the Norwegian fjords on 25 July. The following day the fleet began to steam back to Germany, as a result of Austria-Hungary’s ultimatum to Serbia. On the 27th, the entire fleet assembled off Cape Skudenes before returning to port, where the ships remained at a heightened state of readiness.[42] War between Austria-Hungary and Serbia broke out the following day, and in the span of a week all of the major European powers had joined the conflict.[43]

The High Seas Fleet conducted a number of sweeps and advances into the North Sea. The first occurred on 23 November 1914, though no British forces were encountered. Admiral von Ingenohl, the commander of the High Seas Fleet, adopted a strategy in which the battlecruisers of Rear Admiral Franz von Hipper's I Scouting Group raided British coastal towns to lure out portions of the Grand Fleet where they could be destroyed by the High Seas Fleet.[44] The raid on Scarborough, Hartlepool and Whitby on 15–16 December 1914 was the first such operation.[45] On the evening of 15 December, the German battle fleet of some twelve dreadnoughts and eight pre-dreadnoughts came to within 10 nmi (19 km; 12 mi) of an isolated squadron of six British battleships. However, skirmishes between the rival destroyer screens in the darkness convinced von Ingenohl that he was faced with the entire Grand Fleet. Under orders from the Kaiser to avoid risking the fleet unnecessarily, von Ingenohl broke off the engagement and turned the fleet back toward Germany.[46]

Following the loss of SMS Blücher at the Battle of Dogger Bank in January 1915, the Kaiser removed Admiral von Ingenohl from his post on 2 February. Admiral Hugo von Pohl replaced him as commander of the fleet.[47] Admiral von Pohl conducted a series of fleet advances in 1915; in the first one on 29–30 March, the fleet steamed out to the north of Terschelling and returned without incident. Another followed on 17–18 April, where the fleet covered a mining operation by the II Scouting Group. Three days later, on 21–22 April, the High Seas Fleet advanced towards the Dogger Bank, though again failed to meet any British forces.[48] Another sortie followed on 29–30 May, during which the fleet advanced as far as Schiermonnikoog before being forced to turn back by inclement weather. On 10 August, the fleet steamed to the north of Heligoland to cover the return of the auxiliary cruiser Meteor. A month later, on 11–12 September, the fleet covered another mine-laying operation off the Swarte Bank. The last operation of the year, conducted on 23–24 October, was an advance without result in the direction of Horns Reef.[48]

Vice Admiral Reinhard Scheer became commander in chief of the High Seas Fleet on 18 January 1916 when Admiral von Pohl became too ill to continue in that post.[49] Scheer favored a much more aggressive policy than that of his predecessor, and advocated greater usage of U-boats and zeppelins in coordinated attacks on the Grand Fleet; Scheer received approval from the Kaiser in February 1916 to carry out his intentions.[50] Scheer ordered the fleet on sweeps of the North Sea on 26 March, 2–3 April, and 21–22 April. The battlecruisers conducted another raid on the English coast on 24–25 April, during which the fleet provided distant support.[51] Scheer planned another raid for mid-May, but the battlecruiser Seydlitz had struck a mine during the previous raid and the repair work forced the operation to be pushed back until the end of the month.[52]

Admiral Scheer’s fleet, composed of 16 dreadnoughts, six pre-dreadnoughts, six light cruisers, and 31 torpedo boats departed the Jade early on the morning of 31 May. The fleet sailed in concert with Hipper’s five battlecruisers and supporting cruisers and torpedo boats.[53] The British navy’s Room 40 had intercepted and decrypted German radio traffic containing plans of the operation. The Admiralty ordered the Grand Fleet, totaling some 28 dreadnoughts and 9 battlecruisers, to sortie the night before in order to cut off and destroy the High Seas Fleet.[54]

At 16:00 UTC, the two battlecruiser forces encountered each other and began a running gun fight south, back towards Scheer’s battle fleet.[55] Upon reaching the High Seas Fleet, Vice Admiral David Beatty’s battlecruisers turned back to the north to lure the Germans towards the rapidly approaching Grand Fleet, under the command of Admiral John Jellicoe.[56] During the run to the north, Scheer’s leading ships engaged the Queen Elizabeth-class battleships of the 5th Battle Squadron.[57] By 18:30, the Grand Fleet had arrived on the scene, and was deployed into a position that would cross Scheer’s “T” from the northeast. To extricate his fleet from this precarious position, Scheer ordered a 16-point turn to the south-west.[58] At 18:55, Scheer decided to conduct another 16-point turn to launch an attack on the British fleet.[59]

This maneuver again put Scheer in a dangerous position; Jellicoe had turned his fleet south and again crossed Scheer's "T."[60] A third 16-point turn followed; Hipper's mauled battlecruisers charged the British line to cover the retreat.[61] Scheer then ordered the fleet to adopt the night cruising formation, which was completed by 23:40.[62] A series of ferocious engagements between Scheer's battleships and Jellicoe's destroyer screen ensued, though the Germans managed to punch their way through the destroyers and make for Horns Reef.[63] The High Seas Fleet reached the Jade between 13:00 and 14:45 on 1 June; Scheer ordered the undamaged battleships of the I Battle Squadron to take up defensive positions in the Jade roadstead while the Kaiser-class battleships were to maintain a state of readiness just outside Wilhelmshaven.[64] The High Seas Fleet had sunk more British vessels than the Grand Fleet had sunk German, though Scheer's leading battleships had taken a terrible hammering. Several capital ships, including SMS König, which had been the first vessel in the line, and most of the battlecruisers, were in drydock for extensive repairs for at least two months. On 1 June, the British had twenty-four capital ships in fighting condition, compared to only ten German warships.[65]

By August, enough warships had been repaired to allow Scheer to undertake another fleet operation on 18–19 August. Due to the serious damage incurred by Seydlitz and SMS Derfflinger and the loss of SMS Lützow at Jutland, the only battlecruisers available for the operation were SMS Von der Tann and SMS Moltke, which were joined by SMS Markgraf, SMS Grosser Kurfürst, and the new battleship SMS Bayern.[66] Scheer turned north after receiving a false report from a zeppelin about a British unit in the area.[48] As a result, the planned bombardment was not carried out, and by 14:35, Scheer had been warned of the Grand Fleet's approach and so turned his forces around and retreated to German ports.[67] Another fleet sortie took place on 18–19 October 1916 to attack enemy shipping east of Dogger Bank. Despite being forewarned by signal intelligence, the Grand Fleet did not attempt to intercept. The operation was nevertheless cancelled due to poor weather after the cruiser München was torpedoed by the British submarine HMS E38.[68] The fleet was reorganized on 1 December;[48] the four König-class battleships remained in the III Squadron, along with the newly commissioned Bayern, while the five Kaiser-class ships were transferred to the IV Squadron.[69] In March 1917 the new battleship Baden, built to serve as fleet flagship, entered service;[70] on the 17th, Scheer hauled down his flag from Friedrich der Grosse and transferred it to Baden.[48]

The war, now in its fourth year, was by 1917 taking its toll on the crews of the ships of the High Seas Fleet. Acts of passive resistance, such as the posting of anti-war slogans in the battleships SMS Oldenburg and SMS Posen in January 1917, began to appear.[71] In June and July, the crews began to conduct more active forms of resistance. These activities included work refusals, hunger strikes, and taking unauthorized leave from their ships.[72] The disruptions came to a head in August, when a series of protests, anti-war speeches, and demonstrations resulted in the arrest of dozens of sailors.[73] Scheer ordered the arrest of over 200 men from the battleship Prinzregent Luitpold, the center of the anti-war activities. A series of courts-martial followed, which resulted in 77 guilty verdicts; nine men were sentenced to death for their roles, though only two, Albin Köbis and Max Reichpietsch, were executed.[74]

In early September 1917, following the German conquest of the Russian port of Riga, the German navy decided to eliminate the Russian naval forces that still held the Gulf of Riga. The Navy High Command (Admiralstab) planned an operation, codenamed Operation Albion, to seize the Baltic island of Ösel, and specifically the Russian gun batteries on the Sworbe Peninsula.[75] On 18 September, the order was issued for a joint operation with the army to capture Ösel and Moon Islands; the primary naval component was to comprise the flagship, Moltke, and the III and IV Battle Squadrons of the High Seas Fleet.[76] The operation began on the morning of 12 October, when Moltke and the III Squadron ships engaged Russian positions in Tagga Bay while the IV Squadron shelled Russian gun batteries on the Sworbe Peninsula on Ösel.[77] By 20 October, the fighting on the islands was winding down; Moon, Ösel, and Dagö were in German possession. The previous day, the Admiralstab had ordered the cessation of naval actions and the return of the dreadnoughts to the High Seas Fleet as soon as possible.[78]

Admiral Scheer had used light surface forces to attack British convoys to Norway beginning in late 1917. As a result, the Royal Navy attached a squadron of battleships to protect the convoys, which presented Scheer with the possibility of destroying a detached squadron of the Grand Fleet. The operation called for Hipper's battlecruisers to attack the convoy and its escorts on 23 April while the battleships of the High Seas Fleet stood by in support. On 22 April, the German fleet assembled in the Schillig Roads outside Wilhelmshaven and departed the following morning.[79] Despite the success in reaching the convoy route undetected, the operation failed due to faulty intelligence. Reports from U-boats indicated to Scheer that the convoys sailed at the start and middle of each week, but a west-bound convoy had left Bergen on Tuesday the 22nd and an east-bound group left Methil, Scotland, on the 24th, a Thursday. As a result, there was no convoy for Hipper to attack.[80] Beatty sortied with a force of 31 battleships and four battlecruisers, but was too late to intercept the retreating Germans. The Germans reached their defensive minefields early on 25 April, though approximately 40 nmi (74 km; 46 mi) off Heligoland Moltke was torpedoed by the submarine E42; she successfully returned to port.[81]

A final fleet action was planned for the end of October 1918, days before the Armistice was to take effect. The bulk of the High Seas Fleet was to have sortied from its base in Wilhelmshaven to engage the British Grand Fleet; Scheer, by now the Grand Admiral (Grossadmiral) of the fleet, intended to inflict as much damage as possible on the British navy in order to retain a better bargaining position for Germany, despite the expected casualties. However, many of the war-weary sailors felt the operation would disrupt the peace process and prolong the war.[82] On the morning of 29 October 1918, the order was given to sail from Wilhelmshaven the following day. Starting on the night of 29 October, sailors on Thüringen and then on several other battleships mutinied.[83] The unrest ultimately forced Hipper and Scheer to cancel the operation.[84] When informed of the situation, the Kaiser stated, "I no longer have a navy."[85]

Following the capitulation of Germany in November 1918, most of the High Seas Fleet, under the command of Rear Admiral Ludwig von Reuter, was interned in the British naval base of Scapa Flow.[84] Prior to the departure of the German fleet, Admiral Adolf von Trotha made clear to von Reuter that he could not allow the Allies to seize the ships, under any conditions.[86] The fleet rendezvoused with the British light cruiser Cardiff, which led the ships to the Allied fleet that was to escort the Germans to Scapa Flow. The massive flotilla consisted of some 370 British, American, and French warships.[87] Once the ships were interned, their guns were disabled through the removal of their breech blocks, and their crews were reduced to 200 officers and enlisted men on each of the capital ships.[88]

The fleet remained in captivity during the negotiations that ultimately produced the Treaty of Versailles. Von Reuter believed that the British intended to seize the German ships on 21 June 1919, which was the deadline for Germany to have signed the peace treaty. Unaware that the deadline had been extended to the 23rd, Reuter ordered the ships to be sunk at the next opportunity. On the morning of 21 June, the British fleet left Scapa Flow to conduct training maneuvers, and at 11:20 Reuter transmitted the order to his ships.[86] Out of the interned fleet, only one battleship, Baden, three light cruisers, and eighteen destroyers were saved from sinking by the British harbor personnel. The Royal Navy, initially opposed to salvage operations, decided to allow private firms to attempt to raise the vessels for scrapping.[89] Cox and Danks, a company founded by Ernest Cox, handled most of the salvage operations, including those of the heaviest vessels raised.[90] After Cox's withdrawal due to financial losses in the early 1930s, Metal Industries Group, Inc. took over the salvage operation for the remaining ships. Five more capital ships were raised, though three (SMS König, SMS Kronprinz, and SMS Markgraf) were too deep to permit raising. They remain on the bottom of Scapa Flow, along with four light cruisers.[91]

The High Seas Fleet, particularly its wartime impotence and ultimate fate, strongly influenced the later German navies, the Reichsmarine and Kriegsmarine. Former Imperial Navy officers continued to serve in the subsequent institutions, including Admiral Erich Raeder, Hipper's former chief of staff, who became the commander in chief of the Reichsmarine. Raeder advocated long-range commerce raiding by surface ships rather than constructing a large surface fleet to challenge the Royal Navy, which he viewed as a futile endeavor. His initial version of Plan Z, the construction program for the Kriegsmarine in the late 1930s, called for a large number of P-class cruisers, long-range light cruisers, and reconnaissance forces for attacking enemy shipping, though he was overruled by Adolf Hitler, who advocated a large fleet of battleships.[92]

See the original post here:

High Seas Fleet – Wikipedia

Posted in High Seas | Comments Off on High Seas Fleet – Wikipedia

What are the Benefits of Mind Uploading? – Lifeboat

Posted: at 5:24 pm

by Lifeboat Foundation Scientific Advisory Board member Michael Anissimov.

Overview: Universal mind uploading, or universal uploading for short, is the concept, by no means original to me, that the technology of mind uploading will eventually become universally adopted by all who can afford it, similar to the adoption of modern agriculture, hygiene, or living in houses. The concept is rather infrequently discussed, due to a combination of 1) its supposedly speculative nature and 2) its far-future time frame.

Discussion: Before I explore the idea, let me give a quick description of what mind uploading is and why the two roadblocks to its discussion are invalid. Mind uploading would involve simulating a human brain in a computer in enough detail that the simulation becomes, for all practical purposes, a perfect copy and experiences consciousness, just like protein-based human minds. If functionalism is true, as many cognitive scientists and philosophers correctly believe, then all the features of human consciousness that we know and love (including all our memories, personality, and sexual quirks) would be preserved through the transition. By simultaneously disassembling the protein brain as the computer brain is constructed, only one implementation of the person in question would exist at any one time, eliminating any unnecessary confusion. Still, even if two direct copies are made, the universe won't care; you would simply have created two identical individuals with the same memories. The universe can't get confused; only you can. Regardless of how perplexed one may be by contemplating this possibility for the first time from a 20th-century perspective of personal identity, an upload of you with all your memories and personality intact is no more different from you than the person you are today is from the person you were yesterday when you went to sleep, or the person you were 10-30 seconds ago when quantum fluctuations momentarily destroyed and recreated all the particles in your brain.

Regarding objections to talk of uploading: for anyone who 1) buys the silicon brain replacement thought experiment, 2) accepts arguments that the human brain operates at below about 10^19 ops/sec, and 3) considers it plausible that 10^19 ops/sec computers (plug in whatever value you believe for #2) will be manufactured this century, the topic is clearly worth broaching. Even if it's 100 years off, that's just a blink of an eye relative to the entirety of human history, and universal uploading would be something more radical than anything that's occurred with life or intelligence in the entire known history of this solar system. We can afford to stop focusing exclusively on the near future for a potential event of such magnitude. Consider it intellectual masturbation, if you like, or a serious analysis of the near-term future of the human species, if you buy the three points.

So, say that mind uploading becomes available as a technology sometime around 2050. If the early adopters don't go crazy and/or use their newfound abilities to turn the world into a totalitarian dictatorship, then they will concisely and vividly communicate the benefits of the technology to their non-uploaded family and friends. If affordable, others will then follow, but the degree of adoption will necessarily depend on whether the process is easily reversible or not. But suppose that millions of people choose to go for it.

Effects: Widespread uploading would have huge effects. Let's go over some of them in turn.

1) Massive economic growth. By allowing human minds to run on substrates that can be accelerated by the addition of computing power, as well as the possibility of spinning off non-conscious daemons to accomplish rote tasks, economic growth (at least insofar as it can be accelerated by intelligence and the robotics of 2050 alone) will accelerate greatly. Instead of relying upon 1% per year population growth rates, humans might copy themselves or (more conducive to societal diversity) spin off already-mature progeny as quickly as available computing power allows. This could lead to growth rates in human capital of 1,000% per year or far more. More economic growth might ensue in the first year (or month) after uploading than in the entire 250,000 years between the evolution of Homo sapiens and the invention of uploading. The first country that widely adopts the technology might be able to solve global poverty by donating only 0.1% of its annual GDP.

2) Intelligence enhancement. Faster does not necessarily mean smarter. "Weak superintelligence" is a term sometimes used to describe accelerated intelligence that is not qualitatively enhanced, in contrast with "strong superintelligence," which is. The road from weak to strong superintelligence would likely be very short. By observing information flows in uploaded human brains, many of the details of human cognition would be elucidated. Running standard compression algorithms over such minds might make them more efficient than blind natural selection could manage, and this extra space could be used to introduce new information-processing modules with additional features. Collectively, these new modules could give rise to qualitatively better intelligence. At the very least, rapid trial-and-error experimentation without the risk of injury would become possible, eventually revealing paths to qualitative enhancements.

3) Greater subjective well-being. Like most other human traits, our happiness set points fall on a bell curve. No matter what happens to us, be it losing our home or winning the lottery, there is a tendency for our innate happiness level to revert back to our natural set point. Some lucky people are innately really happy. Some unlucky people have chronic depression. With uploading, we will be able to see exactly which neural features ("happiness centers") correspond to high happiness set points and which don't, by combining prior knowledge with direct experimentation and investigation. This will make it possible for people to reprogram their own brains to raise their happiness set points in a way that biotechnological intervention might find difficult or dangerous. Experimental data and simple observation have shown that high happiness set-point people today don't have any mysterious handicaps, like an inability to recognize when their body is in pain, or inappropriate social behavior. They still experience sadness; it's just that their happiness returns to a higher level after the sad experience is over. Perennial tropes justifying the value of suffering will lose their appeal when anyone can be happier without any negative side effects.

4) Complete environmental recovery. (I'm not just trying to kiss up to greens; I actually care about this.) By spending most of our time as programs running on a worldwide network, we will consume far less space and use less energy and natural resources than we would in a conventional human body. Because our food would be delicious cuisines generated only by electricity or light, we could avoid all the environmental destruction caused by clear-cutting land for farming and the ensuing agricultural runoff. People imagine dystopian futures to involve a lot of homogeneity; well, we're already here as far as our agriculture is concerned. Land that once had diverse flora and fauna now consists of a few dozen agricultural staples: wheat, corn, oats, cattle pastures, factory farms. BORING. By transitioning from a proteinaceous to a digital substrate, we'll do more for our environment than any amount of conservation ever could. We could still experience this environment by inputting live-updating feeds of the biosphere into a corner of our expansive virtual worlds. It's the best of both worlds, literally: virtual and natural in harmony.

5) Escape from direct governance by the laws of physics. Though this benefit sounds more abstract or philosophical, if we were to directly experience it, its visceral nature would become immediately clear. In a virtual environment, the programmer is the complete master of everything he or she has editing rights to. A personal virtual sandbox could become one's canvas for creating the fantasy world of one's choice. Today, this can be done in a very limited fashion in virtual worlds such as Second Life. (A trend which will continue to the fulfillment of everyone's most escapist fantasies, even if uploading is impossible.) Worlds like Second Life are still limited by their system-wide operating rules and their low resolution and bandwidth. Any civilization that develops uploading would surely have the technology to develop virtual environments of great detail and flexibility, right up to the very boundaries of the possible. Anything that can become possible will be. People will be able to experience simulations of the past, travel to far-off stars and planets, and experience entirely novel worldscapes, all within the flickering bits of the worldwide network.

6) Closer connections with other human beings. Our interactions with other people today are limited by the very low bandwidth of human speech and facial expressions. By offering partial readouts of our cognitive state to others, we could engage in a deeper exchange of ideas and emotions. I predict that talking as communication will become passé; we'll engage in much deeper forms of informational and emotional exchange that will make the talking and facial expressions of today seem downright empty and soulless. Spiritualists often talk a lot about connecting closer to one another; are they aware that the best way they can go about that would be to contribute to researching neural scanning or brain-computer interfacing technology? Probably not.

7) Last but not least, indefinite lifespans. Here is the one that detractors of uploading are fond of targeting: the fact that uploading could lead to practical immortality. Well, it really could. By being a string of flickering bits distributed over a worldwide network, killing you could become extremely difficult. The data and bits of everyone would be intertwined; to kill someone, you'll either need complete editing privileges over the entire worldwide network, or the ability to blow up the planet. Needless to say, true immortality would be a huge deal, a much bigger deal than the temporary fix of life extension therapies for biological bodies, which will do very little to combat infectious disease or exotic maladies such as being hit by a truck.

Conclusion: It's obvious that mind uploading would be incredibly beneficial. As stated near the beginning of this post, only three things are necessary for it to be a big deal: 1) that you believe a brain could be incrementally replaced with functionally identical implants and retain its fundamental characteristics and identity, 2) that the computational capacity of the human brain is a reasonable number, very unlikely to be more than 10^19 ops/sec, and 3) that at some point in the future we'll have computers that fast. Not so far-fetched. Many people consider these three points plausible, but just aren't aware of their implications. If you believe those three points, then uploading becomes a fascinating goal to work towards. From a utilitarian perspective, it practically blows everything else away besides global risk mitigation, as the number of new minds leading worthwhile lives that could be created using the technology would be astronomical. The number of digital minds we could create using the matter on Earth alone would likely be over a quadrillion, more than 2,500 people for every star in the 400-billion-star Milky Way. We could make a Galactic Civilization right here on Earth in the late 21st or 22nd century. I can scarcely imagine such a thing, but I can imagine that we'll be guffawing heartily at how unambitious most human goals were in the year 2010.
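As a sanity check, the per-star figure in the conclusion follows directly from the post's own two assumptions (a quadrillion digital minds from Earth's matter alone, and 400 billion stars in the Milky Way), not from any independent data:

    10^15 minds / (4 x 10^11 stars) = 2,500 minds per star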

See more here:

What are the Benefits of Mind Uploading? – Lifeboat

Posted in Mind Uploading | Comments Off on What are the Benefits of Mind Uploading? – Lifeboat

Trying to install jitsi meet with apache2 – Stack Overflow

Posted: October 29, 2016 at 11:45 am

I know there are already posts on this subject, but they don't produce good results, and I would like to share my thinking here. Feel free to moderate my post if you think it's a bad idea.

Server: Ubuntu 16.04.1, Apache 2.4.18

DNS conf:

As I said, I am trying to run Jitsi Meet on Apache2, following the steps described in the quick install guide (https://github.com/jitsi/jitsi-meet/blob/master/doc/quick-install.md).

If I install Jitsi Meet on my server just after installing Ubuntu, so without Apache or Nginx, Jitsi works great. If I install Jitsi Meet on my server after installing Nginx, Jitsi also works great.

With the same method of installation, I tried to install Jitsi Meet after installing Apache2. I noticed that Jitsi Meet does not configure Apache2 by itself, so I tried this first configuration:
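(The vhost text itself did not survive in this archived copy. Judging from the error described below, the first configuration was probably of roughly the following shape, proxying the whole site to Prosody's BOSH listener; this is a hypothetical reconstruction, not the original poster's file:)

    <VirtualHost *:80>
        ServerName meet.mydomain.xx
        # Hypothetical reconstruction: everything, including "/", is handed
        # to Prosody's BOSH port, so the browser shows Prosody's own
        # "It works!" landing page instead of the Jitsi Meet web app.
        ProxyPreserveHost On
        ProxyPass / http://localhost:5280/
        ProxyPassReverse / http://localhost:5280/
    </VirtualHost>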

When I load the page meet.mydomain.xx I get the following error:

“It works! Now point your BOSH client to this URL to connect to Prosody.

For more information see: Prosody: Setting up BOSH”

But when I look at the /etc/prosody/conf.avail/meet.mydomain.xx.cfg.lua file, I notice that BOSH is already enabled and that the rest of the configuration matches what is explained here: https://github.com/jitsi/jitsi-meet/blob/master/doc/manual-install.md. The log contains no errors. If you have an idea to fix this problem, I'm interested.
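(For comparison, a vhost transposed from the nginx configuration in that manual-install guide would serve the Jitsi Meet static files directly and proxy only the BOSH endpoint to Prosody. This is a hedged sketch assuming the default Debian package paths, not an official configuration; the guide at the time covered only nginx. mod_proxy and mod_proxy_http must be enabled, e.g. with a2enmod proxy proxy_http:)

    <VirtualHost *:80>
        ServerName meet.mydomain.xx

        # Serve the Jitsi Meet web app straight from disk
        # (default install path of the Debian package).
        DocumentRoot /usr/share/jitsi-meet
        <Directory /usr/share/jitsi-meet>
            Require all granted
        </Directory>

        # The per-host config.js lives outside the web root;
        # the reference nginx config aliases it the same way.
        Alias /config.js /etc/jitsi/meet/meet.mydomain.xx-config.js
        <Directory /etc/jitsi/meet>
            Require all granted
        </Directory>

        # Proxy only BOSH to Prosody; everything else stays with Apache.
        ProxyPass /http-bind http://localhost:5280/http-bind
        ProxyPassReverse /http-bind http://localhost:5280/http-bind
    </VirtualHost>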

Second configuration that I tested:

With this setup the result seems better: I can see the home page of Jitsi Meet, but without the text and without the logo, and when I click on the GO button, nothing happens. The log contains no errors.
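(One plausible cause of the missing text and logo, offered as a guess rather than a confirmed diagnosis: the jitsi-meet index.html of that era is assembled with server-side includes, which is why the reference nginx config contains "ssi on". Apache ignores those include directives unless mod_include is set up, e.g. a2enmod include plus something like:)

    <Directory /usr/share/jitsi-meet>
        # Let Apache process the SSI directives embedded in index.html.
        Options +Includes
        AddOutputFilter INCLUDES .html
    </Directory>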

So here I don't really know what to do. If someone has advice or ideas, thank you for sharing them!

Bye, thank you for reading

Gspohu

Original post:
Trying to install jitsi meet with apache2 – Stack Overflow

Posted in Jitsi | Comments Off on Trying to install jitsi meet with apache2 – Stack Overflow