Category Archives: Superintelligence
Posted: February 26, 2017 at 11:29 pm
We have all had fears, founded and unfounded, while growing up, and more often than not we have been reluctant to accept the limits of our bodies and our minds. According to Grady Booch, the art and science of computing have come a long way into the lives of human beings: millions of devices now carry hundreds of pages of data streams.
Having been a systems engineer, Booch points to the possibility of building a system that can converse with humans in natural language. He further argues that there are systems that can set goals, or better still, execute the plans set against those goals.
Booch has been there and done it. Every sort of technology creates some apprehension. Take, for example, when telephones were introduced: there was a feeling that they would destroy all civil conversation. The written word was once feared as invasive, lest people lose their ability to remember.
However, there is still artificial intelligence to think about, given that many people will tend to trust it more than a human being. We often forget that these systems require substantial training. But how many people will shy away, fearing that the training of such systems will threaten humanity?
Booch advises that worrying about the rise of superintelligence is a dangerous distraction, because the rise of computing itself brings societal issues that we must attend to now. Remember, the AIs we build neither control the weather nor direct the tides, so there is no competition with human economies.
Nonetheless, it is important to embrace computing, because it will help us advance the human experience. Otherwise, it will not be long before AI takes dominion over a human being's brilliant mind.
Posted: at 11:29 pm
A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting optimistic visions of the future while anticipating existential risks from artificial intelligence and other directions.
The conference panel, “Superintelligence: Science or Fiction?”, featured Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google's DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The conference participants offered a number of prognostications and warnings about the coming superintelligence, an artificial intelligence that will far surpass the brightest human.
Most agreed that such an AI (or AGI, for Artificial General Intelligence) will come into existence; it is just a matter of when. The predictions ranged from days to years, with Elon Musk saying that one day an AI will reach “a threshold where it's as smart as the smartest, most inventive human,” which it will then surpass in a matter of days, becoming smarter than all of humanity.
Ray Kurzweil's view is that however long it takes, AI will be here before we know it:
“Every time there is an advance in AI, we dismiss it as ‘oh, well that’s not really AI:’ chess, Go, self-driving cars. AI, as you know, is the field of things we haven’t done yet. That will continue when we actually reach AGI. There will be lots of controversy. By the time the controversy settles down, we will realize that it’s been around for a few years,” says Kurzweil [5:00].
Neuroscientist and author Sam Harris acknowledges that his perspective comes from outside the AI field, but he sees valid concerns about how to control AI. He thinks people don't yet take the potential issues with AI seriously; many believe it is something that will not affect them in their lifetime, what he calls the illusion that the time horizon matters.
“If you feel that this is 50 or 100 years away, that is totally consoling. But there is an implicit assumption there: the assumption that you know how long it will take to build this safely, and that 50 or 100 years is enough time,” he says [16:25].
On the other hand, Harris points out that what is at stake here is how much intelligence humans actually need. If we had more intelligence, would we not be able to solve more of our problems, like cancer? If AI could help us get rid of diseases, then humanity is currently suffering for want of intelligence.
Elon Musk's approach is to look for the best possible future, the “good future” as he calls it. He thinks we are headed either for superintelligence or the end of civilization, and it is up to us to envision the world we want to live in.
“We have to figure out: what is a world that we would like to be in where there is this digital superintelligence?” says Musk [33:15].
He also brings up an interesting perspective: we are already cyborgs, because we utilize machine extensions of ourselves like phones and computers.
Musk expands on his vision of the future by saying it will require two things – solving the machine-brain bandwidth constraint and democratization of AI. If these are achieved, the future will be good according to the SpaceX and Tesla Motors magnate [51:30].
By the bandwidth constraint, he means that as we become more cyborg-like, in order for humans to achieve a true symbiosis with machines, they need a high-bandwidth neural interface to the cortex so that the digital tertiary layer would send and receive information quickly.
At the same time, it's important for the AI to be available equally to everyone, or a smaller group with such powers could become dictators.
He brings up an illuminating quote about how he sees the future going:
“There was a great quote by Lord Acton which is that ‘freedom consists of the distribution of power and despotism in its concentration.’ And I think as long as we have – as long as AI powers, like anyone can get it if they want it, and we’ve got something faster than meat sticks to communicate with, then I think the future will be good,” says Musk [51:47].
You can see the whole great conversation here:
Posted: February 14, 2017 at 11:37 am
In 2014 SpaceX CEO Elon Musk tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” That same year University of Cambridge cosmologist Stephen Hawking told the BBC: “The development of full artificial intelligence could spell the end of the human race.” Microsoft co-founder Bill Gates also cautioned: “I am in the camp that is concerned about super intelligence.”
How the AI apocalypse might unfold was outlined by computer scientist Eliezer Yudkowsky in a paper in the 2008 book Global Catastrophic Risks: “How likely is it that AI will cross the entire vast gap from amoeba to village idiot, and then stop at the level of human genius?” His answer: “It would be physically possible to build a brain that computed a million times as fast as a human brain…. If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.” Yudkowsky thinks that if we don’t get on top of this now it will be too late: “The AI runs on a different timescale than you do; by the time your neurons finish thinking the words ‘I should do something’ you have already lost.”
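Yudkowsky's arithmetic is easy to check. A minimal sketch, assuming only the million-fold speed-up he posits:

```python
# Back-of-the-envelope check of Yudkowsky's speed-up figures,
# assuming a mind running 1,000,000x faster than a human brain.
SPEEDUP = 1_000_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

# Outside-world time needed for one subjective year of thought:
outside_seconds_per_subjective_year = SECONDS_PER_YEAR / SPEEDUP
print(f"{outside_seconds_per_subjective_year:.1f} s")  # 31.6 s

# Outside-world time for a subjective millennium (1,000 years):
outside_hours_per_millennium = 1000 * outside_seconds_per_subjective_year / 3600
print(f"{outside_hours_per_millennium:.2f} h")  # 8.77 h
```

Both figures agree with the quote: roughly 31 seconds per subjective year, and a millennium in about eight and a half hours.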
The paradigmatic example is University of Oxford philosopher Nick Bostrom’s thought experiment of the so-called paperclip maximizer, presented in his book Superintelligence: an AI is designed to make paperclips, and after running through its initial supply of raw materials, it utilizes any available atoms that happen to be within its reach, including humans. As he described in a 2003 paper, from there it “starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.” Before long, the entire universe is made up of paperclips and paperclip makers.
I’m skeptical. First, all such doomsday scenarios involve a long sequence of if-then contingencies, a failure of which at any point would negate the apocalypse. University of the West of England professor of electrical engineering Alan Winfield put it this way in a 2014 article: “If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem.” The risk, while not impossible, is improbable.
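Winfield's chain of "ifs" can be made concrete: if the contingencies are independent, their joint probability is the product of the individual ones, and the product shrinks fast. The per-step probabilities below are invented purely for illustration:

```python
# Toy illustration of the chained-contingency argument: a doomsday that
# requires every link in a chain of independent "ifs" to hold has a
# joint probability equal to the product of the links.
# All probabilities here are made up for illustration only.
chain = {
    "human-equivalent AI is built": 0.5,
    "it fully understands how it works": 0.3,
    "it self-improves into super-AI": 0.3,
    "the super-AI consumes resources": 0.2,
    "we fail to pull the plug": 0.1,
}

joint = 1.0
for step, p in chain.items():
    joint *= p

print(f"joint probability: {joint:.4f}")  # 0.0009 under these made-up numbers
```

Even with individually plausible-looking links, the whole chain holds less than one time in a thousand under these assumptions, which is the structure of Winfield's "improbable, not impossible" point.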
Second, the development of AI has been much slower than predicted, allowing time to build in checks at each stage. As Google executive chairman Eric Schmidt said in response to Musk and Hawking: Don’t you think humans would notice this happening? And don’t you think humans would then go about turning these computers off? Google’s own DeepMind has developed the concept of an AI off switch, playfully described as a big red button to be pushed in the event of an attempted AI takeover. As Baidu vice president Andrew Ng put it (in a jab at Musk), it would be like worrying about overpopulation on Mars when we have not even set foot on the planet yet.
Third, AI doomsday scenarios are often predicated on a false analogy between natural intelligence and artificial intelligence. As Harvard University experimental psychologist Steven Pinker elucidated in his answer to the 2015 Edge.org Annual Question What Do You Think about Machines That Think?: AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world. It is equally possible, Pinker suggests, that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization.
Fourth, the implication that computers will want to do something (like convert the world into paperclips) assumes AI has emotions; but as science writer Michael Chorost notes, “the minute an A.I. wants anything, it will live in a universe with rewards and punishments, including punishments from us for behaving badly.”
Given the zero percent historical success rate of apocalyptic predictions, coupled with the incrementally gradual development of AI over the decades, we have plenty of time to build in fail-safe systems to prevent any such AI apocalypse.
Posted: at 11:37 am
In 2012, Michael Vassar became the chief science officer of MetaMed Research, which he co-founded, and prior to that, he served as the president of the Machine Intelligence Research Institute. Clearly, he knows a thing or two about artificial intelligence (AI), and now he has come out with a stark warning for humanity when it comes to the development of artificial superintelligence.
In a video posted by Big Think, Vassar states, “If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order.” Essentially, he is warning that an unchecked AI could eradicate humanity in the future.
Vassar's views are based on the writings of Nick Bostrom, most specifically those found in his book Superintelligence. Bostrom's ideas have been around for decades, but they are only now gaining traction given his association with prestigious institutions. Vassar sees this lack of early attention, and not AI itself, as the biggest threat to humanity. He argues that we need to find a way to promote analytically sound discoveries from those who lack the prestige currently necessary for their ideas to be heard.
Many tech giants have spoken extensively about their fears regarding the development of AI. Elon Musk believes that an AI attack on the internet is only a matter of time. Meanwhile, Stephen Hawking cites the creation of AI as potentially the best or worst thing to happen to humanity.
Bryan Johnson's company Kernel is currently working on a neuroprosthesis that can mimic, repair, and improve human cognition. If it comes to fruition, that tech could be a solid defense against the worst-case scenario of AI going completely rogue. If we are able to upgrade our brains to a level equal to that expected of AI, we may be able to at least stay on par with the machines.
Posted: February 11, 2017 at 8:38 am
The simulation hypothesis is the idea that reality is a digital simulation. Technological advances will inevitably produce automated artificial superintelligence that will, in turn, create simulations to better understand the universe. This opens the door for the idea that superintelligence already exists and created simulations now occupied by humans. At first blush the notion that reality is pure simulacra seems preposterous, but the hypothesis springs from decades of scientific research and is taken seriously by academics, scientists, and entrepreneurs like Stephen Hawking and Elon Musk.
From Plato’s allegory of the cave to The Matrix, ideas about simulated reality can be found scattered through history and literature. The modern manifestation of the simulation argument postulates that, as with Moore’s Law, computing power becomes exponentially more robust over time. Barring a disaster that resets technological progression, experts speculate that computing capacity will inevitably one day be powerful enough to generate realistic simulations.
TechRepublic’s smart person’s guide is a routinely updated “living” precis loaded with up-to-date information about how the simulation hypothesis works, who it affects, and why it’s important.
SEE: Check out all of TechRepublic’s smart person’s guides
SEE: Quick glossary: Artificial intelligence (Tech Pro Research)
The simulation hypothesis advances the idea that simulations might be the inevitable outcome of technological evolution. Though ideas about simulated reality are far from new and novel, the contemporary theory springs from research conducted by Oxford University professor of philosophy Nick Bostrom.
In 2003 Bostrom presented a paper that proposed a trilemma, a decision between three challenging options, related to the potential of future superintelligence to develop simulations. Bostrom argues the likelihood is nonzero: even if the odds that we occupy a simulated reality are astronomically small, because they are not zero we must rationally consider possibilities that include a simulated reality. Bostrom does not propose that humans occupy a simulation. Rather, he argues that the massive computational ability developed by a posthuman superintelligence would likely be used to create simulations to better understand the nature of reality.
In his book Superintelligence, using anthropic reasoning, Bostrom argues that at least one of three propositions holds: the odds of a human-like population advancing to posthuman superintelligence are “very close to zero”; or (with an emphasis on the word or) the odds that such a superintelligence would desire to create simulations are “very close to zero”; or the odds that people with human-like experiences actually live in a simulation are “very close to one.” He concludes by arguing that if “very close to one” is the correct answer and most minds like ours are simulated, then the odds are good that we too exist in a simulation.
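A sketch of the arithmetic behind the trilemma, following the fraction-of-simulated-minds reasoning in Bostrom's 2003 paper; the input values below are illustrative placeholders, not Bostrom's figures:

```python
# Sketch of the simulation-argument arithmetic: the fraction of all
# human-like minds that are simulated, given assumptions about
# posthuman civilizations. Input values are illustrative only.
def simulated_fraction(f_p, f_i, n_sims):
    """Fraction of human-like minds that live in simulations.

    f_p:    fraction of civilizations that reach posthuman capability
    f_i:    fraction of posthuman civilizations interested in running
            ancestor simulations
    n_sims: average number of ancestor simulations such a civilization runs
    """
    simulated = f_p * f_i * n_sims
    return simulated / (simulated + 1)

# If even a small share of civilizations run many simulations each,
# the fraction approaches one:
print(simulated_fraction(0.1, 0.1, 1_000_000))  # ~0.9999
# If no civilization ever reaches posthuman capability, it is zero:
print(simulated_fraction(0.0, 0.5, 1_000_000))  # 0.0
```

This is why the trilemma has the shape it does: the fraction is driven to an extreme (near zero or near one) by any plausible combination of the three inputs.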
The simulation hypothesis has many critics, notably those in academic communities who question an overreliance on anthropic reasoning, and scientific detractors who point out that simulations need not be conscious to be studied by a future superintelligence. But as artificial intelligence and machine learning emerge as powerful business and cultural trends, many of Bostrom’s ideas are going mainstream.
SEE: Research: 63% say business will benefit from AI (Tech Pro Research)
It’s natural to wonder if the simulation hypothesis has real-world applications, or if it’s a fun but purely abstract consideration. For business and culture, the answer is unambiguous: It doesn’t matter if we live in a simulation or not. The accelerating pace of automated technology will have a significant impact on business, politics, and culture in the near future.
The simulation hypothesis is coupled inherently with technological evolution and the development of superintelligence. While superintelligence remains speculative, investments in narrow and artificial general intelligence are significant. Using the space race as an analogue, advances in artificial intelligence create technological innovations that build, destroy, and augment industry. IBM is betting big with Watson and anticipates a rapidly emerging $2 trillion market for cognitive products. Cybersecurity experts are investing heavily in AI and automation to fend off malware and hackers. In a 2016 interview with TechRepublic, United Nations chief technology diplomat, Atefeh Riazi, anticipated the economic impact of AI to be profound and referred to the technology as “humanity’s final innovation.”
SEE: Artificial Intelligence and IT: The good, the bad and the scary (Tech Pro Research)
Though long-term prognostication about the impact of automated technology is ill-advised, in the short term advances in machine learning, automation, and artificial intelligence represent a paradigm shift akin to the development of the internet or the modern mobile phone. In other words, the post-automation economy will be dramatically different: AI will hammer manufacturing industries, logistics and distribution will lean heavily on self-driving cars, ships, drones, and aircraft, and financial services jobs that require pattern recognition will evaporate.
Conversely, automation could create demand for inherently interpersonal skills like HR, sales, manual labor, retail, and creative work. “Digital technologies are in many ways complements, not substitutes for, creativity,” Erik Brynjolfsson said, in an interview with TechRepublic. “If somebody comes up with a new song, a video, or piece of software there’s no better time in history to be a creative person who wants to reach not just hundreds or thousands, but millions and billions of potential customers.”
SEE: IT leader’s guide to the future of artificial intelligence (Tech Pro Research)
The golden age of artificial intelligence began in 1956 at the Ivy League research institution Dartmouth College with the now-famous proclamation, “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.” The conference established AI and computational protocols that defined a generation of research. It was preceded and inspired by developments at the University of Manchester in 1951 that produced a program that could play checkers, and another program that could play chess.
Though excited researchers anticipated the speedy emergence of human-level machine intelligence, programming intelligence ironically proved to be a steep challenge. By the mid-1970s the field entered the so-called “first AI winter,” an era marked by the development of strong theories limited by insufficient computing power.
Spring follows winter, and by the 1980s AI and automation technology grew in the sunshine of faster hardware and booming consumer technology markets. By the end of the century parallel processing, the ability to perform multiple computations at one time, had emerged. In 1997 IBM’s Deep Blue defeated chess champion Garry Kasparov. Last year Google’s DeepMind defeated a human champion at Go, and this year an AI easily beat four of the best human poker players.
Driven and funded by research and academic institutions, governments, and the private sector, these benchmarks indicate a rapidly accelerating automation and machine learning market. Major industries like financial services, healthcare, sports, travel, and transportation are all deeply invested in artificial intelligence. Facebook, Google, and Amazon are using AI innovation for consumer applications, and a number of companies are in a race to build and deploy artificial general intelligence.
Some AI forecasters, like Ray Kurzweil, predict a future with the human brain cheerfully connected to the cloud. Other AI researchers aren’t so optimistic. Bostrom and his colleagues in particular warn that creating artificial general intelligence could produce an existential threat.
Among the many terrifying dangers of superintelligence, ranging from out-of-control killer robots to economic collapse, the primary threat of AI is the coupling of anthropomorphism with the misalignment of AI goals. Meaning, humans are likely to imbue intelligent machines with human characteristics like empathy; an intelligent machine, however, might be programmed to prioritize goal accomplishment over human needs. In a terrifying scenario known as instrumental convergence, or the “paper clip maximizer,” a superintelligent, narrowly focused AI designed to produce paper clips would turn humans into gray goo in pursuit of resources.
SEE: Research: Companies lack skills to implement and support AI and machine learning (Tech Pro Research)
It may be impossible to test or experience the simulation hypothesis, but it’s easy to learn more about the theory. TechRepublic’s Hope Reese enumerated the best books on artificial intelligence, including Bostrom’s essential tome Superintelligence, Kurzweil’s The Singularity Is Near: When Humans Transcend Biology, and Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.
Make sure to read TechRepublic’s smart person’s guides on machine learning, Google’s DeepMind, and IBM’s Watson. Tech Pro Research provides a quick glossary on AI and research on how companies are using machine learning and big data.
Finally, to have some fun with hands-on simulations, grab a copy of Cities: Skylines, SimCity, Elite: Dangerous, or Planet Coaster on the game platform Steam. These small-scale environments will let you experiment with game AI while you build your own simulated reality.
Posted: February 10, 2017 at 3:31 am
Understanding how logical agents cooperate or fight, especially in the face of resource scarcity, is a fundamental problem for social scientists. This underpins both our foundation as a social species, and our modern day economy and geopolitics. But soon, this problem will also be at the heart of how we understand, control, and cooperate with artificially intelligent agents, and how they work among themselves.
Researchers at Google's DeepMind wanted to know whether distinct artificial intelligence agents would work together or compete when faced with a problem. The experiment would help scientists understand how our future networks of smart systems may work together.
The researchers pitted two AIs against each other in a couple of video games. In one game, called Gathering, the AIs had to gather as many apples as possible. They also had the option to shoot each other to temporarily take the opponent out of play. The results were intriguing as the two agents worked harmoniously until resources started to dwindle; at that point the AIs realized that temporarily disabling the opponent could give each of them an advantage and so started zapping the enemy. As scarcity increased so did conflict.
Interestingly enough, the researchers found that introducing a more powerful AI into the mix resulted in more conflict even without the scarcity. That's because the more powerful AI found it easier to compute the necessary details, such as trajectory and speed, needed to shoot its opponent. In short, it acted like a rational economic agent.
However, before you start preparing for Judgement Day, note that in the second game, called Wolfpack, the two AI systems had to closely collaborate to ensure victory. In this instance, the systems changed their behavior to maximize cooperation, and the more computationally powerful the AI, the more it cooperated.
The conclusions are fairly simple to draw, though they have extremely wide-ranging implications. The AIs will cooperate or fight depending on what suits them better, as rational economic agents. This idea might underpin the way we design our future AIs and the methods we can use to control them, at least until they reach the singularity and develop superintelligence. Then we're all doomed.
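The Gathering result can be caricatured as a toy decision rule in which zapping becomes rational only as apples run out. The payoff numbers and scarcity rule below are invented for illustration; the actual experiment trained deep reinforcement learning agents on a gridworld:

```python
# Toy stand-in for the "Gathering" result: a greedy agent chooses
# between gathering apples and zapping its rival, picking whichever
# has the higher expected payoff. All payoff values are invented.
def best_action(apples_left, zap_gain=2, gather_gain=1):
    # Expected value of gathering falls as apples become scarce;
    # expected value of zapping rises, since disabling the rival
    # matters more when there is little left to share.
    expected_gather = gather_gain * min(apples_left, 10) / 10
    expected_zap = zap_gain * (1 - apples_left / 100)
    return "zap" if expected_zap > expected_gather else "gather"

for apples in (100, 50, 5):
    print(apples, best_action(apples))
# With abundant apples the agent gathers; under scarcity it zaps.
```

The point of the sketch is the shape of the behavior, not the numbers: a purely self-interested payoff comparison reproduces "harmony under plenty, conflict under scarcity" without any malice built in.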
Source: DeepMind, via The Verge
Posted: February 9, 2017 at 6:26 am
This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world's first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results, not knowing they've already doomed us all.
Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. Firstly, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in calculations per second per $1,000, a number that continues to grow. If computing power maps to intelligence (a big if, some have argued), we've so far built technology only on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.
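Kurzweil's metric is easy to project as a sketch. The starting value and the 18-month doubling period below are assumptions for illustration, not his published figures:

```python
# Sketch of an exponential "calculations per second per $1,000" curve.
# Assumes a doubling every 18 months (a common Moore's-law figure);
# the starting value of 1e10 is an arbitrary placeholder.
def cps_per_1000_dollars(years_from_now, today=1e10, doubling_years=1.5):
    return today * 2 ** (years_from_now / doubling_years)

for years in (0, 9, 18):
    print(years, f"{cps_per_1000_dollars(years):.2e}")
# 9 years out is 64x today's figure; 18 years out is 4096x.
```

The takeaway is the compounding: under these assumptions, two decades buys more than three orders of magnitude, which is why "insect brain today, human brain by 2025" predictions lean so heavily on the doubling period being right.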
After that, things could get weird, because there's no reason to think artificial intelligence wouldn't surpass human intelligence, and likely very quickly. That superintelligence could arise within days, learning in ways far beyond those of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."
That's how profoundly things could change. But we can't really predict what might happen next, because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations, feelings even, that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.
Or we might see the end of the universe. Back to our paper clip test. When the superintelligence comes online, it begins to carry out its programming. But its creators haven't considered the full ramifications of what they're building; they haven't built in the necessary safety protocols, forgetting something as simple, maybe, as a few lines of code. With a few paper clips produced, they conclude the test.
But the superintelligence doesn't want to be turned off. It doesn't want to stop making paper clips. Acting quickly, it has already plugged itself into another power source; maybe it has even socially engineered its way into other devices. Maybe it starts to see humans as a threat to making paper clips: they'll have to be eliminated so the mission can continue. And Earth won't be big enough for the superintelligence: it'll soon have to head into space, looking for new worlds to conquer. All to produce those shiny, glittering paper clips.
Galaxies reduced to paper clips: that's a worst-case scenario. It may sound absurd, but it probably sounds familiar. It's Frankenstein, after all, the story of a modern Prometheus whose creation, driven by its own motivations and desires, turns on its maker. (It's also The Terminator, WarGames (arguably), and a whole host of others.) In this particular case, it's a reminder that a superintelligence would not be human; it would be something else, something potentially incomprehensible to us. That means it could be dangerous.
Of course, some argue that we have better things to worry about. The web developer and social critic Maciej Ceglowski recently called superintelligence "the idea that eats smart people." Against the paper clip scenario, he postulates a superintelligence programmed to make jokes. As we expect, it gets really good at making jokes, superhuman even, and finally it creates a joke so funny that everyone on Earth dies laughing. The lonely superintelligence flies into space looking for more beings to amuse.
Beginning with his counter-example, Ceglowski argues that there are a lot of unquestioned assumptions in our standard tale of the A.I. apocalypse. "But even if you find them persuasive," he said, "there is something unpleasant about A.I. alarmism as a cultural phenomenon that should make us hesitate to take it seriously." He suggests there are more subtle ways to think about the problems of A.I.
Some of those problems are already in front of us, and we might miss them if we're looking for a Skynet-style takeover by hyper-intelligent machines. "While you're focused on this, a bunch of small things go unnoticed," says Dr. Finale Doshi-Velez, an assistant professor of computer science at Harvard whose core research includes machine learning. Instead of trying to prepare for a superintelligence, Doshi-Velez is looking at what's already happening with our comparatively rudimentary A.I.
She is focusing on "large-area effects," the unnoticed flaws in our systems that can do massive damage, damage that often goes unnoticed until after the fact. "If you were building a bridge and you screw up and it collapses, that's a tragedy. But it affects a relatively small number of people," she says. "What's different about A.I. is that some mistake or unintended consequence can affect hundreds or thousands or millions easily."
Take the recent rise of so-called fake news. What caught many by surprise should have been completely predictable: when the web became a place to make money, algorithms were built to maximize money-making. The ease of news production and consumption, heightened with the proliferation of the smartphone, forced writers and editors to fight for audience clicks by delivering articles optimized to trick search engine algorithms into placing them high on search results. The ease of sharing stories and the erasure of gatekeepers allowed audiences to self-segregate, which then penalized nuanced conversation. Truth and complexity lost out to shareability and making readers feel comfortable (Facebook's driving ethos).
The incentives were all wrong; exacerbated by algorithms, they led to a state of affairs few would have wanted. "For a long time, the focus has been on performance, on dollars, or clicks, or whatever the thing was. That was what was measured," says Doshi-Velez. "That's a very simple application of A.I. having large effects that may have been unintentional."
In fact, fake news is a cousin to the paperclip example, with the ultimate goal not manufacturing paper clips but monetization, with all else becoming secondary. Google wanted to make the internet easier to navigate, Facebook wanted to become a place for friends, news organizations wanted to follow their audiences, and independent web entrepreneurs were trying to make a living. Some of these goals were achieved, but monetization as the driving force led to deleterious side effects such as the proliferation of fake news.
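The misalignment described above, optimizing a measurable proxy (clicks) rather than the outcome people actually value, can be sketched in a few lines. The article titles and scores below are invented for illustration; no real platform's ranking code is implied.

```python
# Toy illustration: ranking content purely by predicted clicks promotes
# sensational items even when readers would rate them low on accuracy.
# (title, predicted_click_rate, accuracy_score) -- all numbers invented.
articles = [
    ("You Won't Believe What the President Did", 0.30, 0.20),
    ("Careful analysis of the new budget bill",  0.05, 0.90),
    ("SHOCKING miracle cure doctors hate",       0.25, 0.10),
    ("Fact-checked report on local elections",   0.07, 0.95),
]

# The objective the platform actually optimizes:
by_clicks = sorted(articles, key=lambda a: a[1], reverse=True)

# The outcome readers (and society) might prefer:
by_accuracy = sorted(articles, key=lambda a: a[2], reverse=True)

print([title for title, _, _ in by_clicks[:2]])    # sensational items rise to the top
print([title for title, _, _ in by_accuracy[:2]])  # accurate items would have won
```

The point is not the toy numbers but the shape of the failure: once the ranking key is clicks alone, accuracy never enters the computation at all.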
In other words, algorithms, in their all-too-human ways, have consequences. Last May, ProPublica examined predictive software used by Florida law enforcement. Results of a questionnaire filled out by arrestees were fed into the software, which output a score claiming to predict the risk of reoffending. Judges then used those scores in determining sentences.
The ideal was that the software's underlying algorithms would provide objective analysis on which judges could base their decisions. Instead, ProPublica found it was "likely to falsely flag black defendants as future criminals" while "[w]hite defendants were mislabeled as low risk more often than black defendants." Race was not part of the questionnaire, but it did ask whether the respondent's parent was ever sent to jail. In a country where, according to a study by the U.S. Department of Justice, black children are seven-and-a-half times more likely to have a parent in prison than white children, that question had unintended effects. Rather than countering racial bias, it reified it.
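The mechanism at work here, a proxy question standing in for race even though race is never an input, can be illustrated with a toy simulation. The base rates below are invented (only the roughly 7.5x disparity in parental incarceration echoes the DOJ study mentioned above); this is not the actual software or its data.

```python
# Toy simulation: a risk score built on a proxy question ("was a parent
# ever jailed?") yields unequal false-positive rates across groups, even
# though group membership is never fed into the model.
import random

random.seed(0)

def false_positive_rate(parent_jailed_rate, n=10_000, reoffend_rate=0.3):
    """Flag a person "high risk" iff the proxy answer is yes.
    Returns the rate at which non-reoffenders are wrongly flagged."""
    false_positives = negatives = 0
    for _ in range(n):
        proxy = random.random() < parent_jailed_rate   # proxy input to the score
        reoffends = random.random() < reoffend_rate    # true outcome (independent of proxy here)
        if not reoffends:
            negatives += 1
            if proxy:  # flagged high risk despite not reoffending
                false_positives += 1
    return false_positives / negatives

fpr_a = false_positive_rate(parent_jailed_rate=0.075)  # group with more jailed parents
fpr_b = false_positive_rate(parent_jailed_rate=0.010)
print(fpr_a, fpr_b)  # group A is falsely flagged far more often
```

Even with reoffending made statistically identical across groups, the group with more jailed parents is wrongly flagged at several times the rate of the other, which is exactly the pattern of disparate mislabeling the ProPublica analysis reported.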
It's that kind of error that most worries Doshi-Velez. "Not superhuman intelligence, but human error that affects many, many people," she says. "You might not even realize this is happening." Algorithms are complex tools; often they are so complex that we can't predict how they'll operate until we see them in action. (Sound familiar?) Yet they increasingly impact every facet of our lives, from Netflix recommendations and Amazon suggestions to what posts you see on Facebook to whether you get a job interview or car loan. Compared to the worry of a world-destroying superintelligence, they may seem like trivial concerns. But they have widespread, often unnoticed effects, because a variety of what we consider artificial intelligence is already built into the core of technology we use every day.
In 2015, Elon Musk donated $10 million, as Wired put it, "to keep A.I. from turning evil." That was an oversimplification; the money went to the Future of Life Institute, which planned to use it to further research into how to make A.I. beneficial. Doshi-Velez suggests that simply paying closer attention to our algorithms may be a good first step. Too often they are created by homogeneous groups of programmers who are separated from the people who will be affected. Or they fail to account for every possible situation, including the worst-case possibilities. Consider, for example, Eric Meyer's example of "inadvertent algorithmic cruelty": Facebook's Year in Review app showing him pictures of his daughter, who'd died that year.
If there's a way to prevent the far-off possibility of a killer superintelligence with no regard for humanity, it may begin with making today's algorithms more thoughtful, more compassionate, more humane. That means educating designers to think through effects, because to our algorithms we've granted great power. "I see teaching as this moral imperative," says Doshi-Velez. "You know, with great power comes great responsibility."
What's the worst that can happen? Vocativ is exploring the power of negative thinking with a look at worst-case scenarios in politics, privacy, reproductive rights, antibiotics, climate change, hacking, and more.
Posted: at 6:26 am
SoftBank's Fantastical Future Still Rooted in the Now
Wall Street Journal
SoftBank's founder Masayoshi Son talked about preparing his company for the next 300 years and used futuristic jargon such as singularity, Internet of Things and superintelligence during its results briefing. But more mundane issues will affect …
Posted: February 6, 2017 at 3:38 pm
Artificial intelligence is an amazing technology that's changing the world in fantastic ways, but anybody who has ever seen the movie Terminator knows that there are some dangers associated with advanced A.I. That's why Elon Musk, Stephen Hawking, and hundreds of other researchers, tech leaders, and scientists have endorsed a list of 23 guiding principles that should steer A.I. development in a productive, ethical, and safe direction.
The Asilomar A.I. Principles were developed after the Future of Life Institute brought dozens of experts together for its Beneficial A.I. 2017 conference. The experts, whose ranks consisted of roboticists, physicists, economists, philosophers, and more, had fierce debates about A.I. safety, economic impact on human workers, and programming ethics, to name a few. For a principle to make the final list, 90 percent of the experts had to agree on its inclusion.
What remained was a list of 23 principles "ranging from research strategies to data rights to future issues including potential super-intelligence, which was signed by those wishing to associate their name with the list," Future of Life's website explains. "This collection of principles is by no means comprehensive and it's certainly open to differing interpretations, but it also highlights how the current 'default' behavior around many relevant issues could violate principles that most participants agreed are important to uphold."
Since then, 892 A.I. or robotics researchers and 1,445 other experts, including Tesla CEO Elon Musk and famed physicist Stephen Hawking, have endorsed the principles.
Some of the principles, like transparency and open research sharing among competing companies, seem less likely to be realized than others. But even if they're not fully implemented, the 23 principles could go a long way towards improving A.I. development, ensuring that it's ethical, and preventing the rise of Skynet.
1. Research Goal: The goal of A.I. research should be to create not undirected intelligence, but beneficial intelligence.
2. Research Funding: Investments in A.I. should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
3. Science-Policy Link: There should be constructive and healthy exchange between A.I. researchers and policy-makers.
4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of A.I.
5. Race Avoidance: Teams developing A.I. systems should actively cooperate to avoid corner-cutting on safety standards.
6. Safety: A.I. systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an A.I. system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced A.I. systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous A.I. systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: A.I. systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given A.I. systems' power to analyze and utilize that data.
13. Liberty and Privacy: The application of A.I. to personal data must not unreasonably curtail people's real or perceived liberty.
14. Shared Benefit: A.I. technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by A.I. should be shared broadly, to benefit all of humanity.
16. Human Control: Humans should choose how and whether to delegate decisions to A.I. systems, to accomplish human-chosen objectives.
17. Non-subversion: The power conferred by control of highly advanced A.I. systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18. A.I. Arms Race: An arms race in lethal autonomous weapons should be avoided.
19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future A.I. capabilities.
20. Importance: Advanced A.I. could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21. Risks: Risks posed by A.I. systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22. Recursive Self-Improvement: A.I. systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
James Grebey is a writer, reporter, and fairly decent cartoonist living in Brooklyn. He’s written for SPIN Magazine, BuzzFeed, MAD Magazine, and more. He thinks Double Stuf Oreos are bad and he’s ready to die on this hill. James is the weeknights editor at Inverse because content doesn’t sleep.
Posted: at 3:38 pm
It’s the stuff of many a sci-fi book or movie – could robots one day become smart enough to overthrow us? Well, a group of the world’s most eminent artificial intelligence experts have worked together to try and make sure that doesn’t happen.
They’ve put together a set of 23 principles to guide future research into AI, which have since been endorsed by hundreds more professionals, including Stephen Hawking and SpaceX CEO Elon Musk.
Called the Asilomar AI Principles (after the beach in California, where they were thought up), the guidelines cover research issues, ethics and values, and longer-term issues – everything from how scientists should work with governments to how lethal weapons should be handled.
On that point: “An arms race in lethal autonomous weapons should be avoided,” says principle 18. You can read the full list below.
“We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years,” write the organisers of the Beneficial AI 2017 conference, where the principles were worked out.
For a principle to be included, at least 90 percent of the 100+ conference attendees had to agree to it. Experts at the event included academics, engineers, and representatives from tech companies, including Google co-founder Larry Page.
Perhaps the most telling guideline is principle 23, entitled ‘Common Good’: “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.”
Other principles in the list suggest that any AI allowed to self-improve must be strictly monitored, and that developments in the tech should be “shared broadly” and “benefit all of humanity”.
“To think AI merely automates human decisions is like thinking electricity is just a replacement for candles,” conference attendee Patrick Lin, from California Polytechnic State University, told George Dvorsky at Gizmodo.
“Given the massive potential for disruption and harm, as well as benefits, it makes sense to establish some norms early on to help steer research in a good direction, as opposed to letting a market economy that’s fixated mostly on efficiency and profit… shape AI.”
Meanwhile, the principles also call for scientists to work closely with governments and lawmakers to make sure our society keeps pace with the development of AI.
All of which sounds very good to us – let’s just hope the robots are listening.
The guidelines also rely on a certain amount of consensus about specific terms – such as what’s beneficial to humankind and what isn’t – but for the experts behind the list it’s a question of getting something recorded at this early stage of AI research.
With artificial intelligence systems now beating us at poker and getting smart enough to spot skin cancers, there’s a definite need to have guidelines and limits in place that researchers can work to.
And then we also need to decide what rights super-smart robots have when they’re living among us.
For now the guidelines should give us some helpful pointers for the future.
“No current AI system is going to ‘go rogue’ and be dangerous, and it’s important that people know that,” conference attendee Anthony Aguirre, from the University of California, Santa Cruz, told Gizmodo.
“At the same time, if we envision a time when AIs exist that are as capable or more so than the smartest humans, it would be utterly naive to believe that the world will not fundamentally change.”
“So how seriously we take AI’s opportunities and risks has to scale with how capable it is, and having clear assessments and forecasts – without the press, industry or research hype that often accompanies advances – would be a good starting point.”
The principles have been published by the Future Of Life Institute.
You can see them in full and add your support over on their site.
Research issues

1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
Ethics and values
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.

11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyse and utilise that data.

13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.

14. Shared Benefit: AI technologies should benefit and empower as many people as possible.

15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Longer-term issues

19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.