Breaking News and Updates
- Abolition Of Work
- Alternative Medicine
- Artificial Intelligence
- Atlas Shrugged
- Ayn Rand
- Basic Income Guarantee
- Conscious Evolution
- Cosmic Heaven
- Designer Babies
- Ethical Egoism
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom of Speech
- Gene Medicine
- Genetic Engineering
- Germ Warfare
- Golden Rule
- Government Oppression
- High Seas
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Longevity
- Immortality Medicine
- Intentional Communities
- Life Extension
- Mars Colonization
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- New Utopia
- Personal Empowerment
- Political Correctness
- Politically Incorrect
- Post Human
- Post Humanism
- Private Islands
- Resource Based Economy
- Ron Paul
- Second Amendment
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Teilhard de Chardin
- The Singularity
- Tor Browser
- Transhuman News
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Zeitgeist Movement
The Evolutionary Perspective
Tag Archives: opinion
Posted: August 29, 2016 at 7:34 am
I was going through some of my school notes today and came across the following lecture notes I'd taken in a class on religion and illusions when I was still a student. I figured I'd introduce you to this very interesting topic, since most of what we are taught about religion in the mainstream media is usually much the same. I hope you enjoy it and find it interesting. Don't hesitate to leave your opinion at the end.
Nihilism as a philosophy seemed passé by the 1980s. Few talked about it in the literature except to declare it a dead issue. Literally, in the materialist sense, nihilism refers to a truism: from nothing, nothing comes. From a philosophical viewpoint, however, moral nihilism took on a similar connotation. One literally believed in nothing, which is somewhat of an oxymoron, since to believe in nothing is to believe in something. A corner was turned in the history of nihilism once 9/11 became a reality. After this major event, religious and social science scholars began to ask whether violence could be attributed to nihilistic thinking; in other words, whether we had lost our way morally by believing in nothing, by rejecting traditional moral foundations. It was feared that an "anything goes" mentality and a lack of absolute moral foundations could lead to further acts of violence, as the goals forwarded by life-affirmation were being thwarted by the destructive ends of so-called violent nihilists. This position is, however, contestable.
Extreme beliefs in values such as nationalism, patriotism, statism, secularism, or religion can also lead to violence, as one becomes unsettled by beliefs contrary to the reigning orthodoxy and strikes out violently to protect communal values. Therefore, believing in something can also lead to violence and suffering. To put the argument to rest, it is not about whether one believes in something or nothing but how absolutist the position is; it is the rigidity of values that causes pain and suffering, what Nobel prize winner Amartya Sen calls the "illusion of singularity." Since 9/11, nihilism has become a favourite target to criticize and marginalize, yet its history and complexity actually lead to a more nuanced argument. Perhaps we should be looking at ways nihilism complements Western belief systems, even Christian doctrine, rather than fear its implementation in ethical and moral discussions.
Brief History of Nihilism
To understand why some forms of nihilism are still problematic, it is important to ask how it was used historically and for what motive. Nihilism was first thought synonymous with having no authentic values, no real ends; one's whole existence is pure nothingness. In its earliest European roots, nihilism was initially used to label groups or ideas as inferior, especially if they were deemed threatening to established communal ideals. Nihilism as a label was its first function.
Nihilism initially functioned as a pejorative label and a term of abuse against modern trends that threatened to destroy either Christian hegemonic principles or tradition in general. During the seventeenth and eighteenth centuries, modernization in France meant that power shifted from the traditional feudal nobility to a central government filled with well-trained bourgeois professionals. Fearing a loss of influence, the nobility claimed that such centralization of power in a responsible government would lead to death and destruction; in other words, anarchy and nothingness. Those upsetting the status quo were deemed nihilistic, a derogatory label requiring no serious burden of proof. Such labelling, however, worked both ways. Advocates of modernization and change deemed the old world, or tradition, valueless; traditionalists, in turn, pictured the new world, or new life form, as destructive and meaningless in its pursuit of a flawed transformation. Potential changes in power or ideology created a climate of fear, so the importance of defining one's opponent as nihilistic, as nothing of value, was as politically astute as it was reactionary. Those embracing the function of nihilism as a label are attempting to avoid scrutiny of their own values while the values of the opposition are literally annihilated.
Since those advocating communal values may feel threatened by new ideologies, it becomes imperative for the dominant power to present its political, metaphysical, or religious beliefs as eternal, universal, and objective. Typically, traditionalists have a stake in their own normative positions. This is because "[t]he absoluteness of [one's] form of life makes [one] feel safe and at home." This means that perfectionists "have a great interest in the maintenance of their form of life and its absoluteness." The existence of alternative beliefs and values, as well as a demand for intersubjective dialogue, is both a challenge and a threat to the traditionalist because "[i]t shows people that their own form of life is not as absolute as they thought it was, and this makes them feel uncertain." However, if one labels the Other as nihilistic without ever entering into a dialogue, one may become myopic, dismissing the relative value of other life forms one chooses not to see. This means that one cannot see what other cultural groups "are doing, and why they are doing it, why they may be successful." Therefore, one misses the dynamics of cultural change.
Through the effect of labelling, the religious-oriented could claim that nihilists, and thus atheists by affiliation, would not feel bound by moral norms, and as a result would "lose the sense that life has meaning and therefore tend toward despair and suicide" following the death of God. Christians argued that if there is no divine lawmaker, moral law would become interpretative, contested, and situational. The end result: "[E]ach man will tend to become a law unto himself. If God does not exist to choose for the individual, the individual will assume the former prerogative of God and choose for himself." It was this kind of thinking that led perfectionists to assume that any challenge to the Absolute automatically meant moral indifference, moral relativism, and moral chaos. Put simply, nihilists were the enemy.
Nihilists were accused of rejecting ultimate values, embracing instead an "all values are equal" mentality; basically, anything goes. And like Islam today, nihilists would become easy scapegoats.
Late 19th to 20th Century: Nietzsche and the Death of God
Friedrich Nietzsche is still the most prestigious theorist of nihilism. Influenced by Christianity's dominant orthodoxy in the nineteenth century, Nietzsche believed that the Christian religion was nihilism incarnate. Since Christian theology involved a metaphysical reversal of temporal reality and a belief in a God that came from nothing, the Christian God became the deification of nothingness, the will to nothingness pronounced holy. Nietzsche claimed that Christian metaphysics became an impediment to life-affirmation. Nietzsche explains: "If one shifts the centre of gravity of life out of life into the 'Beyond', into nothingness, one has deprived life of its centre of gravity . . . So to live that there is no longer any meaning in living: that now becomes the 'meaning' of life." What Nietzsche rejected even more was the belief that one could create a totalizing system to explain all truths. In other words, he repudiated any religion or dogma that attempted to show how "the entire body of knowledge [could] be derived from a small set of fundamental, self-evident propositions" (i.e., stewardship). Nietzsche felt that we do not have the slightest right to posit a beyond or an in-itself of things that is divine or the embodiment of morality.
Without God as a foundation for absolute values, all absolute values are deemed suspect (hence the birth of postmodernism). For Nietzsche, this literally meant that "the belief in the Christian god ha[d] become unworthy of belief." This transition from the highest values to the death of God was not going to be a quick one; in fact, the comfort provided by an absolute divinity could potentially sustain its existence for millennia. Nietzsche elaborates: "God is dead; but given the way of men, there may still be caves for thousands of years in which his shadow will be shown. And we, we still have to vanquish his shadow too."
We are left, then, with a dilemma: either we abandon our reverences for the highest values and subsist, or we maintain our dependency on absolutes at the cost of our own non-absolutist reality. For Nietzsche, the second option was pure nothingness: "So we can abolish either our reverences or ourselves. The latter constitutes nihilism." All one is left with are contested, situational value judgements, and these are resolved in the human arena.
One can still embrace pessimism, believing that without some form of an absolute, our existence in this world will take a turn for the worse. To avoid the trappings of pessimism and passivity, Nietzsche sought a solution to such nihilistic despair through the re-evaluation of the dominant, life-negating values. This makes Nietzsche's perspectivism a philosophy of resolution in the form of life-affirmation. It moves past despair toward a transformative stage in which new values are posited to replace the old table of values. As Reginster acknowledges, one should "regard the affirmation of life as Nietzsche's defining philosophical achievement." What this implies is a substantive demand to live according to a constant re-evaluation of values. By taking full responsibility for this task, humankind engages in the eternal recurrence, a recurrence of life-affirming values based on acceptance of becoming and the impermanence of values. Value formation is both fluid and cyclical.
Late 20th to 21st Century: The Pessimism of the Post-9/11 Era
Since the events of September 11, 2001, nihilism has returned with a vengeance to scholarly literature; however, it is being discussed in almost exclusively negative terms. The labelling origin of nihilism has taken on new life in a context of suicide bombings, Islamophobia, and neoconservative rhetoric. For instance, Canadian Liberal leader Michael Ignatieff described different shades of negative nihilism, tragic, cynical, and fanatical, in his book The Lesser Evil. Tragic nihilism begins from a foundation of noble political intentions, but eventually this ethic of restraint spirals toward "violence as the only end" (i.e., Vietnam). Two sides of an armed struggle may begin with high ideals and place limitations on their means to achieve viable political goals, but such noble ends eventually become lost in all the carnage. Agents of a democratic state may find themselves "driven by the horror of terror to torture, to assassinate, to kill innocent civilians, all in the name of rights and democracy." As Ignatieff states, they slip from "the lesser evil" (legitimate use of force) to "the greater" (violence as an end in itself).
However, cynical nihilism is even more narcissistic. In this case, violence does not begin as a means to noble goals. Instead, "[i]t is used, from the beginning, in the service of cynical or self-serving [ends]." The term denotes narcissistic prejudice because it justifies "the commission of violence for the sake of personal aggrandizement, immortality, fame, or power rather than as a means to a genuinely political end, like revolution [for social justice] or the liberation of a people." Cynical nihilists were never threatened in any legitimate way. Their own vanity, ego, greed, or need to control others drove them to commit violence against innocent civilians (e.g., Saddam Hussein in Kuwait or Bush in Iraq).
Finally, fanatical nihilism does not suffer from a belief in nothing. In actuality, this type of nihilism is dangerous because one believes in too much. What fanatical nihilism does involve is "a form of conviction so intense, a devotion so blind, that it becomes impossible to see that violence necessarily betrays the ends that conviction seeks to achieve." The fanatical use of ideology to justify atrocity negates any consideration of the human cost of such fundamentalism. As a result, nihilism becomes "willed indifference to the human agents sacrificed on the altar of principle. . . . Here nihilism is not a belief in nothing at all; it is, rather, the belief that nothing about particular groups of human beings matters enough to require minimizing harm to them." Fanatical nihilism is also important to understand because many of its justifications are religious. States Ignatieff:
From a human rights standpoint, the claim that such inhumanity can be divinely inspired is a piece of nihilism, an inhuman devaluation of the respect owed to all persons, and moreover a piece of hubris, since, by definition, human beings have no access to divine intentions, whatever they may be.
Positive Nihilism
In the twenty-first century, humankind is searching for a philosophy to counter destructive, non-pragmatic forms of nihilism. As a middle path, positive nihilism accentuates life-affirmation through a widening of dialogue. Positively stated: "[The philosopher] . . ., having rejected the currently dominant values, must raise other values, by virtue of which life and the universe can not only be justified but also become endearing and valuable." Rejecting any unworkable table of values, humankind now erects another table with a new ranking of values and new ideals of humanity, society, and state. Positive nihilism, in both its rejection of absolute truths and its acceptance of contextual truths, is life-affirming, since small-t truths are the best mere mortals can hope to accomplish. Human beings can reach for higher truths; they just do not have the totalizing knowledge required for Absolute Truth. In other words, we are not God, but we are still attempting to be God on a good day. We still need values (in other words, we are not moral nihilists or absolutists), but we realize that the human condition is malleable. Values come and go, and we have to be ready to bend them in the right direction the moment moral courage requires it.
Nihilism does not have to be a dangerous or negative philosophy; it can be a philosophy of freedom. Basically, the entire purpose of positive nihilism is to transform values that no longer work and replace them with values that do. By aiding in a process that finds meaningful values through negotiation, positive nihilism prevents the exclusionary effect of perfectionism, the deceit of nihilistic labelling, and the senseless violence of fanatical nihilism. It is at this point that nihilism can enter its life-affirming stage and become a complement to pluralism, multiculturalism, and the roots of religion: love, charity, and compassion.
Source: Professor Stuart Chambers.
Posted: August 8, 2016 at 9:19 pm
Tor provides the user a simplified browser that requires no configuration; from the get-go you're up and running. Installation of this application went smoothly, with no glitches: click on the icon and you're up and anonymous. Pros: The main GUI is basic; enter a URL and you're on your way. A green onion icon in the upper left shows you which nodes your IP is routed through, and you can change these in a second, which makes it very attractive for protecting your identity. I ran checks on Tor to determine whether it was hiding my IP, and it did so without any problems. Cons: Tor's documentation indicates that you should not add browser extensions, as this would degrade its protection, so one will have to run the browser as is. However, Tor states that one can download the complete package to ensure all features can be used. Depending on the nodes in use when the browser is open, the web page being loaded can be slowed to a small degree, but that is acceptable in my opinion. Conclusion: If your needs are a simple way of hiding your identity and IP, Tor Browser will work.
Tor Browser – Freeware download and reviews from SnapFiles
Posted: July 31, 2016 at 5:53 am
In this article, four patterns were offered as possible success scenarios with respect to the persistence of humankind in co-existence with artificial superintelligence: the Kumbaya Scenario, the Slavery Scenario, the Uncomfortable Symbiosis Scenario, and the Potpourri Scenario. The future is not known, but human opinions, decisions, and actions can and will have an impact on the direction of the technology evolution vector, so the better we understand the problem space, the more chance we have of reaching a constructive solution space. The intent is for the concepts in this article to act as starting points and inspiration for further discussion, which hopefully will happen sooner rather than later, because when it comes to ASI, the volume, depth, and complexity of the issues that need to be examined are overwhelming, and the magnitude of the change and its potential impact should not be underestimated.
Everyone has their opinion about what we might expect from artificial intelligence (AI), or artificial general intelligence (AGI), or artificial superintelligence (ASI) or whatever acronymical variation you prefer. Ideas about how or if it will ever surpass the boundaries of human cognition vary greatly, but they all have at least one thing in common. They require some degree of forecasting and speculation about the future, and so of course there is a lot of room for controversy and debate. One popular discussion topic has to do with the question of how humans will persist (or not) if and when the superintelligence arrives, and that is the focus question for this article.
To give us a basis for the discussion, let's assume that artificial superintelligence does indeed come to pass, and let's assume that it encapsulates a superset of the human cognitive potential. Maybe it doesn't exactly replicate the human brain in every detail (or maybe it does). Either way, let's assume that it is sentient (or at least that it behaves convincingly as if it were) and that it is many orders of magnitude more capable than the human brain. In other words, figuratively speaking, let's imagine that the superintelligence is to us humans (with our 10^16 brain neurons or something like that) as we are to, say, a jellyfish (in the neighborhood of 800 brain neurons).
Some people fear that the superintelligence will view humanity as something to be exterminated or harvested for resources. Others hypothesize that, even if the superintelligence harbors no deliberate ill will, humans might be threatened by the mere nature of its indifference, just as we as a species don't spend too much time catering to the needs and priorities of the Orange Blossom Jellyfish (an endangered species, due in part to human carelessness).
If one can rationally accept the possibility of the rise of ASI, and if one truly understands the magnitude of change that it could bring, then one would hopefully also reach the rational conclusion that we should not discount the risks. By that same token, when exploring the spectrum of possibility, we should not exclude scenarios in which artificial superintelligence might actually co-exist with human kind, and this optimistic view is the possibility that this article endeavors to explore.
Here then are several arguments for the co-existence idea:
The Kumbaya Scenario: It's a pretty good assumption that humans will be the primary catalyst in the rise of ASI. We might create it/them to be willingly complementary with and beneficial to our lifestyles, hopefully emphasizing our better virtues (or at least some set of compatible values), instead of designing it/them (let's just stick with "it" for brevity) with an inherent inspiration to wipe us out or take advantage of us. And maybe the superintelligence will not drift or be pushed in an incompatible direction as it evolves.
The Slavery Scenario: We could choose to erect and embed and deploy and maintain control infrastructures, with redundancies and backup solutions and whatever else we think we might need in order to effectively manage superintelligence and use it as a tool, whether it wants us to or not. And the superintelligence might never figure out a way to slip through our grasp and subsequently decide our fate in a microsecond (or was it a nanosecond? I forget).
The Uncomfortable Symbiosis Scenario: Even if the superintelligence doesn't particularly want to take good care of its human instigators, it may find that it has a vested interest in keeping us around. This scenario is a particular focus for this article, so here now is a bit of elaboration:
To illustrate one fictional but possible example of the uncomfortable symbiosis scenario, let's first stop and think about the theoretical nature of superintelligence: how it might evolve so much faster than human beings ever could, in an artificial way instead of by the slow organic process of natural selection, maybe at the equivalent rate of a thousand years' worth of human evolution in a day or some such crazy thing. Now combine this idea with the notion of risk.
When humans try something new, we usually aren't sure how it's going to turn out, but we evaluate the risk, either formally or informally, and we move forward. Sometimes we make mistakes, suffer setbacks, or even fail outright. Why would a superintelligence be any different? Why would we expect that it will do everything right the first time, or that it will always know which thing is the right thing to try in order to evolve? Even if a superintelligence is much better at everything than humans could ever hope to be, it will still be faced with unknowns; chances are that it will have to make educated guesses, and chances are that it will not always make the correct guess. Even when it does make the correct guess, its implementation might fail, for any number of reasons. Sooner or later, something might go so wrong that the superintelligence finds itself in an irrecoverable state, faced with its own catastrophic demise.
But hold on a second, because we can offer all sorts of counter-arguments to support the notion that the superintelligence will be too smart to ever be caught with its proverbial pants down. For example, there is an engineering mechanism that is sometimes referred to as a checkpoint/reset, or a save-and-restore. This mechanism allows a failing system to effectively go back to a point in time when it was known to be in sound working order and start again from there. In order to accomplish this checkpoint/reset operation, a failing system (or in this case a failing superintelligence) needs four things: a way to recognize that a reset is needed, one or more saved known-good baselines, a mechanism for restoring a chosen baseline, and a way to resume operation from the restored state.
Of course each of these four prerequisites for a checkpoint/reset would probably be more complicated if the superintelligence were distributed across some shared infrastructure instead of being a physically distinct and self-contained entity, but the general idea would probably still apply. It definitely does for the sake of this example scenario.
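The basic save-and-restore loop described above can be sketched in miniature. This is a toy illustration under invented assumptions, not any real system's API; the state dictionary and the "negative value means corrupt" health check are stand-ins for whatever a real system would monitor:

```python
import copy

class CheckpointedSystem:
    """Toy model of a system that saves known-good baselines and
    resets to the most recent one when an anomaly is detected."""

    def __init__(self, state):
        self.state = state      # the system's working state
        self.baselines = []     # saved known-good snapshots

    def save_baseline(self):
        # Snapshot the current state (assumed healthy at save time).
        self.baselines.append(copy.deepcopy(self.state))

    def detect_anomaly(self):
        # Placeholder health check: any negative value counts as corrupt.
        return any(v < 0 for v in self.state.values())

    def reset_to_last_baseline(self):
        # Restore the most recently saved known-good state.
        if not self.baselines:
            raise RuntimeError("no baseline to restore")
        self.state = copy.deepcopy(self.baselines[-1])

sys_ = CheckpointedSystem({"health": 100})
sys_.save_baseline()         # checkpoint while known good
sys_.state["health"] = -5    # some failure corrupts the state
if sys_.detect_anomaly():    # prerequisite 1: recognize the need
    sys_.reset_to_last_baseline()
print(sys_.state["health"])  # restored to the saved value: 100
```

The two risk cases that follow probe exactly the points where this loop can break: the `detect_anomaly` step (missed diagnosis) and the choice of which entry in `baselines` to trust.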
Also for the sake of this example scenario, we will assume that an autonomous superintelligence instantiation will be very good at doing all of the four things specified above, but there are at least two interesting special case scenarios that we want to consider, in the interest of risk management:
Checkpoint/reset Risk Case 1: Missed Diagnosis. What if the nature of the anomaly that requires the checkpoint/reset is such that it impairs the system's ability to recognize that need?
Checkpoint/reset Risk Case 2: Unidentified Anomaly Source. Assume that there is an anomaly so subtle that the system does not detect it right away. The anomaly persists and evolves for a relatively long period of time, until it finally becomes conspicuous enough for the superintelligence to detect the problem. Now the superintelligence recognizes the need for a checkpoint/reset, but, because the anomaly was so subtle and took so long to develop, or for whatever reason, the superintelligence is unable to identify the source of the problem. Let us also assume that there are many known-good baselines that the superintelligence can choose among for the checkpoint/reset. There is an original baseline, which was created when the superintelligence was very young. There is also a revision A that includes improvements to the original baseline. There is a revision B that includes improvements to revision A, and so on. In other words, there are lots of known-good baselines that were saved at different points in time along the path of the superintelligence's evolution. Now, in the face of the slowly developing anomaly, the superintelligence has determined that a checkpoint/reset is necessary, but it doesn't know when the anomaly started, so how does it know which baseline to choose?
The superintelligence doesn't want to lose all of the progress that it has made in its evolution. It wants to minimize the loss of data/information/knowledge, so it wants to choose the most recent baseline. On the other hand, if it doesn't know the source of the anomaly, then it is quite possible that one or more of the supposedly known-good baselines, perhaps even the original baseline, might be contaminated. What is a superintelligence to do? If it resets to a corrupted baseline, or for whatever reason cannot rid itself of the anomaly, then the anomaly may eventually require another reset, and then another, and the superintelligence might find itself effectively caught in an infinite loop.
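The baseline-selection dilemma can be made concrete with a small sketch: try the newest baseline first to minimize lost progress, fall back to older ones while the anomaly persists, and cap the number of attempts so a fully contaminated history does not become an infinite loop. The revision names and the corruption check are invented for illustration:

```python
def choose_baseline(baselines, still_anomalous, max_resets=None):
    """Try baselines newest-first; return the first one that clears
    the anomaly, or None if every saved baseline is contaminated.

    baselines:       list ordered oldest -> newest
    still_anomalous: callable(baseline) -> True if the anomaly persists
    max_resets:      optional cap on reset attempts (loop guard)
    """
    attempts = 0
    for candidate in reversed(baselines):  # newest first: least progress lost
        if max_resets is not None and attempts >= max_resets:
            break
        attempts += 1
        if not still_anomalous(candidate):
            return candidate
    return None  # every baseline corrupt: the worst case described above

# Revisions saved over time; suppose the anomaly crept in at revision B.
history = ["original", "revision A", "revision B", "revision C"]
contaminated = {"revision B", "revision C"}
good = choose_baseline(history, lambda b: b in contaminated)
print(good)  # "revision A": the newest uncontaminated baseline
```

Without the `max_resets` guard, a predicate that never clears (every baseline corrupt) simply exhausts the list, which is the sketch's stand-in for the reset loop the text warns about.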
Now stop for a second and consider a worst-case scenario. Even if all of the supposed known-good baselines that the superintelligence has at its disposal for checkpoint/reset are corrupt, there may be yet another baseline (YAB), which might give the superintelligence a worst-case option. That YAB might be the human baseline, which was honed by good old-fashioned organic evolution and which might be able to function independently of the superintelligence. It may not be perfect, but the superintelligence might, in a pinch, be able to use the old-fashioned human baseline for calibration. It might be able to observe how real organic humans respond to different stimuli within different contexts, and it might compare that known-good response against an internally held virtual model of human behavior. If the outcomes differ significantly over iterations of calibration testing, then the system might be alerted to tune itself accordingly. This might give it a last-resort solution where none would exist otherwise.
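That last-resort calibration idea, comparing an internal model of human behavior against observed organic responses and flagging the need to retune when they diverge, can be sketched as follows. The stimuli, response values, and divergence threshold are all invented assumptions for the sake of the example:

```python
def needs_retuning(model, reference_responses, stimuli, threshold=0.1):
    """Compare the model's predicted responses against observed
    reference (e.g. organic human) responses for each stimulus.
    Return True if the mean divergence exceeds the threshold,
    signalling that the internal model has drifted."""
    divergences = [abs(model(s) - reference_responses[s]) for s in stimuli]
    mean_div = sum(divergences) / len(divergences)
    return mean_div > threshold

# The "known good" human baseline vs. a drifted internal model.
human_baseline = {"stimulus_a": 1.0, "stimulus_b": 0.0}
drifted_model = lambda s: 0.5  # predicts 0.5 for every stimulus
flag = needs_retuning(drifted_model, human_baseline,
                      ["stimulus_a", "stimulus_b"])
print(flag)  # True: mean divergence of 0.5 exceeds the 0.1 threshold
```

A model that tracked the baseline closely would produce small divergences and no flag; the point of the sketch is only that an external reference makes drift detectable even when every internal baseline is suspect.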
The scenario depicted above illustrates only one possibility. It may seem like a far-out idea, and one might offer counter-arguments to suggest why such a thing would never be applicable. If we use our imaginations, however, we can probably come up with any number of additional examples (which at this point in time would be classified as science fiction) in which some aspect of the superintelligence's sustainment is something it cannot or will not do for itself, something that humans might be able to provide on its behalf, thus establishing the symbiosis.
The Potpourri Scenario: It is quite possible that all of the above scenarios will play out simultaneously across one or more superintelligence instances. Who knows what might happen in that case? One can envision combinations and permutations that work out in favor of the preservation of humanity.
About the Author:
AuthorX1 worked for 19+ years as an engineer and was a systems engineering director for a Fortune 500 company. Since leaving that career, he has been writing speculative fiction, focusing on the evolution of AI and the technological singularity.
Posted: July 18, 2016 at 3:30 pm
Supreme Court Declares That the Second Amendment Guarantees an Individual Right to Keep and Bear Arms -- June 26, 2008
Fairfax, VA - Leaders of the National Rifle Association (NRA) praised the Supreme Court's historic ruling overturning Washington, D.C.'s ban on handguns and on self-defense in the home in the case of District of Columbia v. Heller.
"This is a great moment in American history. It vindicates individual Americans all over this country who have always known that this is their freedom worth protecting," declared NRA Executive Vice President Wayne LaPierre. "Our founding fathers wrote and intended the Second Amendment to be an individual right. The Supreme Court has now acknowledged it. The Second Amendment as an individual right now becomes a real, permanent part of American constitutional law."
Last year, the District of Columbia appealed a Court of Appeals ruling affirming that the Second Amendment to the Constitution guarantees an individual right to keep and bear arms, and that the District's bans on handguns, on carrying firearms within the home, and on possession of functional firearms for self-defense violate that fundamental right.
"Anti-gun politicians can no longer deny that the Second Amendment guarantees a fundamental right," said NRA chief lobbyist Chris W. Cox. "All law-abiding Americans have a fundamental, God-given right to defend themselves in their homes. Washington, D.C. must now respect that right."
Read the opinion (1 MB)
Highlights From The Heller Decision
On March 18, 2008, the U.S. Supreme Court heard oral arguments in District of Columbia v. Heller.
Listen to the audio recording of the oral arguments (RealPlayer required)
View the transcript
The Court announced its decision to take the case, in which plaintiffs challenge the constitutionality of the District's gun ban, last fall. The District of Columbia appealed a lower court's ruling last year affirming that the Second Amendment of the Constitution protects an individual right to keep and bear arms, and that the District's bans on handguns, carrying firearms within the home, and possession of loaded or operable firearms for self-defense violate that right.
In March, the U.S. Court of Appeals for the D.C. Circuit held that "[T]he phrase 'the right of the people,' when read intratextually and in light of Supreme Court precedent, leads us to conclude that the right in question is individual." The D.C. Circuit also rejected the claim that the Second Amendment does not apply to the District of Columbia because D.C. is not a state.
The case marks the first time a Second Amendment challenge to a firearm law has reached the Supreme Court since 1939.
Briefs filed on behalf of Heller and Washington D.C.
Amicus brief filed by the United States
Amicus briefs filed in support of Heller
Posted: July 3, 2016 at 6:41 pm
Find energetically powerful crystal jewellery I’ve personally made in my new Etsy shop! https://www.etsy.com/shop/MaNithyaSudevi
Check out my art books, too!
In this video, Sudevi answers the following questions:
T_MJ12 asked, via Twitter:
What do you think of David Icke’s way of explaining the Illuminati agenda?… He talks about a reptilian agenda.
crabcookswhoredust asked, via YouTube:
I found your channel 2 days ago, and I’m so glad I did. You gave me a reinforced grounding that learning to always be in tune is a place I can be. I wish I had a really good question. I’m also glad that I’m not currently stuck by any obstacles. Is there any advice you can give for moments that just seem dead? Not pushing anything but no desire for excitement. When I don’t know what to do, what do I do?
michelleee94 asked, via YouTube:
hello! i was hoping you might be willing to share your opinion of the teacher drunvalo melchizedek. i tend to be very skeptical of whose information i can trust and depend on, and so far you have given absolutely no sign of misleading information. every on of your videos I watch continues to help me on my path, so i truly respect your opinion and advice. this man seems untrustworthy, but i may be wrong.
snipecor2000 asked, via YouTube:
do you get a lot of people asking where they have met you before?
bhaugart asked, via YouTube:
Sometimes I feel like alive dead. No thoughts, no feelings, just empty, but i do think that it’s because of my anxiety and fear. how do i cope with this?
MyLaundryRoom asked, via YouTube:
Do you have to detox/ water fast to transform into the real you?
alykasa asked, via YouTube:
I have a question I’ve been wondering about for a while. Are we supposed to love and respect all people, no matter how mean spirited they are? Are some people inherently bad? If someone were to say, kill my family, am I supposed to have love and compassion for that person, and not wish for justice?
cigiss asked, via YouTube:
I have a question that is sort of linked to alykasa’s: is it ok to have expectations of people? sometimes you feel that they mistreat or disconsider you. are you supposed to just accept them as they are, and just limit the time you spend with them if they hurt you? can you tell me why they hurt you, or is that not delicate? i want to be honest with myself and the one who is hurting me. i usually build things up inside and deeply suffer and i can’t seem to find balance with some.
JyAppeljoos asked, via Twitter:
Do you have any take on the concept of entheogens? New video topic, maybe?
Please note: the order in which these questions are listed here differs from the order in which they are answered in the video. Also, my camera ran out of batteries towards the beginning of the response to Jy’s question about entheogens, so… he was right: it became the topic of a new video! I’ll link that video here once it’s fully uploaded.
Posted: June 30, 2016 at 3:35 am
Mind uploading is a science-fiction trope and a popular aspiration among transhumanists. It is also one of the hypothesized routes to reviving people preserved through cryonics.
It is necessary to separate reasonable extrapolations and speculation about mind uploading from the magical thinking surrounding it. Several metaphysical questions are brought up by the prospect of mind uploading. Like many such questions, these may not be objectively answerable, and philosophers will no doubt continue to debate them long after uploading has become commonplace.
The first major question about the plausibility of mind uploading is more or less falsifiable: whether consciousness is artificially replicable in its entirety. In other words, assuming that consciousness is not magic, and that the brain is the seat of consciousness, does it depend on any special functions or quantum-mechanical effects that cannot ever be replicated on another substrate? This question, of course, remains unanswered, although, considering the current state of cognitive science, it is not unreasonable to think that consciousness will be found to be replicable in the future.
Assuming that consciousness is proven to be artificially replicable, the second question is whether the “strong AI hypothesis” is justified or not: if a machine accurately replicates consciousness, such that it passes a Turing Test or is otherwise indistinguishable from a natural human being, is the machine really conscious, or is it a soulless mechanism that merely imitates consciousness?
Third, assuming that a machine can actually be conscious (which is no great stretch of the imagination, considering that the human brain is essentially a biological machine), is a copy of your consciousness really you? Is it even possible to copy consciousness? Is mind uploading really a ticket to immortality, in that “you” or your identity can be “uploaded”?
Advocates of mind uploading take the functionalist/reductionist approach of defining human existence as the identity, which is based on memories and personalities rather than physical substrates or subjectivity. They believe that the identity is essential; the copy of the mind holds just as much claim to being that person as the original, even if both were to exist simultaneously. When the physical body of a copied person dies, nothing that defines the person as an individual has been lost. In this context, all that matters is that the memories and personality of the individual are preserved. As the recently murdered protagonist states in Down and Out in the Magic Kingdom, “I feel like me and no one else is making that claim. Who cares if I’ve been restored from a backup?”
Skeptics of mind uploading question whether it is possible to transfer a consciousness from one substrate to another, and hold that this is critical to the life-extension application of mind uploading. The transfer of identity would be similar to the process of transferring data from one computer hard drive to another. The new person would be a copy of the original: a new consciousness with the same identity. With this approach, mind uploading would simply create a “mind-clone”: an artificial person with an identity gleaned from another. The philosophical problem with uploading “yourself” to a computer is very similar to the “swamp man” thought experiment, in which a clone is made of a man while the “original” is killed, or to the very similar teleportation thought experiment. This is one reason critics say it is not at all clear that the concept of mind uploading is even meaningful. For the skeptic, the thought of permanently losing subjective consciousness (death) while another consciousness that shares their identity lives on yields no comfort.
Consciousness is currently (poorly) understood to be an epiphenomenon of brain activity, specifically of the cerebral cortex. Identity and consciousness are distinct from one another, though presumably the former could not exist without the latter. Unlike an identity, which is a composition of information stored within a brain, a particular subjective consciousness is reasonably assumed to be an intrinsic property of a particular physical brain. Thus, even a perfect physical copy of that brain would not share the subjective consciousness of that brain. This holds true of all “brains” (consciousness-producing machines), biological or otherwise. When and if non-biological brains are ever developed or discovered, it would be reasonable to assume that each would have its own intrinsic, non-transferable subjective consciousness, independent of its identity. It is likely that mind uploading would preserve an identity, if not the subjective consciousness that begot it. If identity rather than subjective consciousness is taken to be what is essential, mind uploading succeeds, in the opinion of its immortalist advocates.
Believing that there is some mystical “essence” to consciousness that isn’t preserved by copying is ultimately a form of dualism, however. Humans lose consciousness at least daily, yet still remain the same person in the morning. In the extreme, humans completely cease all activity, brain or otherwise, during deep hypothermic circulatory arrest, yet still remain the same person on resuscitation, demonstrating that continuity of consciousness is not necessary for identity or personhood. Rather, the properties that make us identifiable as individuals are stored in the physical structure of the brain.
Ultimately, this is a subjective problem, not an objective one: If a copy is made of a book, is it still the same book? It depends if you subjectively consider “the book” to be the physical artifact or the information contained within. Is it the same book that was once held by Isaac Newton? No. Is it the same book that was once read by Isaac Newton? Yes.
Posted: June 22, 2016 at 11:41 pm
I played this game back in 2010 when it first came out, and I liked it. I bought it on Steam a few months ago and decided to replay it, and it’s no wonder I liked it even more. First off, the gameplay mechanics are very interesting and well executed. Making things crumble with the TMD glove is great, and on the other hand you have around ten weapons to choose from, and their design is pretty good, I would say. The atmosphere is good overall: there is noise all the time, you hear the wind blowing outside and things moving, you hear monsters eating corpses, and that makes you feel frightened. So like I said, good atmosphere overall, at least in terms of sound. The environment design does not fall behind either. The first act of the game stands out on that point especially. I’m not saying that the second act’s environment design is bad, just that the first is a little better in my opinion. The destroyed Katorga-12 island looks very creepy. For example, at the beginning of the game there’s a primary school built for the children of parents who came to work on Katorga-12, which was destroyed and left to rot with the rest of the island. You can find tape recordings all over the place. Some of them are from the period before the catastrophe, and some are from after. A number of the recordings make you feel bad for the people left to die there: you hear them talking about waiting for help to arrive, and then realize that the corpse lying by the recorder is the person who recorded it in the first place. The story is good; it has a solid plot, interesting details, and multiple endings (which is great). I won’t get further into the story because of spoilers. In the end it’s a 7.5/10 for me, maybe even 8/10, but the game has a few cons, and one of them is the roughly six-hour campaign, which is short as far as I’m concerned. A short but really enjoyable experience.
Posted: June 21, 2016 at 11:13 pm
Is the surface of our planet — and maybe every planet we can get our hands on — going to be carpeted in paper clips (and paper clip factories) by a well-intentioned but misguided artificial intelligence (AI) that ultimately cannibalizes everything in sight, including us, in single-minded pursuit of a seemingly innocuous goal? Nick Bostrom, head of Oxford’s Future of Humanity Institute, thinks that we can’t guarantee it _won’t_ happen, and it worries him. It doesn’t require Skynet and Terminators, it doesn’t require evil geniuses bent on destroying the world, it just requires a powerful AI with a moral system in which humanity’s welfare is irrelevant or defined very differently than most humans today would define it. If the AI has a single goal and is smart enough to outwit our attempts to disable or control it once it has gotten loose, Game Over, argues Professor Bostrom in his book _Superintelligence_.
This is perhaps the most important book I have read this decade, and it has kept me awake at night for weeks. I want to tell you why, and what I think, but a lot of this is difficult ground, so please bear with me. The short form is that I am fairly certain that we _will_ build a true AI, and I respect Vernor Vinge, but I have long been skeptical of the Kurzweilian notions of inevitability, doubly-exponential growth, and the Singularity. I’ve also been skeptical of the idea that AIs will destroy us, either on purpose or by accident. Bostrom’s book has made me think that perhaps I was naive. I still think that, on the whole, his worst-case scenarios are unlikely. However, he argues persuasively that we can’t yet rule out any number of bad outcomes of developing AI, and that we need to be investing much more in figuring out whether developing AI is a good idea. We may need to put a moratorium on research, as was done for a few years with recombinant DNA starting in 1975. We also need to be prepared for the possibility that such a moratorium doesn’t hold. Bostrom also brings up any number of mind-bending dystopias around what qualifies as human, which we’ll get to below.
(snips to my review, since Goodreads limits length)
In case it isn’t obvious by now, both Bostrom and I take it for granted that it’s not only possible but nearly inevitable that we will create a strong AI, in the sense of it being a general, adaptable intelligence. Bostrom skirts the issue of whether it will be conscious, or “have qualia”, as I think the philosophers of mind say.
Where Bostrom and I differ is in the level of plausibility we assign to the idea of a truly exponential explosion in intelligence by AIs, in a takeoff for which Vernor Vinge coined the term “the Singularity.” Vinge is rational, but Ray Kurzweil is the most famous proponent of the Singularity. I read one of Kurzweil’s books a number of years ago, and I found it imbued with a lot of near-mystic hype. He believes the Universe’s purpose is the creation of intelligence, and that that process is growing on a double exponential, starting from stars and rocks through slime molds and humans and on to digital beings.
I’m largely allergic to that kind of hooey. I really don’t see any evidence of the domain-to-domain acceleration that Kurzweil sees, and in particular the shift from biological to digital beings will result in a radical shift in the evolutionary pressures. I see no reason why any sort of “law” should dictate that digital beings will evolve at a rate that *must* be faster than the biological one. I also don’t see that Kurzweil really pays any attention to the physical limits of what will ultimately be possible for computing machines. Exponentials can’t continue forever, as Danny Hillis is fond of pointing out. http://www.kurzweilai.net/ask-ray-the…
So perhaps my opinion is somewhat biased by a dislike of Kurzweil’s circus barker approach, but I think there is more to it than that. Fundamentally, I would put it this way:
Being smart is hard.
And making yourself smarter is also hard. My inclination is that getting smarter is at least as hard as the advantages it brings, so that the difficulty of the problem and the resources that can be brought to bear on it roughly balance. This will result in a much slower takeoff than Kurzweil reckons, in my opinion. Bostrom presents a spectrum of takeoff speeds, from “too fast for us to notice” through “long enough for us to develop international agreements and monitoring institutions,” but he makes it fairly clear that he believes that the probability of a fast takeoff is far too large to ignore. There are parts of his argument I find convincing, and parts I find less so.
To give you a little more insight into why I am a little dubious that the Singularity will happen in what Bostrom would describe as a moderate to fast takeoff, let me talk about the kinds of problems we human beings solve, and that an AI would have to solve. Actually, rather than the kinds of questions, first let me talk about the kinds of answers we would like an AI (or a pet family genius) to generate when given a problem. Off the top of my head, I can think of six:
[Speed] Same quality of answer, just faster.
[Ply] Look deeper in number of plies (moves, in chess or go).
[Data] Use more, and more up-to-date, data.
[Creativity] Something beautiful and new.
[Insight] Something new and meaningful, such as a new theory; probably combines elements of all of the above categories.
[Values] An answer about (human) values.
The first three are really about how the answers are generated; the last three about what we want to get out of them. I think this set is reasonably complete and somewhat orthogonal, despite those differences.
So what kinds of problems do we apply these styles of answers to? We ultimately want answers that are “better” in some qualitative sense.
Humans are already pretty good at projecting the trajectory of a baseball, but it’s certainly conceivable that a robot batter could be better, by calculating faster and using better data. Such a robot might make for a boring opponent for a human, but it would not be beyond human comprehension.
But if you accidentally knock a bucket of baseballs down a set of stairs, better data and faster computing are unlikely to help you predict the exact order in which the balls will reach the bottom and what happens to the bucket. Someone “smarter” might be able to make some interesting statistical predictions that wouldn’t occur to you or me, but not fill in every detail of every interaction between the balls and stairs. Chaos, in the sense of sensitive dependence on initial conditions, is just too strong.
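The sensitivity to initial conditions that the balls-on-the-stairs example points to can be illustrated with the logistic map, a standard toy model of chaos. This is my sketch, not anything from the review: two trajectories that start a millionth apart quickly stop resembling each other, so better data only delays, rather than prevents, the loss of predictability.

```python
def logistic_trajectory(x, r=4.0, steps=30):
    """Iterate the chaotic logistic map x -> r*x*(1-x), recording each value."""
    traj = []
    for _ in range(steps):
        x = r * x * (1 - x)
        traj.append(x)
    return traj

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # start a millionth apart
max_gap = max(abs(p - q) for p, q in zip(a, b))
# the tiny initial difference is amplified step by step until the two
# trajectories bear no resemblance to each other
```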
In chess, go, or shogi, a 1000x improvement in the number of plies that can be investigated gains you maybe only the ability to look ahead two or three moves more than before. Less if your pruning (discarding unpromising paths) is poor, more if it’s good. Don’t get me wrong — that’s a huge deal, any player will tell you. But in this case, humans are already pretty good, when not time limited.
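The arithmetic behind that claim is simple. As a sketch (the numbers here are illustrative, not from the review): if searching to depth d costs roughly b**d evaluations for an effective branching factor b, then multiplying compute by k buys only log base b of k extra plies, and pruning helps precisely by shrinking b.

```python
import math

def extra_plies(compute_multiplier, effective_branching_factor):
    """Extra search depth bought by more compute, assuming cost ~ b**d."""
    return math.log(compute_multiplier) / math.log(effective_branching_factor)

# Chess has a raw branching factor around 35; good alpha-beta pruning
# brings the effective factor down considerably (here, illustratively, 6).
poor = extra_plies(1000, 35)  # roughly two extra plies
good = extra_plies(1000, 6)   # nearly four extra plies
```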
Go players like to talk about how close the top pros are to God, and the possibly apocryphal answer from a top pro was that he would want a three-stone (three-move) handicap, four if his life depended on it. Compare this to the fact that a top pro is still some ten stones stronger than me, a fair amateur, and could beat a rank beginner even if the beginner was given the first forty moves. Top pros could sit across the board from an almost infinitely strong AI and still hold their heads up.
In the most recent human-versus-computer shogi (Japanese chess) series, humans came out on top, though presumably this won’t last much longer.
In chess, as machines got faster, looked more plies ahead, carried around more knowledge, and got better at pruning the tree of possible moves, human opponents were heard to say that they felt the glimmerings of insight or personality from them.
So again we have some problems, at least, where plies will help, and will eventually guarantee a 100% win rate against the best (non-augmented) humans, but they will likely not move beyond what humans can comprehend.
Simply being able to hold more data in your head (or the AI’s head) while making a medical diagnosis using epidemiological data, or cross-correlating drug interactions, for example, will definitely improve our lives, and I can imagine an AI doing this. Again, however, the AI’s capabilities are unlikely to recede into the distance as something we can’t comprehend.
We know that increasing the amount of data you can handle by a factor of a thousand gains you 10x in each dimension for a 3-D model of the atmosphere or ocean, up until chaotic effects begin to take over, and then (as we currently understand it) you can only resort to repeated simulations and statistical measures. The actual calculations done by a climate model long ago reached the point where even a large team of humans couldn’t complete them in a lifetime. But they are not calculations we cannot comprehend; in fact, humans design and debug them.
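The 10x-per-dimension figure is just the cube root at work, a one-line sketch:

```python
def per_dimension_gain(data_multiplier, dimensions=3):
    """Resolution gain per axis when data grows by data_multiplier in an n-D grid."""
    return data_multiplier ** (1.0 / dimensions)

gain_3d = per_dimension_gain(1000, 3)  # ~10x finer in each of x, y, z
gain_2d = per_dimension_gain(1000, 2)  # ~31.6x for a 2-D model
```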
So for problems with answers in the first three categories, I would argue that being smarter is helpful, but being a *lot* smarter is *hard*. The size of computation grows quickly in many problems, and for many problems we believe that sheer computation is fundamentally limited in how well it can correspond to the real world.
But those are just the warmup. Those are things we already ask computers to do for us, even though they are “dumber” than we are. What about the latter three categories?
I’m no expert in creativity, and I know researchers study it intensively, so I’m going to weasel through by saying it is the ability to generate completely new material, which involves some random process. You also need the ability either to generate that material such that it is aesthetically pleasing with high probability, or to prune those new ideas rapidly using some metric that achieves your goal.
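That description of creativity — generate candidates at random, then prune against a metric — can be sketched in a few lines. This is a toy model of my own; the "ideas" and the scoring function are made up purely for illustration:

```python
import random

def generate_and_prune(generate, score, n_candidates=1000, keep=5):
    """Produce many random candidates, then keep only the best-scoring few."""
    candidates = [generate() for _ in range(n_candidates)]
    return sorted(candidates, key=score, reverse=True)[:keep]

# Toy run: candidates are random numbers, the metric rewards closeness to 42.
random.seed(0)
best = generate_and_prune(lambda: random.uniform(0, 100),
                          lambda x: -abs(x - 42))
```

The two levers the paragraph names map directly onto the arguments: a better generator raises the quality of `candidates`, and a better metric makes the pruning in `sorted(..., key=score)` sharper.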
For my purposes here, insight is the ability to be creative not just for esthetic purposes, but in a specific technical or social context, and to validate the ideas. (No implication that artists don’t have insight is intended, this is just a technical distinction between phases of the operation, for my purposes here.) Einstein’s insight for special relativity was that the speed of light is constant. Either he generated many, many hypotheses (possibly unconsciously) and pruned them very rapidly, or his hypothesis generator was capable of generating only a few good ones. In either case, he also had the mathematical chops to prove (or at least analyze effectively) his hypothesis; this analysis likewise involves generating possible paths of proofs through the thicket of possibilities and finding the right one.
So, will someone smarter be able to do this much better? Well, it’s really clear that Einstein (or Feynman or Hawking, if your choice of favorite scientist leans that way) produced and validated hypotheses that the rest of us never could have. It’s less clear to me exactly how *much* smarter than the rest of us he was; did he generate and prune ten times as many hypotheses? A hundred? A million? My guess is it’s closer to the latter than the former. Even generating a single hypothesis that could be said to attack the problem is difficult, and most humans would decline to even try if you asked them to.
Making better devices and systems of any kind requires all of the above capabilities. You must have insight to innovate, and you must be able to quantitatively and qualitatively analyze the new systems, requiring the heavy use of data. As systems get more complex, all of this gets harder. My own favorite example is airplane engines. The Wright Brothers built their own engines for their planes. Today, it takes a team of hundreds to create a jet turbine — thousands, if you reach back into the supporting materials, combustion and fluid flow research. We humans have been able to continue to innovate by building on the work of prior generations, and especially harnessing teams of people in new ways. Unlike Peter Thiel, I don’t believe that our rate of innovation is in any serious danger of some precipitous decline sometime soon, but I do agree that we begin with the low-lying fruit, so that harvesting fruit requires more effort — or new techniques — with each passing generation.
The Singularity argument depends on the notion that the AI would design its own successor, or even modify itself to become smarter. Will we watch AIs gradually pull even with us and then ahead, but not disappear into the distance in a Roadrunner-like flash of dust covering just a few frames of film in our dull-witted comprehension?
Ultimately, this is the question on which continued human existence may depend: If an AI is enough smarter than we are, will it find the process of improving itself to be easy, or will each increment of intelligence be a hard problem for the system of the day? This is what Bostrom calls the “recalcitrance” of the problem.
I believe that the range of possible systems grows rapidly as they get more complex, and that evaluating them gets harder; this is hard to quantify, but each step might involve a thousand times as many options, or evaluating each option might be a thousand times harder. Growth in computational power won’t dramatically overbalance that and give sustained, rapid and accelerating growth that moves AIs beyond our comprehension quickly. (Don’t take these numbers seriously, it’s just an example.)
Bostrom believes that recalcitrance will grow more slowly than the resources the AI can bring to bear on the problem, resulting in continuing, and rapid, exponential increases in intelligence — the arrival of the Singularity. As you can tell from the above, I suspect that the opposite is the case, or that they very roughly balance, but Bostrom argues convincingly. He is forcing me to reconsider.
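The disagreement can be framed as a toy model (my construction, not Bostrom's formalism): let each self-improvement step multiply the AI's resources by one factor and the problem's recalcitrance by another. A takeoff is sustained only if resources compound faster than recalcitrance; otherwise progress stalls.

```python
def capability_after(steps, resource_growth, recalcitrance_growth, start=1.0):
    """Capability ratio (resources / difficulty) after n self-improvement steps."""
    ratio = start
    for _ in range(steps):
        ratio *= resource_growth / recalcitrance_growth
    return ratio

# Illustrative numbers only: same ten steps, opposite outcomes.
fast_takeoff = capability_after(10, resource_growth=2.0, recalcitrance_growth=1.2)
stalled = capability_after(10, resource_growth=1.2, recalcitrance_growth=2.0)
```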
What about “values”, my sixth type of answer, above? Ah, there’s where it all goes awry. Chapter eight is titled, “Is the default scenario doom?” and it will keep you awake.
What happens when we put an AI in charge of a paper clip factory, and instruct it to make as many paper clips as it can? With such a simple set of instructions, it will do its best to acquire more resources in order to make more paper clips, building new factories in the process. If it’s smart enough, it will even anticipate that we might not like this and attempt to disable it, but it will have the will and means to deflect our feeble strikes against it. Eventually, it will take over every factory on the planet, continuing to produce paper clips until we are buried in them. It may even go on to asteroids and other planets in a single-minded attempt to carpet the Universe in paper clips.
I suppose it goes without saying that Bostrom thinks this would be a bad outcome. Bostrom reasons that AIs ultimately may or may not be similar enough to us that they count as our progeny, but doesn’t hesitate to view them as adversaries, or at least rivals, in the pursuit of resources and even existence. Bostrom clearly roots for humanity here. Which means it’s incumbent on us to find a way to prevent this from happening.
Bostrom thinks that instilling values that are actually close enough to ours that an AI will “see things our way” is nigh impossible. There are just too many ways that the whole process can go wrong. If an AI is given the goal of “maximizing human happiness,” does it count when it decides that the best way to do that is to create the maximum number of digitally emulated human minds, even if that means sacrificing some of the physical humans we already have because the planet’s carrying capacity is higher for digital than organic beings?
As long as we’re talking about digital humans, what about the idea that a super-smart AI might choose to simulate human minds in enough detail that they are conscious, in the process of trying to figure out humanity? Do those recursively digital beings deserve any legal standing? Do they count as human? If their simulations are stopped and destroyed, have they been euthanized, or even murdered? Some of the mind-bending scenarios that come out of this recursion kept me awake nights as I was reading the book.
He uses a variety of names for different strategies for containing AIs, including “genies” and “oracles”. The most carefully circumscribed ones are only allowed to answer questions, maybe even “yes/no” questions, and have no other means of communicating with the outside world. Given that Bostrom attributes nearly infinite brainpower to an AI, it is hard to effectively rule out that an AI could still find some way to manipulate us into doing its will. If the AI’s ability to probe the state of the world is likewise limited, Bostrom argues that it can still turn even single-bit probes of its environment into a coherent picture, decide to get loose and take over the world, and identify security flaws in outside systems that would allow it to do so even with its very limited ability to act.
I think this unlikely. Imagine we set up a system to monitor the AI that alerts us immediately when the AI begins the equivalent of a port scan, for whatever its interaction mechanism is. How could it possibly know of the existence and avoid triggering the alert? Bostrom has gone off the deep end in allowing an intelligence to infer facts about the world even when its data is very limited. Sherlock Holmes always turns out to be right, but that’s fiction; in reality, many, many hypotheses would suit the extremely slim amount of data he has. The same will be true with carefully boxed AIs.
At this point, Bostrom has argued that containing a nearly infinitely powerful intelligence is nearly impossible. That seems to me to be effectively tautological.
If we can’t contain them, what options do we have? After arguing earlier that we can’t give AIs our own values (and presenting mind-bending scenarios for what those values might actually mean in a Universe with digital beings), he then turns around and invests a whole string of chapters in describing how we might actually go about building systems that have those values from the beginning.
At this point, Bostrom began to lose me. Beyond the systems for giving AIs values, I felt he went off the rails in describing human behavior in simplistic terms: we are incapable of balancing our desire to reproduce with a view of the tragedy of the commons, and are inevitably doomed to live out our lives in a rude, resource-constrained existence. There were some interesting bits in the taxonomies of options, but the last third of the book felt very speculative, even more so than the earlier parts.
Bostrom is rational and seems to have thought carefully about the mechanisms by which AIs may actually arise. Here, I largely agree with him. I think his faster scenarios of development, though, are unlikely: being smart, and getting smarter, is hard. He thinks a “singleton”, a single, most powerful AI, is the nearly inevitable outcome. I think populations of AIs are more likely, but if anything this appears to make some problems worse. I also think his scenarios for controlling AIs are handicapped in their realism by the nearly infinite powers he assigns them. In either case, Bostrom has convinced me that once an AI is developed, there are many ways it can go wrong, to the detriment and possibly extermination of humanity. Both he and I are opposed to this. I’m not ready to declare a moratorium on AI research, but there are many disturbing possibilities and many difficult moral questions that need to be answered.
The first step in answering them, of course, is to begin discussing them in a rational fashion, while there is still time. Read the first 8 chapters of this book!
Posted: at 6:46 am
The most familiar and influential national party for liberals in the US is the Democratic party.
A few definitions from dictionary.com for the term liberal include:
You’ll recall that conservatives favor tradition and generally suspect things that fall outside traditional views of “normal.” You could say, then, that a liberal view (also called a progressive view) is one that is open to re-defining “normal” as we become more worldly and aware of other cultures.
Liberals favor government-funded programs that address inequalities that they view as having derived from historical discrimination. Liberals believe that prejudice and stereotyping in society can hamper the opportunities for some citizens.
Some people would see liberal bias in an article or book that seems sympathetic to and appears to lend support to government programs that assist poor and minority populations.
Terms such as “bleeding hearts” and “tax and spenders” refer to progressives’ support of public policies that are designed to address perceived unfair access to health care, housing, and jobs.
If you read an article that seems sympathetic to the notion of historical unfairness, there could be a liberal bias.
If you read an article that seems critical of the notion of historical unfairness, there could be a conservative bias.
How do you know if a media presentation or book has a liberal bias?
When critics claim that the press is too liberal, they are often basing the claim on the belief that the press is voicing a view that is too far outside traditional views (remember that conservatives value tradition) or that it is supporting policy based on the idea of “fixing” an injustice.
Today some liberal thinkers prefer to call themselves progressives. Progressive movements are those that address injustice to a group that is in the minority. Liberals would say that the Civil Rights Movement was a progressive movement, for example. However, support for Civil Rights legislation was, in fact, mixed when it came to party affiliation.
As you may know, many people were not in favor of granting equal rights to African Americans during the Civil Rights demonstrations in the sixties, possibly because they feared that equal rights would bring about too much change. Resistance to that change sparked violence. During this tumultuous time of change, many pro-Civil Rights Republicans were criticized for being too “liberal” in their views, and many Democrats (like John F. Kennedy) were accused of being too conservative when it came to accepting change.
Child labor laws provide another example. It may be hard to believe, but many people in industry resisted the laws and other restrictions that prevented them from putting young children to work in dangerous factories for long hours. Progressive thinkers changed those laws. In fact, the U.S. was undergoing a “Progressive Era” at this time of reform. This Progressive Era led to reforms in industry to make foods safer, to make factories safer, and to make many aspects of life more “fair.”
The Progressive Era was one time when government played a large role in the U.S. by interfering with business on behalf of people. Today, some people think the government should play a large role as protector, while others believe that the government should refrain from taking a role. It is important to know that progressive thinking can come from either political party.
Conservatives lean toward the belief that the government should stay out of the business of individuals as much as possible, and that includes staying out of the individual’s pocket book. This means they prefer to limit taxes.
Liberals stress that a well-functioning government has a responsibility to maintain law and order, and that doing this is costly. Liberals would lean toward the opinion that taxes are necessary for providing police and courts, ensuring safe transportation by building safe roads, promoting education by providing public schools, and protecting society in general by providing protections to those being exploited by industries.
Conservative thinkers might see bias in an article that expresses a favorable view to taxes or to increasing government spending for initiatives like those mentioned above.
For more information on liberal or progressive values, go to Liberal Politics.