Tag Archives: european

Trump rattles NATO with ‘obsolete’ blast – CNNPolitics.com

Posted: January 18, 2017 at 12:46 pm

Speaking in Brussels before a meeting of EU foreign ministers, German Foreign Minister Frank-Walter Steinmeier suggested the alliance was rattled by Trump’s remarks.

In a joint interview with the Times of London and the German publication Bild, Trump signaled that US foreign policy in a range of areas could be turned on its head. He suggested that sanctions imposed after Russia’s annexation of Crimea could be eased in return for a deal to reduce nuclear weapons, that German Chancellor Angela Merkel had pursued “catastrophic” policies on refugees, and that his son-in-law Jared Kushner could lead a Middle East peace effort.

Speaking in Berlin, Merkel shrugged off the interview, saying she would make no substantive comment before Trump was inaugurated.

NATO said it was “absolutely confident” that the US would remain “committed” to the organization.

Trump used the interview to restate his doubts about NATO. “I said a long time ago that NATO had problems,” he said in the interview.

“Number one it was obsolete, because it was designed many, many years ago.

“Number two the countries weren’t paying what they’re supposed to be paying,” he said, adding that this was unfair to the United States.

Steinmeier said he had spoken with NATO Secretary General Jens Stoltenberg, “who is concerned that President-elect Trump regards NATO as obsolete.”

He also noted Trump’s position was “in contradiction” to that of James Mattis, Trump’s nominee for defense secretary.

NATO spokesperson Oana Lungescu pushed back against the comments. “A strong NATO is good for the United States, just as it is for Europe.”

But Dmitry Peskov, the press spokesman for Russian President Vladimir Putin, agreed with Trump’s assessment of NATO, saying on Monday that “the systematic goal of this organization is confrontation.”

Trump suggested the sanctions imposed on Russia could be eased in return for a nuclear weapons deal.

“They have sanctions against Russia — let’s see if we can strike a few good deals with Russia. I think there should be less nuclear weapons and they have to be reduced significantly, that’s part of it.

“But there are these sanctions and Russia is suffering lots from it. But I think there are things, which lots of people can profit from.”

Responding to Trump’s comments Monday, the Kremlin said that “sanctions are not a question on our agenda, neither internally, nor in discussions with our international partners.”

“Do you know what? Jared is such a good guy; he will conclude an Israel agreement that no one else can do. You know, he is a natural talent, he is amazing, he is a natural talent,” Trump said, according to Bild.

Trump set his sights on German Chancellor Angela Merkel, calling her “by far the most important leader” in Europe while tearing into her immigration policies, labeling them “catastrophic.”

“I have great respect for her, I felt she was a great leader, I think she made one very catastrophic mistake and that was taking all these illegals and taking all these people where ever they come from and nobody really knows where they come from.”

Merkel, who will run for a fourth term in elections this year, has stood firm on Germany’s position of accepting nearly all asylum seekers found to be legitimate refugees.

Germany took in more than one million refugees in 2015, making it the most open country in Europe to asylum seekers.

Speaking at a joint press conference with New Zealand Prime Minister Bill English on Monday, Merkel said she would not comment before Trump’s inauguration.

“We have known what his position is for some time, and my position is also known,” she said.

An enthusiastic Trump praised Britain’s impending exit from the European Union as being “so smart.”

“I think Brexit is going to end up being a great thing,” Trump said to his interviewer from the Times, Michael Gove, a former leader of the Brexit campaign and a sitting member of the UK Parliament.

Asked why he thought the UK’s Brexit campaign was successful, Trump blamed loose borders and concerns about the effects of immigration. He also tied it to US security concerns.

“Countries want their own identity. The UK wanted its own identity. I do believe this, if they hadn’t been forced to take in all of the refugees so many, with all the problems that entails, you wouldn’t have a Brexit… it probably could’ve worked out.”

According to the Times, Trump would offer Britain a “fair” trade deal with America within weeks of taking office. (In fact, Britain will not be able to begin negotiating new trade deals until the Brexit process is complete).

Trump said he will immediately take action to tighten US immigration restrictions once he’s sworn in.

“This country we’re going to go very strong borders. From the day I get in. One of the first orders I’m going to sign. Day one.”

Trump reiterated plans to implement what he calls “extreme vetting” of people from the Muslim world, according to a Bild transcript translated into English by CNN.

“There will be extreme security vetting, it won’t be like it is now. We don’t have any proper security controls for people entering our country, they don’t really exist at the moment, like it has been the case in your country at least in the past.”

“That could happen. But we will see,” he said.

Trump also touched on the Iran nuclear agreement but declined to say whether he would demand changes to it.

“Well I don’t want to say what I’m gonna do with the Iran deal. I just don’t want to play the cards. I mean, look, I’m not a politician, I don’t go out and say, ‘I’m gonna do this,’ I’m gonna do, I gotta do what I gotta do . . . But I’m not happy with the Iran deal, I think it’s one of the worst deals ever made,” he said, according to the Times.

Republican members of Congress have sharply criticized the deal since it was announced in 2015.

Trump said he doesn’t intend to lay off Twitter once he’s in the Oval Office and will keep his @realDonaldTrump account, according to the Times of London.

“I thought I’d do less of it, but I’m covered so dishonestly by the press, so dishonestly, that I can put out Twitter, and it’s not 140, it’s now 280, I can go bing bing bing . . . and they put it on and as soon as I tweet it out this morning on television, Fox: ‘Donald Trump, we have breaking news.’”

CNN’s Nadine Schmidt, Gregory Krieg, Sara Mazloumsaki and Laura Goehler contributed to this report.


Income inequality in the United States – Wikipedia

Posted: January 10, 2017 at 2:58 am

Income inequality in the United States has increased significantly since the 1970s after several decades of stability, meaning the share of the nation’s income received by higher income households has increased. This trend is evident with income measured both before taxes (market income) as well as after taxes and transfer payments. Income inequality has fluctuated considerably since measurements began around 1915, moving in an arc between peaks in the 1920s and 2000s, with a 30-year period of relatively lower inequality between 1950 and 1980.[1][2]

Measured for all households, U.S. income inequality is comparable to other developed countries before taxes and transfers, but is among the highest after taxes and transfers, meaning the U.S. shifts relatively less income from higher income households to lower income households. Measured for working-age households, market income inequality is comparatively high (rather than moderate) and the level of redistribution is moderate (not low). These comparisons indicate Americans shift from reliance on market income to reliance on income transfers later in life, and to a lesser degree, than households in other developed countries do.[2][3]

The U.S. ranks around the 30th percentile in income inequality globally, meaning 70% of countries have a more equal income distribution.[4] U.S. federal tax and transfer policies are progressive and therefore reduce income inequality measured after taxes and transfers.[5] Tax and transfer policies together reduced income inequality slightly more in 2011 than in 1979.[1]

While there is strong evidence that it has increased since the 1970s, there is active debate in the United States regarding the appropriate measurement, causes, effects and solutions to income inequality.[5] The two major political parties have different approaches to the issue, with Democrats historically emphasizing that economic growth should result in shared prosperity (i.e., a pro-labor argument advocating income redistribution), while Republicans tend to downplay the validity or feasibility of positively influencing the issue (i.e., a pro-capital argument against redistribution).[6]

U.S. income inequality has grown significantly since the early 1970s,[8][9][10][11][12][13] after several decades of stability,[14][15][16] and has been the subject of study by many scholars and institutions. The U.S. consistently exhibits higher rates of income inequality than most developed nations due to the nation’s enhanced support of free market capitalism and less progressive spending on social services.[17][18][19][20][21]

The top 1% of income earners received approximately 20% of the pre-tax income in 2013,[7] versus approximately 10% from 1950 to 1980.[2][22][23] The top 1% is not homogeneous, with the very top income households pulling away from others in the top 1%. For example, the top 0.1% of households received approximately 10% of the pre-tax income in 2013, versus approximately 3% to 4% between 1951 and 1981.[7][24] According to IRS data, adjusted gross income (AGI) of approximately $430,000 was required to be in the top 1% in 2013.[25]

Most of the growth in income inequality has been between the middle class and top earners, with the disparity widening the further one goes up in the income distribution.[26] The bottom 50% earned 20% of the nation’s pre-tax income in 1979; this fell steadily to 14% by 2007 and 13% by 2014. The income share of the middle 40%, a proxy for the middle class, fell from 45% in 1979 to 41% in both 2007 and 2014.[27]

To put this change into perspective, if the US had the same income distribution it had in 1979, each family in the bottom 80% of the income distribution would have $11,000 more per year in income on average, or $916 per month.[28] Half of the U.S. population lives in poverty or is low-income, according to U.S. Census data.[29]
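As a quick arithmetic check of the monthly figure (a sketch using only the numbers quoted above):

\[ \$11{,}000 \div 12 \approx \$916.67 \ \text{per month} \]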

The trend of rising income inequality is also apparent after taxes and transfers. A 2011 study by the CBO[30] found that the top earning 1 percent of households increased their income by about 275% after federal taxes and income transfers between 1979 and 2007, compared to a gain of just under 40% for the 60 percent in the middle of America’s income distribution.[30] U.S. federal tax and transfer policies are progressive and therefore substantially reduce income inequality measured after taxes and transfers. They became moderately less progressive between 1979 and 2007[5] but slightly more progressive measured between 1979 and 2011. Income transfers had a greater impact on reducing inequality than taxes from 1979 to 2011.[1]

Americans are not generally aware of the extent of inequality or recent trends.[31] There is a direct relationship between actual income inequality and the public’s views about the need to address the issue in most developed countries, but not in the U.S., where income inequality is worse but the concern is lower.[32] The U.S. was ranked the 6th worst among 173 countries (4th percentile) on income equality measured by the Gini index.[33]
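For reference, the Gini index cited in that ranking is conventionally defined as follows (a standard textbook formulation; the ranking’s exact methodology is not given here):

\[ G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n}\lvert x_i - x_j\rvert}{2n^{2}\bar{x}} \]

where x_1, …, x_n are individual or household incomes and x̄ is their mean; G = 0 corresponds to perfect equality and G = 1 to maximal inequality.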

There is significant and ongoing debate as to the causes, economic effects, and solutions regarding income inequality. While before-tax income inequality is subject to market factors (e.g., globalization, trade policy, labor policy, and international competition), after-tax income inequality can be directly affected by tax and transfer policy. U.S. income inequality is comparable to other developed nations before taxes and transfers, but is among the worst after taxes and transfers.[2][34] Income inequality may contribute to slower economic growth, reduced income mobility, higher levels of household debt, and greater risk of financial crises and deflation.[35][36]

Labor (workers) and capital (owners) have always battled over the share of the economic pie each obtains. The influence of the labor movement has waned in the U.S. since the 1960s along with union participation and more pro-capital laws.[22] The share of total worker compensation has declined from 58% of national income (GDP) in 1970 to nearly 53% in 2013, contributing to income inequality.[37] This has led to concerns that the economy has shifted too far in favor of capital, via a form of corporatism, corpocracy or neoliberalism.[38][39][40][41][42][43][44]

Although some have spoken out in favor of moderate inequality as a form of incentive,[45][46] others have warned against the current high levels of inequality, including Yale Nobel prize for economics winner Robert J. Shiller (who called rising economic inequality “the most important problem that we are facing now today”),[47] former Federal Reserve Board chairman Alan Greenspan (“This is not the type of thing which a democratic society, a capitalist democratic society, can really accept without addressing”),[48] and President Barack Obama (who referred to the widening income gap as the “defining challenge of our time”).[49]

The level of concentration of income in the United States has fluctuated throughout its history. Going back to the early 20th century, when income statistics started to become available, there has been a “great economic arc” from high inequality “to relative equality and back again,” in the words of Nobel laureate economist Paul Krugman.[50] In 1915, an era in which the Rockefellers and Carnegies dominated American industry, the richest 1% of Americans earned roughly 18% of all income. By 2007, the top 1 percent accounted for 24% of all income.[51] In between, their share fell below 10% for three decades.

The first era of inequality lasted roughly from the post-Civil War era (“the Gilded Age”) to sometime around 1937. But from about 1937 to 1947, a period that has been dubbed the “Great Compression”,[52] income inequality in the United States fell dramatically. Highly progressive New Deal taxation, the strengthening of unions, and regulation of the National War Labor Board during World War II raised the income of the poor and working class and lowered that of top earners.[53] This “middle class society” of relatively low inequality remained fairly steady for about three decades ending in the early 1970s,[14][52][54] the product of relatively high wages for the US working class and political support for income-leveling government policies.

Wages remained relatively high because of lack of foreign competition for American manufacturing, and strong trade unions. By 1947 more than a third of non-farm workers were union members,[55] and unions both raised average wages for their membership, and indirectly, and to a lesser extent, raised wages for workers in similar occupations not represented by unions.[56] Scholars believe political support for equalizing government policies was provided by high voter turnout from union voting drives, the support of the otherwise conservative South for the New Deal, and prestige that the massive mobilization and victory of World War II had given the government.[57]

The return to high inequality, or to what Krugman and journalist Timothy Noah have referred to as the “Great Divergence”,[51] began in the 1970s. Studies have found income grew more unequal almost continuously except during the economic recessions in 1990–91, 2001 (Dot-com bubble), and the 2007 sub-prime bust.[58][59]

The Great Divergence differs in some ways from the pre-Depression era inequality. Before 1937, a larger share of top earners’ income came from capital (interest, dividends, income from rent, capital gains). After 1970, the income of high-income taxpayers has come predominantly from labor: employment compensation.[60]

Until 2011, the Great Divergence had not been a major political issue in America, but stagnation of middle-class income was. In 2009, the Obama administration convened the White House Middle Class Working Families Task Force to focus on economic issues specifically affecting middle-income Americans. In 2011, the Occupy movement drew considerable attention to income inequality in the country.

CBO reported that for the 1979–2007 period, after-tax income of households in the top 1 percent of earners grew by 275%, compared to 65% for the next 19%, just under 40% for the next 60%, and 18% for the bottom fifth of households. “As a result of that uneven income growth,” the report noted, “the share of total after-tax income received by the 1 percent of the population in households with the highest income more than doubled between 1979 and 2007, whereas the share received by low- and middle-income households declined…. The share of income received by the top 1 percent grew from about 8% in 1979 to over 17% in 2007. The share received by the other 19 percent of households in the highest income quintile (one fifth of the population as divided by income) was fairly flat over the same period, edging up from 35% to 36%.”[5][61]

According to the CBO,[62] the major reason for the observed rise in unequal distribution of after-tax income was an increase in market income, that is, household income before taxes and transfers. Market income for a household is a combination of labor income (such as cash wages, employer-paid benefits, and employer-paid payroll taxes), business income (such as income from businesses and farms operated solely by their owners), capital gains (profits realized from the sale of assets and stock options), capital income (such as interest from deposits, dividends, and rental income), and other income. Of these, capital gains accounted for 80% of the increase in market income for households in the top 20% in the 2000–2007 period. Even over the 1991–2000 period, according to the CBO, capital gains accounted for 45% of the market income for the top 20% of households.

In a July 2015 op-ed article, Martin Feldstein, Professor of Economics at Harvard University, stated that the CBO found that from 1980 to 2010 real median household income rose by 15%. However, when the definition of income was expanded to include benefits and to subtract taxes, the CBO found that the median household’s real income rose by 45%. Adjusting for household size, the gain increased to 53%.[63]

Just as higher-income groups are more likely to enjoy financial gains when economic times are good, they are also likely to suffer more significant income losses during economic downturns and recessions, compared to lower income groups. Higher-income groups tend to derive relatively more of their income from more volatile sources related to capital income (business income, capital gains, and dividends), as opposed to labor income (wages and salaries). For example, in 2011 the top 1% of income earners derived 37% of their income from labor income, versus 62% for the middle quintile. On the other hand, the top 1% derived 58% of their income from capital as opposed to 4% for the middle quintile. Government transfers represented only 1% of the income of the top 1% but 25% for the middle quintile; the dollar amounts of these transfers tend to rise in recessions.[1]

This effect occurred during the Great Recession of 2007–2009, when total income going to the bottom 99 percent of Americans declined by 11.6%, but fell by 36.3% for the top 1%. Declines were especially steep for capital gains, which fell by 75% in real (inflation-adjusted) terms between 2007 and 2009. Other sources of capital income also fell: interest income by 40% and dividend income by 33%. Wages, the largest source of income, fell by a more modest 6%.

The share of pre-tax income received by the top 1% fell from 18.7% in 2007 to 16.0% in 2008 and 13.4% in 2009, while the bottom four quintiles all had their share of pre-tax income increase from 2007 to 2009.[64][65] The share of after-tax income received by the top 1% income group fell from 16.7% in 2007 to 11.5% in 2009.[1]

The distribution of household incomes has become more unequal during the post-2008 economic recovery as the effects of the recession reversed.[66][67][68] CBO reported in November 2014 that the share of pre-tax income received by the top 1% had risen from 13.3% in 2009 to 14.6% in 2011.[1] During 2012 alone, incomes of the wealthiest 1 percent rose nearly 20%, whereas the income of the remaining 99 percent rose 1% in comparison.[22]

Larry Summers: “If the United States had the same income distribution it had in 1979, the bottom 80 percent of the population would have $1 trillion, or $11,000 per family, more. The top 1 percent would have $1 trillion, or $750,000, less.”[69]

According to an article in The New Yorker, by 2012, the share of pre-tax income received by the top 1% had returned to its pre-crisis peak, at around 23% of the pre-tax income.[2] This is based on widely cited data from economist Emmanuel Saez, which uses “market income” and relies primarily on IRS data.[67] The CBO uses both IRS data and Census data in its computations and reports a lower pre-tax figure for the top 1%.[1] The two series were approximately 5 percentage points apart in 2011 (Saez at about 19.7% versus CBO at 14.6%), which would imply a CBO figure of about 18% in 2012 if that relationship holds, a significant increase versus the 14.6% CBO reported for 2011. The share of after-tax income received by the top 1% rose from 11.5% in 2009 to 12.6% in 2011.[1]
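The implied 2012 figure follows from simple subtraction (a sketch, assuming the roughly constant gap between the two series that this paragraph describes):

\[ 19.7\% - 14.6\% \approx 5\ \text{points}; \qquad 23\% - 5\ \text{points} \approx 18\% \]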

Inflation-adjusted pre-tax income for the bottom 90% of American families fell between 2010 and 2013, with the middle income groups dropping the most, about 6% for the 40th-60th percentiles and 7% for the 20th-40th percentiles. Incomes in the top decile rose 2%.[34]

The top 1% captured 91% of the real income growth per family during the 2009–2012 recovery period, with their pre-tax incomes growing 34.7% adjusted for inflation while the pre-tax incomes of the bottom 99% grew 0.8%. Measured over 2009–2015, the top 1% captured 52% of the total real income growth per family, indicating the recovery was becoming less “lopsided” in favor of higher income families. By 2015, the top 10% (top decile) had a 50.5% share of the pre-tax income, close to its highest all-time level.[70]

Tax increases on higher income earners were implemented in 2013 due to the Affordable Care Act and American Taxpayer Relief Act of 2012. CBO estimated that “average federal tax rates under 2013 law would be higher relative to tax rates in 2011 across the income spectrum. The estimated rates under 2013 law would still be well below the average rates from 1979 through 2011 for the bottom four income quintiles, slightly below the average rate over that period for households in the 81st through 99th percentiles, and well above the average rate over that period for households in the top 1 percent of the income distribution.”[1] In 2016, the economists Peter H. Lindert and Jeffrey G. Williamson contended that inequality is the highest it has been since the nation’s founding.[71] French economist Thomas Piketty attributed the victory of Donald Trump in the 2016 presidential election, which he characterizes as an “electoral upset,” to “the explosion in economic and geographic inequality in the United States over several decades and the inability of successive governments to deal with this.”[72]

U.S. income inequality is comparable to other developed countries measured before taxes and transfers, but is among the worst after taxes and transfers.[2]

According to the CBO and others, “the precise reasons for the rapid growth in income at the top are not well understood”,[60][75] but “in all likelihood,” an “interaction of multiple factors” was involved.[76] Researchers have offered several potential rationales, some of which conflict and some of which overlap.[60][77][78]

Paul Krugman put several of these factors into context in January 2015: “Competition from emerging-economy exports has surely been a factor depressing wages in wealthier nations, although probably not the dominant force. More important, soaring incomes at the top were achieved, in large part, by squeezing those below: by cutting wages, slashing benefits, crushing unions, and diverting a rising share of national resources to financial wheeling and dealing… Perhaps more important still, the wealthy exert a vastly disproportionate effect on policy. And elite priorities (obsessive concern with budget deficits, with the supposed need to slash social programs) have done a lot to deepen [wage stagnation and income inequality].”[92]

There is an ongoing debate as to the economic effects of income inequality. For example, Alan B. Krueger, President Obama’s Chairman of the Council of Economic Advisers, summarized in a 2012 speech the conclusions of several research studies on what happens, in general, as income inequality worsens.

Among economists and related experts, many believe that America’s growing income inequality is “deeply worrying”,[48] unjust,[84] a danger to democracy/social stability,[96][97][98] or a sign of national decline.[99] Yale professor Robert Shiller, who was among three Americans who won the Nobel prize for economics in 2013, said after receiving the award, “The most important problem that we are facing now today, I think, is rising inequality in the United States and elsewhere in the world.”[100] Economist Thomas Piketty, who has spent nearly 20 years studying inequality primarily in the US, warns that “The egalitarian pioneer ideal has faded into oblivion, and the New World may be on the verge of becoming the Old Europe of the twenty-first century’s globalized economy.”[101]

On the other side of the issue are those who have claimed that the increase is not significant,[102] that it doesn’t matter[98] because America’s economic growth and/or equality of opportunity are what’s important,[103] that it is a global phenomenon which would be foolish to try to change through US domestic policy,[104] that it “has many economic benefits and is the result of … a well-functioning economy”,[105][106] that it has become or may become an excuse for “class-warfare rhetoric”,[102] and that it may lead to policies that “reduce the well-being of wealthier individuals”.[105][107]

Economist Alan B. Krueger wrote in 2012: “The rise in inequality in the United States over the last three decades has reached the point that inequality in incomes is causing an unhealthy division in opportunities, and is a threat to our economic growth. Restoring a greater degree of fairness to the U.S. job market would be good for businesses, good for the economy, and good for the country.” Krueger wrote that the significant shift in the share of income accruing to the top 1% over the 1979 to 2007 period represented nearly $1.1 trillion in annual income. Since the wealthy tend to save nearly 50% of their marginal income while the remainder of the population saves roughly 10%, other things equal this would reduce annual consumption (the largest component of GDP) by as much as 5%. Krueger wrote that borrowing likely helped many households make up for this shift, which became more difficult in the wake of the 2007–2009 recession.[95]
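Krueger’s consumption estimate can be reconstructed roughly as follows (a sketch: the savings rates are from the text, while aggregate U.S. consumption of roughly $10 trillion around 2007 is an assumption for illustration):

\[ \$1.1\ \text{trillion} \times (0.50 - 0.10) = \$0.44\ \text{trillion}; \qquad \frac{\$0.44\ \text{trillion}}{\$10\ \text{trillion}} \approx 4.4\% \]

which is in line with the “as much as 5%” reduction in annual consumption cited above.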

Inequality in land and income ownership is negatively correlated with subsequent economic growth. A strong demand for redistribution will occur in societies where a large section of the population does not have access to the productive resources of the economy. Rational voters must internalize such issues.[108] High unemployment rates have a significant negative effect when interacting with increases in inequality. Increasing inequality harms growth in countries with high levels of urbanization. High and persistent unemployment also has a negative effect on subsequent long-run economic growth. Unemployment may seriously harm growth because it is a waste of resources, because it generates redistributive pressures and distortions, because it depreciates existing human capital and deters its accumulation, because it drives people to poverty, because it results in liquidity constraints that limit labor mobility, and because it erodes individual self-esteem and promotes social dislocation, unrest and conflict. Policies to control unemployment and reduce its inequality-associated effects can strengthen long-run growth.[109]

Concern extends even to supporters (or former supporters) of laissez-faire economics and to private sector financiers. Former Federal Reserve Board chairman Alan Greenspan has stated, in reference to growing inequality: “This is not the type of thing which a democratic society, a capitalist democratic society, can really accept without addressing.”[48] Some economists (David Moss, Paul Krugman, Raghuram Rajan) believe the “Great Divergence” may be connected to the financial crisis of 2008.[105][110] Money manager William H. Gross, former managing director of PIMCO, criticized the shift in distribution of income from labor to capital that underlies some of the growth in inequality as unsustainable, saying:

Even conservatives must acknowledge that return on capital investment, and the liquid stocks and bonds that mimic it, are ultimately dependent on returns to labor in the form of jobs and real wage gains. If Main Street is unemployed and undercompensated, capital can only travel so far down Prosperity Road.

He concluded: “Investors/policymakers of the world, wake up: you’re killing the proletariat goose that lays your golden eggs.”[111][112]

Among the economists and reports finding that inequality harms economic growth are a December 2013 Associated Press survey of three dozen economists,[114] a 2014 report by Standard and Poor’s,[115] and economists Gar Alperovitz, Robert Reich, Joseph Stiglitz, and Branko Milanovic.

A December 2013 Associated Press survey of three dozen economists found that the majority believe that widening income disparity is harming the US economy. They argue that wealthy Americans are receiving higher pay, but they spend less per dollar earned than middle class consumers, the majority of the population, whose incomes have largely stagnated.[114]

A 2014 report by Standard and Poor’s concluded that diverging income inequality has slowed the economic recovery and could contribute to boom-and-bust cycles in the future as more and more Americans take on debt in order to consume. Higher levels of income inequality increase political pressures, discouraging trade, investment, hiring, and social mobility according to the report.[115]

Economists Gar Alperovitz and Robert Reich argue that too much concentration of wealth leaves the rest of the economy without sufficient purchasing power to function effectively.[116][117]

Joseph Stiglitz argues that concentration of wealth and income leads the politically powerful economic elite to seek to protect themselves from redistributive policies by weakening the state, and this leads to less public investment by the state (roads, technology, education, etc.) that is essential for economic growth.[118][119]

According to economist Branko Milanovic, while traditionally economists thought inequality was good for growth, “The view that income inequality harms growth or that improved equality can help sustain growth has become more widely held in recent years. The main reason for this shift is the increasing importance of human capital in development. When physical capital mattered most, savings and investments were key. Then it was important to have a large contingent of rich people who could save a greater proportion of their income than the poor and invest it in physical capital. But now that human capital is scarcer than machines, widespread education has become the secret to growth.” He continued that “Broadly accessible education” is both difficult to achieve when income distribution is uneven and tends to reduce “income gaps between skilled and unskilled labor.”[120]

Robert Gordon wrote that such issues as “rising inequality; factor price equalization stemming from the interplay between globalization and the Internet; the twin educational problems of cost inflation in higher education and poor secondary student performance; the consequences of environmental regulations and taxes…” make economic growth harder to achieve than in the past.[121]

In response to the Occupy movement, Richard A. Epstein defended inequality in a free market society, maintaining that “taxing the top one percent even more means less wealth and fewer jobs for the rest of us.” According to Epstein, “the inequalities in wealth … pay for themselves by the vast increases in wealth”, while “forced transfers of wealth through taxation … will destroy the pools of wealth that are needed to generate new ventures.”[122] Some researchers have found a connection between lowering high marginal tax rates on high income earners (a common measure to fight inequality) and higher rates of employment growth.[123][124] Government free-market strategy also plays a role: there has been a failure in the US political system to counterbalance the rise in unequal distribution of income among citizens.[125]

Economic sociologist Lane Kenworthy has found no correlation between levels of inequality and economic growth among developed countries, among states of the US, or in the US over the years from 1947 to 2005.[126] Jared Bernstein found a nuanced relation he summed up as follows: “In sum, I’d consider the question of the extent to which higher inequality lowers growth to be an open one, worthy of much deeper research.”[127] Tim Worstall commented that capitalism would not seem to contribute to an inherited-wealth stagnation and consolidation, but instead appears to promote the opposite: a vigorous, ongoing turnover and creation of new wealth.[128][129]

Income inequality was cited as one of the causes of the Great Depression by Supreme Court Justice Louis D. Brandeis in 1933. In his dissent in the Louis K. Liggett Co. v. Lee (288 U.S. 517) case, he wrote: “Other writers have shown that, coincident with the growth of these giant corporations, there has occurred a marked concentration of individual wealth; and that the resulting disparity in incomes is a major cause of the existing depression.”[130]

Central banking economist Raghuram Rajan argues that “systematic economic inequalities, within the United States and around the world, have created deep financial ‘fault lines’ that have made [financial] crises more likely to happen than in the past”, the financial crisis of 2007–08 being the most recent example.[131] To compensate for stagnating and declining purchasing power, political pressure has developed to extend easier credit to lower and middle income earners, particularly to buy homes, and easier credit in general to keep unemployment rates low. This has given the American economy a tendency to go “from bubble to bubble,” fueled by unsustainable monetary stimulation.[132]

Greater income inequality can lead to monopolization of the labor force, resulting in fewer employers requiring fewer workers.[133][134] Remaining employers can consolidate and take advantage of the relative lack of competition, leading to less consumer choice, market abuses, and relatively higher prices.[109][134]

Income inequality lowers aggregate demand, leading to increasingly large segments of formerly middle class consumers unable to afford as many luxury and essential goods and services.[133] This pushes production and overall employment down.[109]

Deep debt may lead to bankruptcy, and researchers Elizabeth Warren and Amelia Warren Tyagi found a fivefold increase in the number of families filing for bankruptcy between 1980 and 2005.[135] The bankruptcies came not from increased spending “on luxuries”, but from “increased spending on housing, largely driven by competition to get into good school districts.” Intensifying inequality may mean a dwindling number of ever more expensive school districts that compel middle-class or would-be middle-class families to “buy houses they can’t really afford, taking on more mortgage debt than they can safely handle”.[136]

The ability to move from one income group into another (income mobility) is a means of measuring economic opportunity. A higher probability of upward income mobility theoretically would help mitigate higher income inequality, as each generation has a better chance of achieving higher income groups. Conservatives and libertarians such as economist Thomas Sowell and Congressman Paul Ryan (R., Wisc.)[137] argue that more important than the level of equality of results is America’s equality of opportunity, especially relative to other developed countries such as western Europe.

Nonetheless, results from various studies indicate that endogenous regulations and other institutional rules have distinct effects on income inequality. One study examined the effects of institutional change on age-based labor market inequalities in Europe, focusing on wage-setting institutions and unequal income distribution among the adult male population. It found evidence that unemployment protection and temporary work regulation affect the dynamics of age-based inequality, with the strength of unions shaping employment effects across individuals. Even though the European Union operates in a favorable economic context, with prospects of growth and development, it is also very fragile.[138]

However, several studies have indicated that higher income inequality corresponds with lower income mobility. In other words, income brackets tend to be increasingly “sticky” as income inequality increases. This is described by a concept called the Great Gatsby curve.[95][139] In the words of journalist Timothy Noah, “you can’t really experience ever-growing income inequality without experiencing a decline in Horatio Alger-style upward mobility because (to use a frequently-employed metaphor) it’s harder to climb a ladder when the rungs are farther apart.”[48]

The centrist Brookings Institution said in March 2013 that income inequality was increasing and becoming permanent, sharply reducing social mobility in the US.[140] A 2007 study by Kopczuk, Saez and Song found the top income population in the United States “very stable” and that income mobility had “not mitigated the dramatic increase in annual earnings concentration since the 1970s.”[139]

Economist Paul Krugman attacks conservatives for resorting to an “extraordinary series of attempts at statistical distortion”. He argues that while in any given year some of the people with low incomes will be “workers on temporary layoff, small businessmen taking writeoffs, farmers hit by bad weather”, the rise in their income in succeeding years is not the same “mobility” as poor people rising to the middle class or middle income rising to wealth. It is the mobility of “the guy who works in the college bookstore and has a real job by his early thirties.”

Studies by the Urban Institute and the US Treasury have both found that about half of the families who start in either the top or the bottom quintile of the income distribution are still there after a decade, and that only 3 to 6% rise from bottom to top or fall from top to bottom.[141]

On the issue of whether most Americans do not stay put in any one income bracket, Krugman quotes from the 2011 CBO distribution-of-income study:

Household income measured over a multi-year period is more equally distributed than income measured over one year, although only modestly so. Given the fairly substantial movement of households across income groups over time, it might seem that income measured over a number of years should be significantly more equally distributed than income measured over one year. However, much of the movement of households involves changes in income that are large enough to push households into different income groups but not large enough to greatly affect the overall distribution of income. Multi-year income measures also show the same pattern of increasing inequality over time as is observed in annual measures.[30]

In other words, “many people who have incomes greater than $1 million one year fall out of the category the next year but that’s typically because their income fell from, say, $1.05 million to $0.95 million, not because they went back to being middle class.”[30][142]

Several studies have found that the ability of children from poor or middle-class families to rise to upper income, known as “upward relative intergenerational mobility”, is lower in the US than in other developed countries,[143] and at least two economists have found lower mobility linked to income inequality.[48][144]

In their Great Gatsby curve,[144] White House Council of Economic Advisers Chairman Alan B. Krueger and labor economist Miles Corak show a negative correlation between inequality and social mobility. The curve plotted “intergenerational income elasticity”, i.e. the likelihood that someone will inherit their parents’ relative position of income level, against inequality for a number of countries.[48][145]
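The intergenerational income elasticity on the curve’s vertical axis is conventionally estimated as the slope β in a log-log regression of child income on parent income (a standard definition; individual studies differ in estimation details):

\[ \log y_i^{\text{child}} = \alpha + \beta \log y_i^{\text{parent}} + \varepsilon_i \]

A higher β means a child’s income depends more strongly on the parents’ position; Krueger and Corak’s plot shows β rising with a country’s inequality.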

Aside from the proverbial distant rungs, the connection between income inequality and low mobility can be explained by un-affluent children’s lack of access to better (more expensive) schools and to the preparation crucial to finding high-paying jobs, and by a lack of health care that may lead to obesity and diabetes and limit education and employment.[143]

Krueger estimates that “the persistence in the advantages and disadvantages of income passed from parents to the children” will “rise by about a quarter for the next generation as a result of the rise in inequality that the U.S. has seen in the last 25 years.”[48]

Greater income inequality can increase the poverty rate, as more income shifts away from lower income brackets to upper income brackets. Jared Bernstein wrote: “If less of the economy’s market-generated growth, i.e., before taxes and transfers kick in, ends up in the lower reaches of the income scale, either there will be more poverty for any given level of GDP growth, or there will have to be a lot more transfers to offset inequality’s poverty-inducing impact.” The Economic Policy Institute estimated that greater income inequality would have added 5.5% to the poverty rate between 1979 and 2007, other factors equal. Income inequality was the largest driver of the change in the poverty rate, with economic growth, family structure, education and race the other important factors.[146][147] An estimated 16% of Americans lived in poverty in 2012, versus 26% in 1967.[148]

A rise in income disparities weakens skills development among people with poor educational backgrounds, in terms of both the quantity and quality of education attained; those with a low level of expertise may come to consider themselves unworthy of any high position and pay.[149]

Lisa Shalett, chief investment officer at Merrill Lynch Wealth Management, noted that “for the last two decades and especially in the current period, … productivity soared … [but] U.S. real average hourly earnings are essentially flat to down, with today’s inflation-adjusted wage equating to about the same level as that attained by workers in 1970. … So where have the benefits of the technology-driven productivity cycle gone? Almost exclusively to corporations and their very top executives.”[150] Beyond the technological side, the effect is compounded by perceived unfairness and people’s reduced trust in the state. A study by Kristal and Cohen showed that rising wage inequality has brought about an unhealthy competition between institutions and technology. Technological change, with computerization of the workplace, appears to give the upper hand to high-skilled workers and is a primary cause of inequality in America: the qualified are considered to be in a better position than those doing manual work, leading to replacement and an unequal distribution of resources.[151]

Economist Timothy Smeeding summed up the current trend:[152]

Americans have the highest income inequality in the rich world and over the past 20–30 years Americans have also experienced the greatest increase in income inequality among rich nations. The more detailed the data we can use to observe this change, the more skewed the change appears to be … the majority of large gains are indeed at the top of the distribution.

According to Janet L. Yellen, chair of the Federal Reserve,

…from 1973 to 2005, real hourly wages of those in the 90th percentile, where most people have college or advanced degrees, rose by 30% or more… among this top 10 percent, the growth was heavily concentrated at the very tip of the top, that is, the top 1 percent. This includes the people who earn the very highest salaries in the U.S. economy, like sports and entertainment stars, investment bankers and venture capitalists, corporate attorneys, and CEOs. In contrast, at the 50th percentile and below, where many people have at most a high school diploma, real wages rose by only 5 to 10%.[77]

Economists Jared Bernstein and Paul Krugman have attacked the concentration of income as variously “unsustainable”[97] and “incompatible”[98] with real democracy. American political scientists Jacob S. Hacker and Paul Pierson quote a warning by Greco-Roman historian Plutarch: “An imbalance between rich and poor is the oldest and most fatal ailment of all republics.”[96] Some academic researchers have written that the US political system risks drifting towards a form of oligarchy, through the influence of corporations, the wealthy, and other special interest groups.[153][154]

Rising income inequality has been linked to the political polarization in Washington DC.[155] According to a 2013 study published in the Political Research Quarterly, elected officials tend to be more responsive to the upper income bracket and ignore lower income groups.[156]

Paul Krugman wrote in November 2014 that: “The basic story of political polarization over the past few decades is that, as a wealthy minority has pulled away economically from the rest of the country, it has pulled one major party along with it… Any policy that benefits lower- and middle-income Americans at the expense of the elite, like health reform, which guarantees insurance to all and pays for that guarantee in part with taxes on higher incomes, will face bitter Republican opposition.” He used environmental protection as another example, which was not a partisan issue in the 1990s but has since become one.[157]

As income inequality has increased, the degree of House of Representatives polarization, measured by voting record, has also increased. Voting is mostly by the rich and for the rich, making it hard to achieve more equal income and resource distribution for the average population (Bonica et al., 2013). Few people turn to government insurance even as wealth and real income rise, and the rich have exerted increased influence on the regulatory, legislative and electoral processes within the country.[158] Professors McCarty, Poole and Rosenthal wrote in 2007 that polarization and income inequality fell in tandem from 1913 to 1957 and rose together dramatically from 1977 on. They show that Republicans have moved politically to the right, away from redistributive policies that would reduce income inequality. Polarization thus creates a feedback loop, worsening inequality.[159]

Several economists and political scientists have argued that economic inequality translates into political inequality, particularly in situations where politicians have financial incentives to respond to special interest groups and lobbyists. Researchers such as Larry Bartels of Vanderbilt University have shown that politicians are significantly more responsive to the political opinions of the wealthy, even when controlling for a range of variables including educational attainment and political knowledge.[161][162]

Historically, discussions of income inequality and capital vs. labor debates have sometimes included the language of class warfare, from President Theodore Roosevelt (referring to the leaders of big corporations as “malefactors of great wealth”), to President Franklin Roosevelt (“economic royalists…are unanimous in their hate for me–and I welcome their hatred”), to the more recent “1% versus the 99%” issue and the question of which political party better represents the interests of the middle class.[163]

Investor Warren Buffett said in 2006 that: “There’s class warfare, all right, but it’s my class, the rich class, that’s making war, and we’re winning.” He advocated much higher taxes on the wealthiest Americans, who pay lower effective tax rates than many middle-class persons.[164]

Two journalists concerned about social separation in the US are Robert Frank and George Packer. Frank notes that “Today’s rich had formed their own virtual country … [T]hey had built a self-contained world unto themselves, complete with their own health-care system (concierge doctors), travel network (Net jets, destination clubs), separate economy… The rich weren’t just getting richer; they were becoming financial foreigners, creating their own country within a country, their own society within a society, and their economy within an economy.”[165]

George Packer wrote that “Inequality hardens society into a class system … Inequality divides us from one another in schools, in neighborhoods, at work, on airplanes, in hospitals, in what we eat, in the condition of our bodies, in what we think, in our children’s futures, in how we die. Inequality makes it harder to imagine the lives of others.”[99]

These class divisions can affect politics in certain ways. There has been an increased influence by the rich on the regulatory, legislative and electoral processes within the country, which has led to improved employment standards for bureaucrats and politicians. The rich exert greater influence through lobbying and contributions, which give them the opportunity to amass wealth for themselves.[166]

Loss of income by the middle class relative to the top-earning 1% and 0.1% is both a cause and effect of political change, according to journalist Hedrick Smith. In the decade starting around 2000, business groups employed 30 times as many Washington lobbyists as trade unions and 16 times as many lobbyists as labor, consumer, and public interest lobbyists combined.[167]

From 1998 through 2010 business interests and trade groups spent $28.6 billion on lobbying compared with $492 million for labor, nearly a 60-to-1 business advantage.[168]

The result, according to Smith, is a political landscape dominated in the 1990s and 2000s by business groups, specifically “political insiders”: former members of Congress and government officials with an inside track working for “Wall Street banks; the oil, defense, and pharmaceutical industries; and business trade associations.” In the decade or so prior to the Great Divergence, middle-class-dominated reformist grassroots efforts such as the civil rights movement, the environmental movement, the consumer movement, and the labor movement had considerable political impact.[167]

“We haven’t achieved the minimalist state that libertarians advocate. What we’ve achieved is a state too constrained to provide the public goods (investments in infrastructure, technology, and education) that would make for a vibrant economy and too weak to engage in the redistribution that is needed to create a fair society. But we have a state that is still large enough and distorted enough that it can provide a bounty of gifts to the wealthy.”

Economist Joseph Stiglitz argues that hyper-inequality may explain political questions such as why America’s infrastructure (and other public investments) is deteriorating,[170] or the country’s recent relative lack of reluctance to engage in military conflicts such as the 2003 invasion of Iraq. Top-earning families, wealthy enough to buy their own education, medical care, personal security, and parks, have little interest in helping pay for such things for the rest of society, and have the political influence to make sure they don’t have to. So too, the lack of personal or family sacrifice involved for top earners in their country’s military interventions (their children being few and far between in the relatively low-paying all-volunteer military) may mean more willingness by the influential wealthy to see their government wage war.[171]

Economist Branko Milanovic argued that globalization and the related competition with cheaper labor from Asia and immigrants have caused U.S. middle-class wages to stagnate, fueling the rise of populist political candidates such as Donald Trump.[172]

The relatively high rates of health and social problems (obesity, mental illness, homicides, teenage births, incarceration, child conflict, drug use) and lower rates of social goods (life expectancy, educational performance, trust among strangers, women’s status, social mobility, even numbers of patents issued per capita) in the US compared to other developed countries may be related to its high income inequality. Using statistics from 23 developed countries and the 50 states of the US, British researchers Richard G. Wilkinson and Kate Pickett have found such a correlation, which remains after accounting for ethnicity,[173] national culture,[174] and occupational classes or education levels.[175] Their findings, based on UN Human Development Reports and other sources, place the United States at the top of the list with regard to inequality and various social and health problems among developed countries.[176] The authors argue inequality creates psychosocial stress and status anxiety that lead to social ills.[177] A 2009 study conducted by researchers at Harvard University and published in the British Medical Journal attributed one in three deaths in the United States to high levels of inequality.[178] According to The Earth Institute, life satisfaction in the US has been declining over the last several decades, which has been attributed to soaring inequality, lack of social trust and loss of faith in government.[179]

A 2015 study by Princeton University researchers Angus Deaton and Anne Case claimed that income inequality could be a driving factor in a marked increase in deaths among white males between the ages of 45 and 54 in the period 1999 to 2013.[180][181]

Paul Krugman argues that the much lamented long-term funding problems of Social Security and Medicare can be blamed in part on the growth in inequality, as well as on the usual culprits like longer life expectancies. The traditional source of funding for these social welfare programs, payroll taxes, is inadequate because it does not capture income from capital, or income above the payroll tax cap, both of which make up a larger and larger share of national income as inequality increases.[182]

Upward redistribution of income is responsible for about 43% of the projected Social Security shortfall over the next 75 years.[183]

Disagreeing with this focus on the top-earning 1%, and urging attention to the economic and social pathologies of lower-income/lower-education Americans, is conservative[184] journalist David Brooks. Whereas in the 1970s high school and college graduates had “very similar family structures”, today high school grads are much less likely to get married and be active in their communities, and much more likely to smoke, be obese, get divorced, or have “a child out of wedlock.”[185]

The zooming wealth of the top one percent is a problem, but it’s not nearly as big a problem as the tens of millions of Americans who have dropped out of high school or college. It’s not nearly as big a problem as the 40 percent of children who are born out of wedlock. It’s not nearly as big a problem as the nation’s stagnant human capital, its stagnant social mobility and the disorganized social fabric for the bottom 50 percent.[185][186]

Contradicting most of these arguments, classical liberals such as Friedrich Hayek have maintained that because individuals are diverse and different, state intervention to redistribute income is inevitably arbitrary and incompatible with the concept of general rules of law, and that “what is called ‘social’ or ‘distributive’ justice is indeed meaningless within a spontaneous order”. Those who would use the state to redistribute “take freedom for granted and ignore the preconditions necessary for its survival.”[187][188]

The growth of inequality provoked a political protest movement, the Occupy movement, which started on Wall Street and spread to 600 communities across the United States in 2011. Its main political slogan, “We are the 99%”, references its dissatisfaction with the concentration of income in the top 1%.

More:

Income inequality in the United States – Wikipedia

Posted in Basic Income Guarantee | Comments Off on Income inequality in the United States – Wikipedia

Turks and Caicos Islands – Wikipedia

Posted: January 6, 2017 at 11:07 pm

The Turks and Caicos Islands, or TCI for short, are a British Overseas Territory consisting of the larger Caicos Islands and smaller Turks Islands, two groups of tropical islands in the Lucayan Archipelago of the Atlantic Ocean and northern West Indies.

They are known primarily for tourism and as an offshore financial centre. The resident population was 31,458 as of the 2012 census,[2] of whom 23,769 live on Providenciales in the Caicos Islands.

The Turks and Caicos Islands lie southeast of Mayaguana in the Bahamas island chain and north of the island of Hispaniola and the other Antilles archipelago islands. Cockburn Town, the capital since 1766, is situated on Grand Turk Island, about 1,042 kilometres (647 mi) east-southeast of Miami, United States. The islands have a total land area of 430 square kilometres (170 sq mi).[b]

The first recorded European sighting of the islands now known as the Turks and Caicos occurred in 1512.[7] In the subsequent centuries, the islands were claimed by several European powers with the British Empire eventually gaining control. For many years the islands were governed indirectly through Bermuda, the Bahamas, and Jamaica. When the Bahamas gained independence in 1973, the islands received their own governor, and have remained a separate autonomous British Overseas Territory since. In August 2009, the United Kingdom suspended the Turks and Caicos Islands’ self-government after allegations of ministerial corruption.[8] Home rule was restored in the islands after the November 2012 elections.

The Turks and Caicos Islands are named after the Turk’s cap cactus (Melocactus intortus), and the Lucayan term caya hico, meaning ‘string of islands’.[9][10][11]

The first inhabitants of the islands were Arawakan-speaking Taíno people, who crossed over from Hispaniola sometime from AD 500 to 800. Together with Taíno who migrated from Cuba to the southern Bahamas around the same time, these people developed as the Lucayan. Around 1200, the Turks and Caicos Islands were resettled by Classical Taínos from Hispaniola.

Soon after the Spanish arrived in the islands in 1512,[7] they began capturing the Taíno of the Turks and Caicos Islands and the Lucayan as slaves (technically, as workers in the encomienda system)[12] to replace the largely depleted native population of Hispaniola. The southern Bahama Islands and the Turks and Caicos Islands were completely depopulated by about 1513, and remained so until the 17th century.[13][14][15][16][17]

The first European documented to sight the islands was Spanish conquistador Juan Ponce de León, who did so in 1512.[7] During the 16th, 17th, and 18th centuries, the islands passed from Spanish, to French, to British control, but none of the three powers ever established any settlements.

Bermudian salt collectors settled the Turks Islands around 1680. For several decades around the turn of the 18th century, the islands became popular pirate hideouts. From 1765 to 1783, the islands were under French occupation, and again after the French captured the archipelago in 1783.

After the American War of Independence (1775–1783), many Loyalists fled to British Caribbean colonies; in 1783, they were the first settlers on the Caicos Islands. They developed cotton as an important cash crop, but it was superseded by the development of the salt industry.

In 1799, both the Turks and the Caicos island groups were annexed by Britain as part of the Bahamas.[citation needed] The processing of sea salt was developed as a highly important export product from the West Indies, with the labour done by African slaves. Salt continued to be a major export product into the nineteenth century.

In 1807, Britain prohibited the slave trade and, in 1833, abolished slavery in its colonies. British ships sometimes intercepted slave traders in the Caribbean, and some ships were wrecked off the coast of these islands. In 1837, the Esperanza, a Portuguese slaver, was wrecked off East Caicos, one of the larger islands. While the crew and 220 captive Africans survived the shipwreck, 18 Africans died before the survivors were taken to Nassau. Africans from this ship may have been among the 189 liberated Africans whom the British colonists settled in the Turks and Caicos from 1833 to 1840.[18]

In 1841, the Trouvadore, an illegal Spanish slave ship, was wrecked off the coast of East Caicos. All the 20-man crew and 192 captive Africans survived the sinking. Officials freed the Africans and arranged for 168 persons to be apprenticed to island proprietors on Grand Turk Island for one year. They increased the small population of the colony by seven percent.[18] Numerous descendants have come from those free Africans. The remaining 24 were resettled in Nassau. The Spanish crew were also taken there, to be turned over to the custody of the Cuban consul and taken to Cuba for prosecution.[19] An 1878 letter documents the “Trouvadore Africans” and their descendants as constituting an essential part of the “labouring population” on the islands.[18]

In 2004, marine archaeologists affiliated with the Turks and Caicos National Museum discovered a wreck, called the “Black Rock Ship”, that subsequent research has suggested may be that of the Trouvadore. In November 2008, a cooperative marine archaeology expedition, funded by the United States NOAA, confirmed that the wreck has artefacts whose style and date of manufacture link them to the Trouvadore.[19][20][21]

In 1848, Britain designated the Turks and Caicos as a separate colony under a council president. In 1873, the islands were made part of the Jamaica colony; in 1894, the chief colonial official was restyled commissioner. In 1917, Canadian Prime Minister Robert Borden suggested that the Turks and Caicos join Canada, but the suggestion was rejected by British Prime Minister David Lloyd George. The islands remained a dependency of Jamaica until 1959.[citation needed]

On 4 July 1959, the islands were again designated as a separate colony, the last commissioner being restyled administrator. The governor of Jamaica also continued as the governor of the islands. When Jamaica was granted independence from Britain in August 1962, the Turks and Caicos Islands became a Crown colony. From 1965, the governor of the Bahamas also was governor of the Turks and Caicos Islands and oversaw affairs for the islands.[citation needed]

When the Bahamas gained independence in 1973, the Turks and Caicos received their own governor (the last administrator was restyled). In 1974, Canadian New Democratic Party MP Max Saltsman tried to use his Private Member’s Bill for legislation to annex the islands to Canada, but it did not pass in the Canadian House of Commons.[22]

Since August 1976, the islands have had their own government headed by a chief minister, the first of whom was James Alexander George Smith McCartney.

The islands’ political troubles in the early 21st century resulted in a rewritten constitution promulgated in 2006. The UK took over direction of the government in 2009.[23][24]

In 2013 and 2014, interest in annexing the Turks and Caicos to Canada was renewed when Edmonton East MP Peter Goldring met the Turks and Caicos’ premier Rufus Ewing at a reception at Toronto’s Westin Harbour Castle hotel.[25][26]

The two island groups are in the North Atlantic Ocean, southeast of the Bahamas, northwest of Puerto Rico, north of Hispaniola, and about 1,000 kilometres (620 mi) from Miami in the United States, at 21°45′N 71°35′W (21.750°N, 71.583°W). The territory is geographically contiguous to the Bahamas, both comprising the Lucayan Archipelago, but is politically a separate entity. The Caicos Islands are separated by the Caicos Passage from the closest Bahamian islands, Mayaguana and Great Inagua.
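
The coordinates above are given both in degrees and minutes and in decimal degrees; the conversion is simple arithmetic, sketched below (longitude west of Greenwich is negative by convention).

```python
# Convert degrees/minutes(/seconds) to decimal degrees.
def dms_to_decimal(degrees, minutes, seconds=0.0):
    return degrees + minutes / 60 + seconds / 3600

print(dms_to_decimal(21, 45))   # 21.75      -> 21.750 N
print(-dms_to_decimal(71, 35))  # -71.583...  -> 71.583 W
```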

The eight main islands and more than 299 smaller islands[citation needed] have a total land area of 616.3 square kilometres (238.0 square miles),[b] consisting primarily of low, flat limestone with extensive marshes and mangrove swamps and 332 square kilometres (128 sq mi) of beach front. The weather is usually sunny (the islands are generally said to receive 350 days of sun each year[27]) and relatively dry, but the islands suffer frequent hurricanes. The islands have limited natural fresh water resources; private cisterns collect rainwater for drinking. The primary natural resources are spiny lobster, conch, and other shellfish.

The two distinct island groups are separated by the Turks Island Passage.

The Turks Islands are separated from the Caicos Islands by the Turks Island Passage, which is more than 2,200 m (7,200 ft) deep.[28] The islands form a chain that stretches north–south. The 2012 Census population was 4,939 on the two main islands, the only inhabited islands of the group:

Together with nearby islands, all on Turks Bank, those two main islands form two of the six administrative districts of the territory that fall within the Turks Islands. Turks Bank, which is smaller than Caicos Bank, has a total area of about 324 km2 (125 sq mi).[30]

25 kilometres (16 mi) east of the Turks Islands and separated from them by the Mouchoir Passage is the Mouchoir Bank. Although it has no emergent cays or islets, some parts are very shallow and the water breaks on them. Mouchoir Bank is part of the Turks and Caicos Islands and falls within its Exclusive Economic Zone. It measures 960 square kilometres (370 sq mi) in area.[31] Two banks further east, Silver Bank and Navidad Bank, are geographically a continuation, but belong politically to the Dominican Republic.

The largest island in the Caicos archipelago is the sparsely inhabited Middle Caicos, which measures 144 square kilometres (56 sq mi) in area but had a population of only 168 at the 2012 Census. The most populated island is Providenciales, with 23,769 inhabitants in 2012 and an area of 122 square kilometres (47 sq mi). North Caicos (116 square kilometres (45 sq mi) in area) had 1,312 inhabitants. South Caicos (21 square kilometres (8.1 sq mi) in area) had 1,139 inhabitants, and Parrot Cay (6 square kilometres (2.3 sq mi) in area) had 131 inhabitants. East Caicos (administered as part of South Caicos District) is uninhabited, while the only permanent inhabitants of West Caicos (administered as part of Providenciales District) are resort staff.

The Turks and Caicos Islands feature a relatively dry and sunny marine tropical climate[32] with relatively consistent temperatures throughout the course of the year. Summertime temperatures rarely exceed 33 °C (91 °F) and winter nighttime temperatures rarely fall below 18 °C (64 °F).

The Turks and Caicos Islands are a British Overseas Territory. Their sovereign is Queen Elizabeth II of the United Kingdom, represented by a governor appointed by the monarch on the advice of the Foreign Office. The United Nations Special Committee on Decolonization includes the territory on the United Nations list of Non-Self-Governing Territories.

With the election of the territory’s first Chief Minister, J.A.G.S. McCartney, the islands adopted a constitution on 30 August 1976, which is Constitution Day, the national holiday.

The constitution was suspended in 1986, but restored and revised on 5 March 1988. In the interim, two Advisory Councils took over, with members from the Progressive National Party (PNP), the People’s Democratic Movement (PDM) and the National Democratic Alliance (NDA), a splinter group from the PNP:[35]

A new constitution came into force on 9 August 2006, but was in parts suspended and amended in 2009. The territory’s legal system is based on English common law, with a small number of laws adopted from Jamaica and the Bahamas. Suffrage is universal for those over 18 years of age. English is the official language. Grand Turk is the administrative and political capital of the Turks and Caicos Islands and Cockburn Town has been the seat of government since 1766.

Under the suspended 2006 constitution, the head of government was the premier, a post filled by the leader of the majority party. The cabinet consisted of three ex officio members and five appointed by the governor from among the members of the House of Assembly. The unicameral House of Assembly consisted of 21 seats, of which 15 were popularly elected; members served four-year terms. Elections in the Turks and Caicos Islands were held on 24 April 2003 and again on 9 February 2007. The Progressive National Party, led by Michael Misick, held thirteen seats, and the People’s Democratic Movement, led by Floyd Seymour, held two seats.

Under the new constitution that came into effect in October 2012, legislative power is held by a unicameral House of Assembly, consisting of 19 seats, 15 elected and 4 appointed by the governor; of elected members, five are elected at large and 10 from single member districts for four-year terms. After the 2012 elections, Rufus Ewing of the Progressive National Party won a narrow majority of the elected seats and was appointed premier.[36]

The Turks and Caicos Islands participate in the Caribbean Development Bank, are an associate member of CARICOM and a member of the Universal Postal Union, and maintain an Interpol sub-bureau. Defence is the responsibility of the United Kingdom.

The winning party of Turks and Caicos’ first general election in 1976, the People’s Democratic Movement (PDM) under “Jags” McCartney, sought to establish a framework and accompanying infrastructure in the pursuit of an eventual policy of full independence for the islands. However, with the early death of McCartney, confidence in the country’s leadership waned. In 1980, the PDM agreed with the British government that independence would be granted in 1982 if the PDM was re-elected in the elections of that year.[citation needed] That election was effectively a referendum on the independence issue and was won by the pro-dependency Progressive National Party (PNP), which claimed victory again four years later. With these developments, the independence issue largely faded from the political scene.[citation needed]

However, in the mid-2000s, the issue of independence for the islands was again raised. In April 2006, PNP Premier Michael Misick reaffirmed that his party saw independence from Britain as the “ultimate goal” for the islands, but not at the present time.[37]

In 2008, opponents of Misick accused him of moving toward independence for the islands to dodge a commission of inquiry, which examined reports of corruption by the Misick Administration.[38]

The Turks and Caicos Islands are divided into six administrative districts (two in the Turks Islands and four in the Caicos Islands), headed by district commissioners. For the House of Assembly, the Turks and Caicos Islands are divided into 15 electoral districts (four in the Turks Islands and eleven in the Caicos Islands).

A great number of tourists who visit the Turks and Caicos Islands are Canadian. In 2011, arrivals from Canada were about 42,000 out of a total from all countries of about 354,000.[39] Owing to this, the islands’ status as a British colony, and historical trade links, some politicians in Canada and the Turks and Caicos have suggested some form of union between Canada and the British territory. In 1917, Canadian Prime Minister Robert Borden attempted to persuade the British government to annex the islands, and the idea has been discussed several times over the last century. In 1974, the government of the islands sent Canada a “serious offer” to join the country; however, at the time the Canadian government was focused on its free trade agreement with the United States.

In 2013, Rufus Ewing, the Premier of the islands, rejected the idea of the islands joining Canada; however, the following year he stated that he wasn’t “closing the door completely” on the possibility.[40]

In April 2016, it was reported that the New Democratic Party, one of the three major political parties in Canada, was considering a resolution at an upcoming national convention to discuss the possibility of working with lawmakers and citizens of Turks and Caicos Islands to have it join Canada as the eleventh Canadian province.[41]

In 2008, after members of the British parliament conducting a routine review of the administration received several reports of high-level official corruption in the Turks and Caicos,[42] then-Governor Richard Tauwhare announced the appointment of a Commission of Enquiry into corruption.[43] The same year, Premier Michael Misick himself became the focus of a criminal investigation after a woman identified by news outlets as an American citizen residing in Puerto Rico accused him of sexually assaulting her,[44] although he strongly denied the charge.[45]

On Monday, 16 March 2009, the UK threatened to suspend self-government in the islands and transfer power to the new governor, Gordon Wetherell, over systemic corruption.[46]

On 18 March 2009, on the advice of her UK ministers, Queen Elizabeth II issued an Order in Council giving the Governor the power to suspend those parts of the 2006 Constitution that deal with ministerial government and the House of Assembly, and to exercise the powers of government himself. The order, which would also establish an Advisory Council and Consultative Forum in place of the House of Assembly, would come into force on a date to be announced by the governor, and remain in force for two years unless extended or revoked.[47]

On 23 March 2009, after the enquiry found evidence of “high probability of systemic corruption or other serious dishonesty”, Misick resigned as Premier to make way for a new, unified government.[48] Politicians were accused of selling crown land for personal gain and misusing public funds.[49] The following day, Galmo Williams was sworn in as his replacement.[48][50] Misick denied all charges, and referred to the British government’s debate on whether to remove the territory’s sovereignty as “tantamount to being re-colonised. It is a backwards step completely contrary to the whole movement of history.”[49]

On 14 August 2009, after Misick’s last appeals failed, the Governor, on the instructions of the Foreign and Commonwealth Office, imposed direct rule on the Turks and Caicos Islands by authority of the 18 March 2009 Order in Council issued by the Queen. The islands’ administration was suspended for up to two years, with possible extensions, and power was transferred to the Governor, with the United Kingdom also stationing a supply vessel between the Turks and Caicos islands. Parliamentary Under-Secretary of State for Foreign Affairs Chris Bryant said of the decision to impose rule, “This is a serious constitutional step which the UK Government has not taken lightly but these measures are essential in order to restore good governance and sound financial management.”[51]

The move was met with vehement opposition by the former Turks and Caicos government, with Misick’s successor Williams calling it a “coup”, and stating that, “Our country is being invaded and re-colonised by the United Kingdom, dismantling a duly elected government and legislature and replacing it with a one-man dictatorship, akin to that of the old Red China, all in the name of good governance.”[51] Despite this, the civilian populace was reported to be largely welcoming of the enforced rule.[51] The British government stated that they intended to keep true to their word that the country would regain home rule in two years or less, and Foreign Office Minister Chris Bryant said that elections would be held in 2011, “or sooner”.[52] Governor Wetherell stated that he would aim to “make a clean break from the mistakes of the past” and create “a durable path towards good governance, sound financial management and sustainable development”. Wetherell added: “In the meantime we must all learn to foster a quality of public spirit, listen to all those who have the long-term interests of these islands at heart, and safeguard the fundamental assets of the Territory for future generations… Our guiding principles will be those of transparency, accountability and responsibility. I believe that most people in the Turks and Caicos will welcome these changes.”[51]

On 12 June 2012, British Foreign Secretary William Hague announced that fresh elections would be held in November 2012, stating that there had been “significant progress with an ambitious reform programme” and “sufficient progress, on the milestones and on putting in place robust financial controls”.[53] A new constitution was approved on 15 October 2012. The terms of the election are specified in the constitution.[54]

The judicial branch of government is headed by a Supreme Court; appeals are heard by the Court of Appeal and final appeals by the United Kingdom’s Judicial Committee of the Privy Council. There are three justices of the Supreme Court, a Chief Justice and two others. The Court of Appeal consists of a president and at least two justices of appeal.

Magistrates’ Courts are the lower courts and appeals from Magistrates’ Courts are sent to the Supreme Court.

As of September 2014, the Chief Justice is Justice Margaret Ramsay-Hale.[55]

Eight of the thirty islands in the territory are inhabited, with a total population, estimated from preliminary results of the census of 25 January 2012 (released on 12 August 2012), of 31,458 inhabitants, an increase of 58.2% from the population of 19,886 reported in the 2001 census.[2] One-third of the population is under 15 years old, and only 4% are 65 or older. In 2000 the population was growing at a rate of 3.55% per year. The infant mortality rate was 18.66 deaths per 1,000 live births and the life expectancy at birth was 73.28 years (71.15 years for males, 75.51 years for females). The total fertility rate was 3.25 children born per woman. The annual population growth rate is 2.82%.
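
The quoted 58.2% increase can be checked directly from the two census counts:

```python
# Percent change between the 2001 and 2012 census populations.
pop_2001, pop_2012 = 19_886, 31_458
print(f"{(pop_2012 - pop_2001) / pop_2001:.1%}")  # 58.2%, matching the text
```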

The adult population is composed of 57.5% immigrants (“non-belongers”). The CIA World Factbook describes the islanders’ ethnicity as African 87%, European 7.9%, Mixed 2.5%, East Indian 1.3% and Other 0.7%.[58]

Vital statistics related to the population are:[59][60][61]

The official language of the islands is English; the population also speaks Turks and Caicos Islands Creole,[62] which is similar to Bahamian Creole.[63] Owing to the islands’ proximity to Cuba and Hispaniola, large Haitian Creole and Spanish-speaking communities have developed in the territory through immigration, both legal and illegal, from Creole-speaking Haiti and from Spanish-speaking Cuba and the Dominican Republic.[64]

72.8% of the population of Turks and Caicos are Christian (Baptists 35.8%, Church of God 11.7%, Roman Catholics 11.4%, Anglicans 10%, Methodists 9.3%, Seventh-Day Adventists 6%, Jehovah’s Witnesses 1.8% and Others 14%).[58]

Catholics are served by the Mission “Sui Iuris” for Turks and Caicos, which was erected in 1984 with territory taken from the then Diocese of Nassau.

The Turks and Caicos Islands are best known for ripsaw music. The islands are also known for their annual Music and Cultural Festival, which showcases local talent and performances by music celebrities from around the Caribbean and the United States.

Women continue traditional crafts of using straw to make baskets and hats on the larger Caicos islands. It is possible that this continued tradition is related to the liberated Africans who joined the population directly from Africa in the 1830s and 1841 from shipwrecked slavers; they brought cultural craft skills with them.[21]

The islands’ most popular sports are fishing, sailing, football (soccer) and cricket (which is the national sport).

Turks and Caicos cuisine is based primarily around seafood, especially conch.[65] Two common dishes, whilst not traditionally ‘local’, are conch fritters and conch salad.[66]

Because the Turks and Caicos Islands are a British Overseas Territory and not an independent country, they could not, at one time, confer citizenship. Instead, people with close ties to Britain’s Overseas Territories all held the same nationality: British Overseas Territories Citizen (BOTC), as defined by the British Nationality Act 1981 and subsequent amendments. BOTC status, however, does not confer any right to live in any British Overseas Territory, including the territory from which it is derived. Instead, the rights normally associated with citizenship derive from what is called Belonger status; island natives or descendants of natives are said to be Belongers.

In 2002, the British Overseas Territories Act restored full British citizenship status to all citizens of British Overseas Territories, including the Turks and Caicos. See British Overseas Territories citizen#Access to British citizenship.

Public education is supported by taxation, and is mandatory for children aged five to sixteen. Primary education lasts for six years and secondary education lasts for five years. In the 1990s, the territory launched the Primary In-Service Teacher Education Project (PINSTEP) in an effort to increase the skills of its primary school teachers, nearly one-quarter of whom were unqualified. Turks and Caicos also worked to refurbish its primary schools, reduce textbook costs, and increase the equipment and supplies given to schools. For example, in September 1993, each primary school was given enough books to allow teachers to establish in-class libraries.[citation needed] In 2001, the student–teacher ratio at the primary level was roughly 15:1.[citation needed] The Turks and Caicos Islands Community College offers free higher education to students who have successfully completed their secondary education. The community college also oversees an adult literacy program. The Ministry of Health, Education, Youth, Sports, and Women’s Affairs oversees education in Turks and Caicos. Once students complete their education at the Turks and Caicos Islands Community College, they may further their education at a university in the United States, Canada, or the United Kingdom for free, provided they commit to working in the Turks and Caicos Islands for four years afterwards.

The Turks and Caicos established a National Health System in 2010. Residents contribute to a National Health Insurance Plan through salary deductions and nominal user fees. The majority of care is provided by the public–private partnership hospitals in Providenciales and Grand Turk, supplemented by a number of government clinics and private clinics. The hospital, which opened in 2010, is administered by Interhealth Canada and was accredited by Accreditation Canada in 2012 and 2015.

In 2009, GDP contributions were as follows:[67] Hotels & Restaurants 34.67%, Financial Services 13.12%, Construction 7.83%, Transport, Storage & Communication 9.90%, and Real Estate, Renting & Business Activities 9.56%.[clarification needed] Most capital goods and food for domestic consumption are imported.

In 2010/2011, major sources of government revenue included Import Duties (43.31%), Stamp Duty on Land Transaction (8.82%), Work Permits and Residency Fees (10.03%) and Accommodation Tax (24.95%). The territory’s gross domestic product as of late 2009 is approximately US$795 million (per capita $24,273).[67]

The labour force totalled 27,595 workers in 2008. The labour force distribution in 2006 was as follows:

The unemployment rate in 2008 was 8.3%. In 2007–2008, the territory took in revenues of $206.79 million against expenditures of $235.85 million. In 1995, the island received economic aid worth $5.7 million. The territory’s currency is the United States dollar, with a few government fines (such as airport infractions) being payable in pounds sterling. Most commemorative coin issues are denominated in crowns.

The primary agricultural products include limited amounts of maize, beans, cassava (tapioca) and citrus fruits. Fish and conch are the only significant exports, with some $169.2 million of lobster, dried and fresh conch, and conch shells exported in 2000, primarily to the United Kingdom and the United States. In recent years, however, the catch has been declining. The territory used to be an important trans-shipment point for South American narcotics destined for the United States, but due to the ongoing pressure of a combined American, Bahamian and Turks and Caicos effort this trade has been greatly reduced.

The islands import food and beverages, tobacco, clothing, and manufactured goods and construction materials, primarily from the United States and the United Kingdom. Imports totalled $581 million in 2007.

The islands produce and consume about 5 GWh of electricity per year, all of which comes from fossil fuels.
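
For a rough sense of scale, dividing that output by the 2012 census population gives a per-capita figure; this is back-of-the-envelope arithmetic on the document’s own numbers, not an official statistic.

```python
# 5 GWh/year spread over the 2012 census population of 31,458.
total_kwh = 5 * 1_000_000   # 5 GWh expressed in kWh
population = 31_458
print(f"{total_kwh / population:.0f} kWh per resident per year")  # ~159
```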

The United States was the leading source of tourists in 1996, accounting for more than half of the 87,000 visitors; another major source of tourists is Canada. Tourist arrivals had risen to 264,887 in 2007 and to 351,498 by 2009. In 2010, a total of 245 cruise ships arrived at the Grand Turk Cruise Terminal, carrying a total of 617,863 visitors.[68]

The government is pursuing a two-pronged strategy to increase tourism. Upscale resorts are aimed at the wealthy, while a large new cruise ship port and recreation centre has been built for the masses visiting Grand Turk. The Turks and Caicos Islands have one of the longest coral reefs in the world[69] and the world’s only conch farm.[70]

The French vacation village company Club Méditerranée (Club Med) has an all-inclusive adult resort called ‘Turkoise’ on one of the main islands.

Several Hollywood stars have built homes in the Turks and Caicos, including Dick Clark and Bruce Willis. Ben Affleck and Jennifer Garner married on Parrot Cay in 2005. Actress Eva Longoria and her ex-husband Tony Parker went to the islands for their honeymoon in July 2007, and High School Musical actors Zac Efron and Vanessa Hudgens vacationed there. In 2013, Hollywood writer/director Rob Margolies and actress Kristen Ruhlin also vacationed there. Musician Nile Rodgers has a vacation home on the islands.

To boost tourism during the Caribbean low season of late summer, the Turks and Caicos Tourist Board has, since 2003, organised and hosted an annual series of concerts, the Turks & Caicos Music and Cultural Festival.[71] Held in a temporary bandshell at the Turtle Cove Marina in The Bight on Providenciales, the festival lasts about a week and has featured several notable international recording artists, such as Lionel Richie, LL Cool J, Anita Baker, Billy Ocean, Alicia Keys, John Legend, Kenny Rogers, Michael Bolton, Ludacris, Chaka Khan, and Boyz II Men.[72] More than 10,000 people attend annually.[72]

The Turks and Caicos Islands are a biodiversity hotspot. The islands have many endemic species and others of international importance, due to the conditions created by the oldest established salt-pan development in the Caribbean. The variety of species includes a number of endemic lizards, snakes, insects and plants, and marine organisms; the islands are also an important breeding area for seabirds.[79]

The UK and Turks and Caicos Islands governments have joint responsibility for conservation and preservation, in order to meet obligations under international environmental conventions.[80]

Due to this significance, the islands are on the United Kingdom’s tentative list for future UNESCO World Heritage Sites.[81]

Providenciales International Airport is the main entry point for the Turks and Caicos Islands. Altogether, there are seven airports, located on each of the inhabited islands. Five have paved runways (three of which are approximately 2,000 m (6,600 ft) long and one approximately 1,000 m (3,300 ft) long), and the remaining two have unpaved runways (one of which is approximately 1,000 m (3,300 ft) long and the other significantly shorter).[82]

The islands have 121 kilometres (75 miles) of highway, 24 km (15 mi) paved and 97 km (60 mi) unpaved. Like the United States Virgin Islands and British Virgin Islands, the Turks and Caicos Islands drive on the left, but use left-hand-drive vehicles imported from the United States.[83]

The territory’s main international ports and harbours are on Grand Turk and Providenciales.[84]

The islands have no significant railways. In the early twentieth century, East Caicos operated a horse-drawn railway to transport sisal from the plantation to the port. The 14-kilometre (8.7-mile) route was removed after sisal trading ceased.[85]

There is no postal delivery in the Turks and Caicos; mail is picked up at one of four post offices on each of the major islands.[86] Mail is transported three or seven times a week, depending on the destination.[87] The Post Office is part of the territory’s government and reports to the Minister of Government Support Services.[88]

Mobile phone service is provided by Cable & Wireless Worldwide, using GSM 850 and TDMA; Digicel, using GSM 900 and 1900; and Islandcom Wireless, using 3G 850. Cable & Wireless provides CDMA mobile phone service in Providenciales and Grand Turk. The system is connected to the mainland by two submarine cables and an Intelsat earth station. There were three AM radio stations (one inactive) and six FM stations (no shortwave) in 1998. The most popular station is Power 92.5 FM, which plays Top 100 hits. Over 8,000 radio receivers are owned across the territory.

West Indies Video (WIV) has been the sole cable television provider for the Turks and Caicos Islands for over two decades and WIV4 (a subsidiary of WIV) has been the only broadcast station in the islands for over 15 years; broadcasts from the Bahamas can also be received. The territory has two internet service providers and its country code top level domain (ccTLD) is “.tc”. Amateur radio callsigns begin with “VP5” and visiting operators frequently work from the islands.

WIV introduced Channel 4 News in 2002 broadcasting local news and infotainment programs across the country. Channel 4 was re-launched as WIV4 in November 2007 and began providing reliable daily online Turks and Caicos news with the WIV4 News blog,[89] an online forum connecting TCI residents with others interested in the islands, while keeping users updated on the TCI’s daily news.

Since 2013, 4NEWS has been the islands’ HD cable news service, with television studios in Grace Bay, Providenciales. DigicelPlay is the local cable provider.

Turks and Caicos’s newspapers include the Turks and Caicos Weekly News, the Turks and Caicos SUN[90] and the Turks and Caicos Free Press.[91] All three publications are weekly. The Weekly News and the Sun both have supplement magazines. Other local magazines include Times of the Islands,[92] s3 Magazine,[93] Real Life Magazine, Baller Magazine, and Unleashed Magazine.

From 1950 to 1981, the United States had a missile tracking station on Grand Turk, which NASA used in the early days of the American space program. After his three Earth orbits in 1962, American astronaut John Glenn successfully landed in the nearby ocean and was brought ashore on Grand Turk Island.[94][95]

Cricket is the islands’ national sport.[96] The national team takes part in regional tournaments in the ICC Americas Championship,[97] and played one Twenty20 match as part of the 2008 Stanford 20/20.[98] Two domestic leagues exist, one on Grand Turk with three teams and another on Providenciales.[96]

As of 4 July 2012, the Turks and Caicos Islands’ football team shared the position of lowest-ranked national men’s football team in the world, at 207th.[99]

Because the territory is not recognized by the International Olympic Committee, Turks and Caicos Islanders compete for Great Britain at the Olympic Games.[citation needed]

27b. http://www.cbc.ca/news/background/turksandcaicos/


Read the original:

Turks and Caicos Islands – Wikipedia

Posted in Private Islands | Comments Off on Turks and Caicos Islands – Wikipedia

Food Supplements | European Food Safety Authority

Posted: December 26, 2016 at 3:03 pm

Food supplements are concentrated sources of nutrients or other substances with a nutritional or physiological effect, whose purpose is to supplement the normal diet. Food supplements are marketed in ‘dose’ form, for example as pills, tablets, capsules or liquids in measured doses. Supplements may be used to correct nutritional deficiencies or maintain an adequate intake of certain nutrients. However, in some cases excessive intake of vitamins and minerals may be harmful or cause unwanted side effects; maximum levels are therefore necessary to ensure their safe use in food supplements.

EU regulatory framework

The European Commission has established harmonised rules to help ensure that food supplements are safe and properly labelled. In the EU, food supplements are regulated as foods and the legislation focuses on vitamins and minerals used as ingredients of food supplements.

The main EU legislation is Directive 2002/46/EC related to food supplements containing vitamins and minerals.

The Directive sets out labelling requirements and requires that EU-wide maximum and minimum levels are set for each vitamin and mineral added to supplements. As excessive intake of vitamins and minerals may result in adverse effects, the Directive provides for the setting of maximum amounts of vitamins and minerals added to food supplements. This task has been delegated to the Commission and is currently ongoing.

In addition, its Annex II contains a list of permitted vitamin or mineral substances that may be added for specific nutritional purposes in food supplements. Annex II has been amended by Regulation 1170/2009 of 30 November 2009.

Vitamin and mineral substances may be considered for inclusion in the lists following EFSA’s evaluation of an appropriate scientific dossier concerning the safety and bioavailability of the individual substance. Companies wishing to market a substance not included in the permitted list need to submit an application to the European Commission.

Guidance issued by the Scientific Committee on Food in 2001 sets out the data that should be provided in the dossier supporting an application for a new substance.

EFSA’s role and activities

EFSA was asked by the European Commission to evaluate the safety and bioavailability of nutrient sources proposed for addition to the list of permitted substances in Annex II of the food supplements Directive. In July 2009, EFSA completed the first comprehensive assessment of substances currently sold in the EU as sources of vitamins and minerals in food supplements.

Based on EFSA’s work, the European Commission reviewed the list of permitted vitamin or mineral substances that may be added to food supplements.

Between 2005 and 2009 EFSA examined a total of 533 applications. Of these, 186 applications were withdrawn during the evaluation process, and EFSA received insufficient scientific evidence to be able to assess around half of the remaining applications. Possible safety concerns were identified in relation to 39 applications.

The evaluations were carried out by the Panel on food additives and nutrient sources added to food (ANS). The Panel’s evaluations involved judging the safety of a nutrient substance at the intake levels suggested by the applicant, based on the best scientific knowledge available. The Panel also assessed the bioavailability of the nutrient from the source, which is the effectiveness with which the mineral or vitamin is released from the source into the tissues of the body. Previously, the former Panel on food additives, flavourings, processing aids and materials in contact with food (AFC) was responsible for this work.

Moreover, EFSA’s NDA Panel has performed a comprehensive evaluation of the possible adverse health effects of individual micronutrients at intakes exceeding the dietary requirements and, where possible, established Tolerable Upper Intake Levels (ULs) for different population groups. ULs represent the highest level of chronic daily intake of a nutrient that is not likely to pose a risk of adverse health effects to humans. The ULs defined by the NDA Panel and by the former Scientific Committee on Food (SCF) are used as a reference by the ANS Panel in its evaluations of the safety of nutrient substances added to food supplements. Through this work, EFSA will provide support to the European Commission in establishing maximum limits for vitamins and minerals in food supplements and fortified foods.
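
To make the role of a UL concrete, here is a minimal sketch of the kind of comparison a risk assessment involves: checking a supplement’s proposed daily dose of each nutrient against the UL for the relevant population group. The UL values in the sketch are placeholders, not EFSA’s published figures, and a real evaluation also weighs intake from the rest of the diet.

```python
# Hypothetical ULs (mg/day) for illustration only -- not EFSA's published values.
ULS_MG_PER_DAY = {"vitamin C": 2000, "zinc": 25}

def check_against_uls(daily_doses_mg):
    """Flag any nutrient whose proposed daily dose exceeds its (assumed) UL."""
    for nutrient, dose in daily_doses_mg.items():
        ul = ULS_MG_PER_DAY.get(nutrient)
        if ul is None:
            print(f"{nutrient}: no UL on record; needs case-by-case assessment")
        elif dose > ul:
            print(f"{nutrient}: {dose} mg/day exceeds the assumed UL of {ul} mg/day")
        else:
            print(f"{nutrient}: {dose} mg/day is within the assumed UL of {ul} mg/day")

check_against_uls({"vitamin C": 1000, "zinc": 30})
```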

Further information:
DG Health and Consumers: Food Supplements
DG Health and Consumers: Addition of vitamins and minerals
DG Health and Consumers: Tolerable upper intake levels for vitamins and minerals

Read more from the original source:

Food Supplements | European Food Safety Authority

Posted in Food Supplements | Comments Off on Food Supplements | European Food Safety Authority

Nihilist movement – Wikipedia

Posted: December 22, 2016 at 12:51 pm

The Two Nihilist Revolutions

Russian nihilism can be dissected into two periods: the foundational period (1860–1869), when the ‘counter-cultural’ aspects of nihilism scandalized Russia, when even the smallest of indiscretions resulted in nihilists being sent to Siberia or imprisoned for lengthy periods of time, and when the philosophy of nihilism was formed;[2] and the revolutionary period (1870–1881), when the pamphlet The Catechism of a Revolutionist transformed the movement, which had until then produced little more than mild propaganda, into a movement-with-teeth with a will to wage war against the tsarist regime, mounting dozens of actions against the Russian state. The revolutionary period ended with the assassination of Tsar Alexander II (March 13, 1881) by a series of bombs, and the consequent crushing of the nihilist movement.[3]

Mikhail Bakunin’s (1814–1876) “Reaction in Germany” (1842) included a famous dictum: “Let us therefore trust the eternal Spirit which destroys and annihilates only because it is the unfathomable and eternal source of all life. The passion for destruction is a creative passion, too!”[4] This piece of literature anticipated and instigated the ideas of the nihilists. In Russia, Bakunin was considered a Westernizer because of the influences that spread the ideology of anarchism beyond his nation, to the rest of Europe as well as Russia.[5] While he is inexorably linked to both the foundational and revolutionary periods of nihilism, Bakunin was a product of the earlier generation, whose vision, ultimately, was not the same as the nihilist view. He stated this best himself: “I am a free man only so far as I recognize the humanity and liberty of all men around me. In respecting their humanity, I respect my own.” This general humanitarian instinct is in contrast to the nihilist proclamations of having a “hate with a great and holy hatred” or calling for the “annihilation of aesthetics”.[6]

Nikolay Chernyshevsky was the first to incorporate nihilism into the socialist agenda. The nihilist contribution to socialism in general was the concept that the peasant was an agent of social change (Chernyshevsky, A Criticism of Philosophical Prejudices Against the Obshchina (1858)),[7] rather than the bourgeois reformers of the revolutions of 1848 or the proletariat of Marx (a concept that would not reach Russia until later). Agitation for this position landed Chernyshevsky in prison in 1864, followed by exile in Siberia for the next 25 years (although the specific accusations on which he was convicted were a concoction).[8] The first group inspired by nihilist ideas to form and work towards social change did so as a secret society called Land and Liberty. This group’s name was also taken by another, entirely separate group during the revolutionary nihilist period; the first Land and Liberty conspired to support the Polish independence movement and to agitate the peasants who were burdened with debt as a result of the crippling redemption payments required by the emancipation of the serfs in 1861. Polish independence was not of particular interest to the nihilists, and after a plot to incite Kazan peasants to revolt failed, Land and Liberty folded (1863).[9]

After the failure, the Russian government began to actively hunt nihilist revolutionaries, so the first secret nihilist societies were created. One of the first to act in secrecy was called The Organization, and it created a boys’ school in a Moscow slum in order to train revolutionaries. In addition it had a secret sub-group called Hell, whose purpose was political terrorism, with the assassination of the Tsar as its ultimate goal. This resulted in the failed assassination attempt by Dmitry Karakozov on 4 April 1866. Karakozov was tried and hanged at Smolensk Field in St Petersburg. The leader of The Organization, Nicholas Ishutin, was also tried and sentenced to death, but was instead exiled to Siberia for life.[10] Thus ended The Organization and began the White Terror of the rest of the 1860s.

The White Terror began with the Tsar putting Count Michael Muravyov (known as ‘Hangman Muravyov’ for his treatment of Polish rebels in prior years) in charge of the suppression of the nihilists. The two leading radical journals (The Contemporary and Russian Word) were banned, liberal reforms were minimized for fear of public reaction, and the educational system was reformed to stifle the existing revolutionary spirit.[11] This action by the Russian state marks the end of the foundational period of nihilism.

The entrance on the scene of Sergei Nechayev symbolizes the transformation from the foundational period to the revolutionary period. Nechayev, the son of a serf (unusual, as most nihilists came from a slightly higher social class, what we would call the lower middle class), desired an escalation of the discourse on social transformation. Nechayev argued that just as the European monarchies used the ideas of Machiavelli, and the Catholic Jesuits practiced absolute immorality to achieve their ends, there was no action that could not also be used for the sake of the people’s revolution.[12] A scholar noted that “His apparent immorality [more an amorality] derived from the cold realization that both Church and State are ruthlessly immoral in their pursuit of total control. The struggle against such powers must therefore be carried out by any means necessary.”[13] Nechayev’s social cachet was greatly increased by his association with Bakunin in 1869 and his extraction of funds from the Bakhmetiev Fund for Russian revolutionary propaganda.

The image of Nechayev is as much a result of his Catechism of a Revolutionist (1869) as of any actions he actually took. The Catechism is an important document: it establishes the clear break between the formation of nihilism as a political philosophy and what it became as a practice of revolutionary action. It presents the revolutionary as a much-transformed figure from the nihilist of the past decade. Whereas the nihilist may have practiced asceticism or argued for an uninhibited hedonism, Nechayev held that the revolutionary, by definition, must live devoted to one aim and not allow himself to be distracted by emotions or attachments.[14] Friendship was contingent on revolutionary fervor, relationships with strangers were quantified in terms of what resources they offered the revolution, and everyone had a role during the revolutionary moment that boiled down to how soon they would be lined up against the wall or when they would accept that they had to do the shooting. The uncompromising tone and content of the Catechism were influential far beyond the mere character Nechayev personified in the minds of the revolutionaries.[15] Part of the reason is the way in which it extended nihilist principles into a revolutionary program; the rest is that the Catechism gave the revolutionary project a form of constitution and weight that the men ‘of the sixties’ did not.

Bakunin, an admirer of Nechayev’s zeal and of the stories of his organization’s success, provided contacts and resources to send Nechayev back to Russia as the representative of the Russian Section of the World Revolutionary Alliance, which was also an imaginary organization.[citation needed] Upon his return to Russia, Nechayev formed the secret, cell-based organization People’s Vengeance.[citation needed] One student member of the organization, Ivan Ivanovich Ivanov,[citation needed] questioned the very existence of the Secret Revolutionary Committee that Nechayev claimed to represent.[citation needed] This suspicion of Nechayev’s modus operandi required action. Author Ronald Hingley wrote: “On the evening of 21 November 1869 the victim [Ivanov] was accordingly lured to the premises of the Moscow School of Agriculture, a hotbed of revolutionary sentiment, where Nechayev killed him by shooting and strangulation, assisted without great enthusiasm by three dupes. Nechayev’s accomplices were arrested and tried.”[16] Upon his return from Russia to Switzerland, Nechayev was rejected by Bakunin for his militant actions, and was eventually extradited back to Russia, where he spent the remainder of his life in the Peter and Paul Fortress.[17] He did, owing to his charisma and force of will, continue to influence events, maintaining a relationship with People’s Will and weaving even his jailers into his plots.[citation needed] He was found dead in his cell in 1882.[citation needed]

See the original post here:

Nihilist movement – Wikipedia

Posted in Nihilism | Comments Off on Nihilist movement – Wikipedia

Genetically modified food – Wikipedia

Posted: December 21, 2016 at 6:43 pm

Genetically modified foods or GM foods, also known as genetically engineered foods, are foods produced from organisms that have had changes introduced into their DNA using the methods of genetic engineering. Genetic engineering techniques allow for the introduction of new traits as well as greater control over traits than previous methods such as selective breeding and mutation breeding.[1]

Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its unsuccessful Flavr Savr delayed-ripening tomato.[2][3] Food modifications have primarily focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cotton. Genetically modified crops have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. GM livestock have been developed, although as of November 2013 none were on the market.[4]

There is a scientific consensus[5][6][7][8] that currently available food derived from GM crops poses no greater risk to human health than conventional food,[9][10][11][12][13] but that each GM food needs to be tested on a case-by-case basis before introduction.[14][15][16] Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe.[17][18][19][20] The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.[21][22][23][24]

However, there are ongoing public concerns related to food safety, regulation, labelling, environmental impact, research methods, and the fact that some GM seeds are subject to intellectual property rights owned by corporations.[25]

Genetically modified foods, GM foods or genetically engineered foods, are foods produced from organisms that have had changes introduced into their DNA using the methods of genetic engineering as opposed to traditional cross breeding.[26][27] In the US, the Department of Agriculture (USDA) and the Food and Drug Administration (FDA) favor the use of “genetic engineering” over “genetic modification” as the more precise term; the USDA defines genetic modification to include “genetic engineering or other more traditional methods.”[28][29]

According to the World Health Organization, “Genetically modified organisms (GMOs) can be defined as organisms (i.e. plants, animals or microorganisms) in which the genetic material (DNA) has been altered in a way that does not occur naturally by mating and/or natural recombination. The technology is often called ‘modern biotechnology’ or ‘gene technology’, sometimes also ‘recombinant DNA technology’ or ‘genetic engineering’. … Foods produced from or using GM organisms are often referred to as GM foods.”[26]

Human-directed genetic manipulation of food began with the domestication of plants and animals through artificial selection at about 10,500 to 10,100 BC.[30]:1 The process of selective breeding, in which organisms with desired traits (and thus with the desired genes) are used to breed the next generation and organisms lacking the trait are not bred, is a precursor to the modern concept of genetic modification (GM).[30]:1[31]:1 With the discovery of DNA in the early 1900s and various advancements in genetic techniques through the 1970s[32] it became possible to directly alter the DNA and genes within food.

The first genetically modified plant was produced in 1983, using an antibiotic-resistant tobacco plant.[33] Genetically modified microbial enzymes were the first application of genetically modified organisms in food production and were approved in 1988 by the US Food and Drug Administration.[34] In the early 1990s, recombinant chymosin was approved for use in several countries.[34][35] Cheese had typically been made using the enzyme complex rennet that had been extracted from cows’ stomach lining. Scientists modified bacteria to produce chymosin, which was also able to clot milk, resulting in cheese curds.[36]

The first genetically modified food approved for release was the Flavr Savr tomato in 1994.[2] Developed by Calgene, it was engineered to have a longer shelf life by inserting an antisense gene that delayed ripening.[37] China was the first country to commercialize a transgenic crop in 1993 with the introduction of virus-resistant tobacco.[38] In 1995, Bacillus thuringiensis (Bt) Potato was approved for cultivation, making it the first pesticide producing crop to be approved in the USA.[39] Other genetically modified crops receiving marketing approval in 1995 were: canola with modified oil composition, Bt maize, cotton resistant to the herbicide bromoxynil, Bt cotton, glyphosate-tolerant soybeans, virus-resistant squash, and another delayed ripening tomato.[2]

With the creation of golden rice in 2000, scientists had genetically modified food to increase its nutrient value for the first time.[40]

By 2010, 29 countries had planted commercialized biotech crops and a further 31 countries had granted regulatory approval for transgenic crops to be imported.[41] The US was the leading country in the production of GM foods in 2011, with twenty-five GM crops having received regulatory approval.[42] In 2015, 92% of corn, 94% of soybeans, and 94% of cotton produced in the US were genetically modified strains.[43]

The first genetically modified animal to be approved for food use was AquAdvantage salmon in 2015.[44] The salmon were transformed with a growth hormone-regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout enabling it to grow year-round instead of only during spring and summer.[45]

In April 2016, a white button mushroom (Agaricus bisporus) modified using the CRISPR technique received de facto approval in the United States, after the USDA said it would not have to go through the agency’s regulatory process. The agency considers the mushroom exempt because the editing process did not involve the introduction of foreign DNA.[46]

The most widely planted GMOs are designed to tolerate herbicides. By 2006 some weed populations had evolved to tolerate some of the same herbicides. Palmer amaranth is a weed that competes with cotton. A native of the southwestern US, it traveled east and was first found resistant to glyphosate in 2006, less than 10 years after GM cotton was introduced.[47][48][49]

Genetically engineered organisms are generated and tested in the laboratory for desired qualities. The most common modification is to add one or more genes to an organism’s genome. Less commonly, genes are removed or their expression is increased or silenced or the number of copies of a gene is increased or decreased.

Once satisfactory strains are produced, the producer applies for regulatory approval to field-test them, called a “field release.” Field-testing involves cultivating the plants on farm fields or growing animals in a controlled environment. If these field tests are successful, the producer applies for regulatory approval to grow and market the crop. Once approved, specimens (seeds, cuttings, breeding pairs, etc.) are cultivated and sold to farmers. The farmers cultivate and market the new strain. In some cases, the approval covers marketing but not cultivation.

According to the USDA, the number of field releases for genetically engineered organisms has grown from four in 1985 to an average of about 800 per year. Cumulatively, more than 17,000 releases had been approved through September 2013.[50]

Papaya was genetically modified to resist the ringspot virus. ‘SunUp’ is a transgenic red-fleshed Sunset papaya cultivar that is homozygous for the PRSV coat protein gene; ‘Rainbow’ is a yellow-fleshed F1 hybrid developed by crossing ‘SunUp’ and the nontransgenic yellow-fleshed ‘Kapoho’.[51] The New York Times stated, “in the early 1990s, Hawaii’s papaya industry was facing disaster because of the deadly papaya ringspot virus. Its single-handed savior was a breed engineered to be resistant to the virus. Without it, the state’s papaya industry would have collapsed. Today, 80% of Hawaiian papaya is genetically engineered, and there is still no conventional or organic method to control ringspot virus.”[52] The GM cultivar was approved in 1998.[53] In China, a transgenic PRSV-resistant papaya was developed by South China Agricultural University and was first approved for commercial planting in 2006; as of 2012, 95% of the papaya grown in Guangdong province and 40% of the papaya grown in Hainan province was genetically modified.[54]

The New Leaf potato, a GM food developed using a naturally occurring soil bacterium, Bacillus thuringiensis (Bt), was made to provide in-plant protection from the yield-robbing Colorado potato beetle.[55] The New Leaf potato, brought to market by Monsanto in the late 1990s, was developed for the fast food market. It was withdrawn in 2001 after retailers rejected it and food processors ran into export problems.[56]

As of 2005, about 13% of the zucchini (a form of squash) grown in the US was genetically modified to resist three viruses; that strain is also grown in Canada.[57][58]

In 2011, BASF requested the European Food Safety Authority’s approval for cultivation and marketing of its Fortuna potato as feed and food. The potato was made resistant to late blight by adding the resistance genes blb1 and blb2, which originate from the Mexican wild potato Solanum bulbocastanum.[59][60] In February 2013, BASF withdrew its application.[61]

In 2013, the USDA approved the import of a GM pineapple that is pink in color and that “overexpresses” a gene derived from tangerines and suppresses other genes, increasing production of lycopene. The plant’s flowering cycle was changed to provide for more uniform growth and quality. The fruit “does not have the ability to propagate and persist in the environment once they have been harvested,” according to USDA APHIS. According to Del Monte’s submission, the pineapples are commercially grown in a “monoculture” that prevents seed production, as the plant’s flowers are not exposed to compatible pollen sources. Importation into Hawaii is banned for “plant sanitation” reasons.[62]

In 2014, the USDA approved a genetically modified potato developed by J.R. Simplot Company that contained ten genetic modifications that prevent bruising and produce less acrylamide when fried. The modifications eliminate specific proteins from the potatoes, via RNA interference, rather than introducing novel proteins.[63][64]

In February 2015, Arctic Apples were approved by the USDA,[65] becoming the first genetically modified apple approved for sale in the US.[66] Gene silencing is used to reduce the expression of polyphenol oxidase (PPO), thus preventing the fruit from browning.[67]

Corn used for food and ethanol has been genetically modified to tolerate various herbicides and to express a protein from Bacillus thuringiensis (Bt) that kills certain insects.[68] About 90% of the corn grown in the U.S. was genetically modified in 2010.[69] In the US in 2015, 81% of corn acreage contained the Bt trait and 89% of corn acreage contained the glyphosate-tolerant trait.[43] Corn can be processed into grits, meal and flour as an ingredient in pancakes, muffins, doughnuts, breadings and batters, as well as baby foods, meat products, cereals and some fermented products. Corn-based masa flour and masa dough are used in the production of taco shells, corn chips and tortillas.[70]

Soybeans have been genetically modified to tolerate herbicides and produce healthier oils.[71] In 2015, 94% of soybean acreage in the U.S. was genetically modified to be glyphosate-tolerant.[43]

Starch or amylum is a polysaccharide produced by all green plants as an energy store. Pure starch is a white, tasteless and odourless powder. It consists of two types of molecules: the linear and helical amylose and the branched amylopectin. Depending on the plant, starch generally contains 20 to 25% amylose and 75 to 80% amylopectin by weight.[72]

Starch can be further modified to create modified starch for specific purposes,[73] including creation of many of the sugars found in processed foods, such as glucose syrup and high-fructose corn syrup.

Lecithin is a naturally occurring lipid. It can be found in egg yolks and oil-producing plants. It is an emulsifier and thus is used in many foods. Corn, soy and safflower oil are sources of lecithin, though the majority of lecithin commercially available is derived from soy.[74][75][76][page needed] Sufficiently processed lecithin is often undetectable with standard testing practices.[72][not in citation] According to the FDA, no evidence shows or suggests hazard to the public when lecithin is used at common levels. Lecithin added to foods amounts to only 2 to 10 percent of the 1 to 5 g of phosphoglycerides consumed daily on average.[74][75] Nonetheless, consumer concerns about GM food extend to such products.[77][better source needed] This concern led to policy and regulatory changes in Europe in 2000,[citation needed] when Regulation (EC) 50/2000 was passed,[78] which required labelling of food containing additives derived from GMOs, including lecithin.[citation needed] Because of the difficulty of detecting the origin of derivatives like lecithin with current testing practices, European regulations require those who wish to sell lecithin in Europe to employ a comprehensive system of identity preservation (IP).[79][verification needed][80][page needed]

The US imports 10% of its sugar; the remaining 90% is extracted from domestically produced sugar beet and sugarcane. After deregulation in 2005, glyphosate-resistant sugar beet was extensively adopted in the United States; 95% of beet acres in the US were planted with glyphosate-resistant seed in 2011.[81] GM sugar beets are approved for cultivation in the US, Canada and Japan; the vast majority are grown in the US. GM beets are approved for import and consumption in Australia, Canada, Colombia, the EU, Japan, Korea, Mexico, New Zealand, the Philippines, the Russian Federation and Singapore.[82] Pulp from the refining process is used as animal feed. The sugar produced from GM sugar beets contains no DNA or protein; it is just sucrose, chemically indistinguishable from sugar produced from non-GM sugar beets.[72][83] Independent analyses conducted by internationally recognized laboratories found that sugar from Roundup Ready sugar beets is identical to sugar from comparably grown conventional (non-Roundup Ready) sugar beets. And, like all sugar, sugar from Roundup Ready sugar beets contains no genetic material or detectable protein (including the protein that provides glyphosate tolerance).[84]

Most vegetable oil used in the US is produced from the GM crops canola,[85] corn,[86][87] cotton[88] and soybeans.[89] Vegetable oil is sold directly to consumers as cooking oil, shortening and margarine[90] and is used in prepared foods. There is a vanishingly small amount of protein or DNA from the original crop in vegetable oil.[72][91] Vegetable oil is made of triglycerides extracted from plants or seeds, then refined, and may be further processed via hydrogenation to turn liquid oils into solids. The refining process[92] removes all, or nearly all, non-triglyceride ingredients.[93] Medium-chain triglycerides (MCTs) offer an alternative to conventional fats and oils. The length of a fatty acid influences its absorption during the digestive process. Fatty acids in the middle position on the glycerol molecule appear to be absorbed more easily and influence metabolism more than fatty acids on the end positions. Unlike ordinary fats, MCTs are metabolized like carbohydrates. They have exceptional oxidative stability and prevent foods from turning rancid readily.[94]

Livestock and poultry are raised on animal feed, much of which is composed of the leftovers from processing crops, including GM crops. For example, approximately 43% of a canola seed is oil. What remains after oil extraction is a meal that becomes an ingredient in animal feed and contains canola protein.[95] Likewise, the bulk of the soybean crop is grown for oil and meal. The high-protein defatted and toasted soy meal becomes livestock feed and dog food. 98% of the US soybean crop goes for livestock feed.[96][97] In 2011, 49% of the US maize harvest was used for livestock feed (including the percentage of waste from distillers grains).[98] “Despite methods that are becoming more and more sensitive, tests have not yet been able to establish a difference in the meat, milk, or eggs of animals depending on the type of feed they are fed. It is impossible to tell if an animal was fed GM soy just by looking at the resulting meat, dairy, or egg products. The only way to verify the presence of GMOs in animal feed is to analyze the origin of the feed itself.”[99]

A 2012 literature review of studies evaluating the effect of GM feed on the health of animals did not find evidence that animals were adversely affected, although small biological differences were occasionally found. The studies included in the review ranged from 90 days to two years, with several of the longer studies considering reproductive and intergenerational effects.[100]

Rennet is a mixture of enzymes used to coagulate milk into cheese. Originally it was available only from the fourth stomach of calves, where it was scarce and expensive, or from microbial sources, which often produced unpleasant tastes. Genetic engineering made it possible to extract rennet-producing genes from animal stomachs and insert them into bacteria, fungi or yeasts to make them produce chymosin, the key enzyme.[101][102] The modified microorganism is killed after fermentation. Chymosin is isolated from the fermentation broth, so the fermentation-produced chymosin (FPC) used by cheese producers has an amino acid sequence identical to that of chymosin from bovine rennet.[103] The majority of the applied chymosin is retained in the whey; trace quantities may remain in cheese.[103]

FPC was the first artificially produced enzyme to be approved by the US Food and Drug Administration.[34][35] FPC products have been on the market since 1990 and as of 2015 had yet to be surpassed in commercial markets.[104] In 1999, about 60% of US hard cheese was made with FPC.[105] Its global market share approached 80%.[106] By 2008, approximately 80% to 90% of commercially made cheeses in the US and Britain were made using FPC.[103]

In some countries, recombinant (GM) bovine somatotropin (also called rBST, bovine growth hormone or BGH) is approved for administration to increase milk production. rBST may be present in milk from rBST-treated cows, but it is destroyed in the digestive system and, even if directly injected into the human bloodstream, has no observable effect on humans.[107][108][109] The FDA, World Health Organization, American Medical Association, American Dietetic Association and the National Institutes of Health have independently stated that dairy products and meat from rBST-treated cows are safe for human consumption.[110] However, on 30 September 2010, the United States Court of Appeals for the Sixth Circuit, analyzing submitted evidence, found a “compositional difference” between milk from rBGH-treated cows and milk from untreated cows.[111][112] The court stated that milk from rBGH-treated cows has increased levels of the hormone insulin-like growth factor 1 (IGF-1); higher fat content and lower protein content when produced at certain points in the cow’s lactation cycle; and higher somatic cell counts, which may “make the milk turn sour more quickly.”[112]

Genetically modified livestock are organisms from the group of cattle, sheep, pigs, goats, birds, horses and fish kept for human consumption, whose genetic material (DNA) has been altered using genetic engineering techniques. In some cases, the aim is to introduce a new trait to the animals which does not occur naturally in the species, i.e. transgenesis.

A 2003 review published on behalf of Food Standards Australia New Zealand examined transgenic experimentation on terrestrial livestock species as well as aquatic species such as fish and shellfish. The review covered the molecular techniques used for experimentation, techniques for tracing transgenes in animals and products, and issues regarding transgene stability.[113]

Some mammals typically used for food production have been modified to produce non-food products, a practice sometimes called pharming.

A GM salmon, which had been awaiting regulatory approval[114][115][116] since 1997,[117] was approved for human consumption by the US FDA in November 2015, to be raised in specific land-based hatcheries in Canada and Panama.[118]

The use of genetically modified food-grade organisms as recombinant vaccine expression hosts and delivery vehicles can open new avenues for vaccinology. Because oral immunization is beneficial in terms of cost, patient comfort, and protection of mucosal tissues, food-grade organisms can yield vaccines that are inexpensive, easy to administer, and safe. The organisms currently used for this purpose are bacteria (Lactobacillus and Bacillus), yeasts, algae, plants, and insect species. Several such organisms are under clinical evaluation, and the current adoption of this technology by industry indicates a potential to benefit global healthcare systems.[119]

There is a scientific consensus[120][121][122][123] that currently available food derived from GM crops poses no greater risk to human health than conventional food,[124][125][126][127][128] but that each GM food needs to be tested on a case-by-case basis before introduction.[129][130][131] Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe.[132][133][134][135]

Opponents claim that long-term health risks have not been adequately assessed and propose various combinations of additional testing, labeling[136] or removal from the market.[137][138][139][140] The advocacy group European Network of Scientists for Social and Environmental Responsibility (ENSSER) disputes the claim that “science” supports the safety of current GM foods, proposing that each GM food must be judged on a case-by-case basis.[141] The Canadian Association of Physicians for the Environment called for removing GM foods from the market pending long-term health studies.[137] Multiple disputed studies have claimed health effects relating to GM foods or to the pesticides used with them.[142]

The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.[143][144][145][146] Countries such as the United States, Canada, Lebanon and Egypt use substantial equivalence to determine whether further testing is required, while many countries, such as those in the European Union, Brazil and China, only authorize GMO cultivation on a case-by-case basis. In the U.S., the FDA determined that GMOs are “Generally Recognized as Safe” (GRAS) and therefore do not require additional testing if the GMO product is substantially equivalent to the non-modified product.[147] If new substances are found, further testing may be required to satisfy concerns over potential toxicity, allergenicity, possible gene transfer to humans or genetic outcrossing to other organisms.[26]

Government regulation of GMO development and release varies widely between countries. Marked differences separate GMO regulation in the U.S. and GMO regulation in the European Union.[148] Regulation also varies depending on the intended product’s use. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety.[149]

In the U.S., three government organizations regulate GMOs. The FDA checks the chemical composition of organisms for potential allergens. The United States Department of Agriculture (USDA) supervises field testing and monitors the distribution of GM seeds. The United States Environmental Protection Agency (EPA) is responsible for monitoring pesticide usage, including plants modified to contain proteins toxic to insects. Like the USDA, the EPA also oversees field testing and the distribution of crops that have had contact with pesticides to ensure environmental safety.[150][better source needed] In 2015 the Obama administration announced that it would update the way the government regulated GM crops.[151]

In 1992 the FDA published “Statement of Policy: Foods Derived from New Plant Varieties,” a clarification of its interpretation of the Food, Drug, and Cosmetic Act with respect to foods produced from new plant varieties developed using recombinant deoxyribonucleic acid (rDNA) technology. The FDA encouraged developers to consult with the agency regarding any bioengineered foods in development, and says developers routinely do reach out for consultations. In 1996 the FDA updated its consultation procedures.[152][153]

As of 2015, 64 countries require labeling of GMO products in the marketplace.[154]

US and Canadian national policy is to require a label only if there are significant composition differences or documented health impacts, although some individual US states (Vermont, Connecticut and Maine) have enacted laws requiring labeling.[155][156][157][158] In July 2016, Public Law 114-214 was enacted to regulate labeling of GMO food on a national basis.

In some jurisdictions, the labeling requirement depends on the relative quantity of GMO in the product. A study that investigated voluntary labeling in South Africa found that 31% of products labeled as GMO-free had a GM content above 1.0%.[159]

In Europe all food (including processed food) or feed that contains greater than 0.9% GMOs must be labelled.[160]
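A minimal Python sketch may make the quantity-based rule above concrete. It encodes only the 0.9% EU threshold stated in the text; the dictionary layout, function name and error handling are illustrative assumptions, not any real regulatory API.

THRESHOLDS_PERCENT = {
    "EU": 0.9,  # food or feed above 0.9% GMO content must be labelled (per the text)
}

def requires_gmo_label(jurisdiction: str, gmo_content_percent: float) -> bool:
    # Return True if the product must carry a GMO label in that jurisdiction.
    threshold = THRESHOLDS_PERCENT.get(jurisdiction)
    if threshold is None:
        raise ValueError("no threshold rule known for " + jurisdiction)
    return gmo_content_percent > threshold

print(requires_gmo_label("EU", 1.2))  # True: above the 0.9% threshold
print(requires_gmo_label("EU", 0.5))  # False: below the threshold

Under such a rule, the mislabeled South African products mentioned above (GM content above 1.0%) would all require labels in the EU.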

Testing on GMOs in food and feed is routinely done using molecular techniques such as PCR and bioinformatics.[161]

In a January 2010 paper, the extraction and detection of DNA along a complete industrial soybean oil processing chain was described to monitor the presence of Roundup Ready (RR) soybean: “The amplification of soybean lectin gene by end-point polymerase chain reaction (PCR) was successfully achieved in all the steps of extraction and refining processes, until the fully refined soybean oil. The amplification of RR soybean by PCR assays using event-specific primers was also achieved for all the extraction and refining steps, except for the intermediate steps of refining (neutralisation, washing and bleaching) possibly due to sample instability. The real-time PCR assays using specific probes confirmed all the results and proved that it is possible to detect and quantify genetically modified organisms in the fully refined soybean oil. To our knowledge, this has never been reported before and represents an important accomplishment regarding the traceability of genetically modified organisms in refined oils.”[162]
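As a rough in-silico analogue of the end-point PCR assays described in that study, the short Python sketch below checks whether a forward/reverse primer pair could yield a product from a template sequence. The primer and template strings are invented placeholders, not the actual lectin-gene or Roundup Ready event-specific primers.

def reverse_complement(seq):
    # Complement each base and reverse the strand.
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in reversed(seq))

def primers_would_amplify(template, forward_primer, reverse_primer):
    # A product can form if the forward primer matches the template and the
    # reverse primer matches the opposite strand downstream of it.
    fwd_pos = template.find(forward_primer)
    rev_pos = template.find(reverse_complement(reverse_primer))
    return fwd_pos != -1 and rev_pos != -1 and fwd_pos < rev_pos

# Hypothetical template carrying both primer sites:
template = "AAGGTTCCGA" + "ATCGGCTA" * 3 + "GGATCCTT"
print(primers_would_amplify(template, "AAGGTTCC", "AAGGATCC"))  # True

Real assays also depend on melting temperatures, product length and, for real-time PCR, fluorescent probes; this sketch captures only the sequence-matching step.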

According to Thomas Redick, detection and prevention of cross-pollination is possible through the suggestions offered by the Farm Service Agency (FSA) and the Natural Resources Conservation Service (NRCS). Suggestions include educating farmers on the importance of coexistence, providing farmers with tools and incentives to promote coexistence, conducting research to understand and monitor gene flow, providing assurance of quality and diversity in crops, and compensating farmers for actual economic losses.[163]

The genetically modified foods controversy consists of a set of disputes over the use of food made from genetically modified crops. The disputes involve consumers, farmers, biotechnology companies, governmental regulators, non-governmental organizations, environmental and political activists and scientists. The major disagreements include whether GM foods can be safely consumed, harm the environment and/or are adequately tested and regulated.[138][164] The objectivity of scientific research and publications has been challenged.[137] Farming-related disputes include the use and impact of pesticides, seed production and use, side effects on non-GMO crops/farms,[165] and potential control of the GM food supply by seed companies.[137]

The conflicts have continued since GM foods were invented. They have occupied the media, the courts, local, regional and national governments and international organizations.

The literature about Biodiversity and the GE food/feed consumption has sometimes resulted in animated debate regarding the suitability of the experimental designs, the choice of the statistical methods or the public accessibility of data. Such debate, even if positive and part of the natural process of review by the scientific community, has frequently been distorted by the media and often used politically and inappropriately in anti-GE crops campaigns.

Domingo, José L.; Bordonaba, Jordi Giné (2011). “A literature review on the safety assessment of genetically modified plants” (PDF). Environment International. 37: 734–742. doi:10.1016/j.envint.2011.01.003. PMID 21296423. In spite of this, the number of studies specifically focused on safety assessment of GM plants is still limited. However, it is important to remark that for the first time, a certain equilibrium in the number of research groups suggesting, on the basis of their studies, that a number of varieties of GM products (mainly maize and soybeans) are as safe and nutritious as the respective conventional non-GM plant, and those raising still serious concerns, was observed. Moreover, it is worth mentioning that most of the studies demonstrating that GM foods are as nutritional and safe as those obtained by conventional breeding, have been performed by biotechnology companies or associates, which are also responsible of commercializing these GM plants. Anyhow, this represents a notable advance in comparison with the lack of studies published in recent years in scientific journals by those companies.

Krimsky, Sheldon (2015). “An Illusory Consensus behind GMO Health Assessment” (PDF). Science, Technology, & Human Values. 40: 1–32. doi:10.1177/0162243915598381. I began this article with the testimonials from respected scientists that there is literally no scientific controversy over the health effects of GMOs. My investigation into the scientific literature tells another story.

And contrast:

Panchin, Alexander Y.; Tuzhikov, Alexander I. (January 14, 2016). “Published GMO studies find no evidence of harm when corrected for multiple comparisons”. Critical Reviews in Biotechnology: 1–5. doi:10.3109/07388551.2015.1130684. ISSN 0738-8551. PMID 26767435. Here, we show that a number of articles, some of which have strongly and negatively influenced the public opinion on GM crops and even provoked political actions, such as GMO embargo, share common flaws in the statistical evaluation of the data. Having accounted for these flaws, we conclude that the data presented in these articles does not provide any substantial evidence of GMO harm.

The presented articles suggesting possible harm of GMOs received high public attention. However, despite their claims, they actually weaken the evidence for the harm and lack of substantial equivalency of studied GMOs. We emphasize that with over 1783 published articles on GMOs over the last 10 years it is expected that some of them should have reported undesired differences between GMOs and conventional crops even if no such differences exist in reality.

and

Yang, Y.T.; Chen, B. (2016). “Governing GMOs in the USA: science, law and public health”. Journal of the Science of Food and Agriculture. 96: 1851–1855. doi:10.1002/jsfa.7523. PMID 26536836. It is therefore not surprising that efforts to require labeling and to ban GMOs have been a growing political issue in the USA (citing Domingo and Bordonaba, 2011).

Overall, a broad scientific consensus holds that currently marketed GM food poses no greater risk than conventional food… Major national and international science and medical associations have stated that no adverse human health effects related to GMO food have been reported or substantiated in peer-reviewed literature to date.

Despite various concerns, today, the American Association for the Advancement of Science, the World Health Organization, and many independent international science organizations agree that GMOs are just as safe as other foods. Compared with conventional breeding techniques, genetic engineering is far more precise and, in most cases, less likely to create an unexpected outcome.

Pinholster, Ginger (October 25, 2012). “AAAS Board of Directors: Legally Mandating GM Food Labels Could ‘Mislead and Falsely Alarm Consumers’”. American Association for the Advancement of Science. Retrieved February 8, 2016.

“REPORT 2 OF THE COUNCIL ON SCIENCE AND PUBLIC HEALTH (A-12): Labeling of Bioengineered Foods” (PDF). American Medical Association. 2012. Retrieved March 19, 2016. Bioengineered foods have been consumed for close to 20 years, and during that time, no overt consequences on human health have been reported and/or substantiated in the peer-reviewed literature.

GM foods currently available on the international market have passed safety assessments and are not likely to present risks for human health. In addition, no effects on human health have been shown as a result of the consumption of such foods by the general population in the countries where they have been approved. Continuous application of safety assessments based on the Codex Alimentarius principles and, where appropriate, adequate post market monitoring, should form the basis for ensuring the safety of GM foods.

“Genetically modified foods and health: a second interim statement” (PDF). British Medical Association. March 2004. Retrieved March 21, 2016. In our view, the potential for GM foods to cause harmful health effects is very small and many of the concerns expressed apply with equal vigour to conventionally derived foods. However, safety concerns cannot, as yet, be dismissed completely on the basis of information currently available.

When seeking to optimise the balance between benefits and risks, it is prudent to err on the side of caution and, above all, learn from accumulating knowledge and experience. Any new technology such as genetic modification must be examined for possible benefits and risks to human health and the environment. As with all novel foods, safety assessments in relation to GM foods must be made on a case-by-case basis.

Members of the GM jury project were briefed on various aspects of genetic modification by a diverse group of acknowledged experts in the relevant subjects. The GM jury reached the conclusion that the sale of GM foods currently available should be halted and the moratorium on commercial growth of GM crops should be continued. These conclusions were based on the precautionary principle and lack of evidence of any benefit. The Jury expressed concern over the impact of GM crops on farming, the environment, food safety and other potential health effects.

The Royal Society review (2002) concluded that the risks to human health associated with the use of specific viral DNA sequences in GM plants are negligible, and while calling for caution in the introduction of potential allergens into food crops, stressed the absence of evidence that commercially available GM foods cause clinical allergic manifestations. The BMA shares the view that there is no robust evidence to prove that GM foods are unsafe but we endorse the call for further research and surveillance to provide convincing evidence of safety and benefit.

Follow this link:
Genetically modified food – Wikipedia

Posted in Genetic Engineering | Comments Off on Genetically modified food – Wikipedia

What is Slavery?: The Abolition of Slavery Project

Posted: December 14, 2016 at 3:50 am

Slavery refers to a condition in which individuals are owned by others, who control where they live and at what they work. Slavery has existed throughout history, in many times and most places. The ancient Greeks, the Romans, the Incas and the Aztecs all had slaves.

What does it mean to be a slave or enslaved person?

To be a slave is to be owned by another person. A slave is a human being who is classed as property and forced to work for nothing. An enslaved person is a human being who is made to be a slave. This language is often used instead of the word slave, to refer to the person and their experiences and to avoid dehumanising language.

What does it mean to be a chattel slave?

A chattel slave is an enslaved person who is owned forever and whose children and children’s children are automatically enslaved. Chattel slaves are individuals treated as complete property, to be bought and sold.

Chattel slavery was supported and made legal by European governments and monarchs. This type of enslavement was practised in European colonies, from the sixteenth century onwards.

See the rest here:

What is Slavery?: The Abolition of Slavery Project

Posted in Abolition Of Work | Comments Off on What is Slavery?: The Abolition of Slavery Project

Cohousing – Wikipedia

Posted: December 11, 2016 at 8:03 am

Cohousing[1] is an intentional community of private homes clustered around shared space. Each attached or single-family home has traditional amenities, including a private kitchen. Shared spaces typically feature a common house, which may include a large kitchen and dining area, laundry, and recreational spaces. Shared outdoor space may include parking, walkways, open space, and gardens. Neighbors also share resources like tools and lawnmowers.

Households have independent incomes and private lives, but neighbors collaboratively plan and manage community activities and shared spaces. The legal structure is typically an HOA, Condo Association, or Housing Cooperative. Community activities feature regularly-scheduled shared meals, meetings, and workdays. Neighbors gather for parties, games, movies, or other events. Cohousing makes it easy to form clubs, organize child and elder care, and carpool.

Cohousing facilitates interaction among neighbors for social, practical, economic, and environmental benefits.[2][3]

Neighbors commit to being part of a community for everyone’s mutual benefit. Cohousing cultivates a culture of sharing and caring. Design features and neighborhood size (typically 20 to 40 homes) promote frequent interaction and close relationships.

Cohousing neighborhoods are designed for privacy as well as community. Residents balance privacy and community by choosing their own level of engagement.

Decision making is participatory and often based on consensus. Self-management empowers residents, builds community, and saves money.

Cohousing communities support residents in actualizing shared values. Cohousing communities typically adopt green approaches to living.

The modern theory of cohousing originated in Denmark in the 1960s among groups of families who were dissatisfied with existing housing and communities that they felt did not meet their needs. Bodil Graae wrote a newspaper article titled “Children Should Have One Hundred Parents,”[4] spurring a group of 50 families to organize around a community project in 1967. This group developed the cohousing project Sættedammen, which is the oldest known modern cohousing community in the world. Another key organizer was Jan Gudmand-Høyer, who drew inspiration from his architectural studies at Harvard and interaction with experimental U.S. communities of the era. He published the article “The Missing Link between Utopia and the Dated Single Family House”[5] in 1968, which brought together a second group.

The Danish term bofællesskab (living community) was introduced to North America as cohousing by two American architects, Kathryn McCamant and Charles Durrett, who visited several cohousing communities and wrote a book about the concept.[2] The book resonated with some existing and forming communities, such as Sharingwood in Washington state and N Street in California, which embraced the cohousing concept as a crystallization of what they were already doing. Though most cohousing groups seek to develop multi-generational communities, some focus on creating senior communities; Charles Durrett later wrote a handbook on creating senior cohousing.[3] The first community in the United States to be designed, constructed and occupied specifically for cohousing is Muir Commons in Davis, California.[6][7] Architects Kathryn McCamant and Charles Durrett were responsible for the programming and the design of the site plan, common house and private houses.

There are precedents for cohousing from the 1920s in New York, where cooperative apartment housing offered shared facilities and good social interaction. The siheyuan, or quadrangle, design of housing in China has a shared courtyard and is thus similar in some respects to cohousing.

Cohousing communities are part of the new cooperative economy in the United States and are predicted to expand rapidly in the next few decades as individuals and families seek to live more sustainably and in community with neighbors. Since the first cohousing community in the U.S. was completed (Muir Commons in Davis, California, now celebrating 25 years), more than 160 communities have been established in 25 states plus the District of Columbia, with more than 125 in process. For a listing of cohousing communities visit http://www.cohousing.org/directory. Most cohousing communities are intergenerational, with both children and elders; in recent years, senior cohousing focused on the needs of older adults has grown. These communities come in many varieties, but they are often environmentally friendly and socially sustainable.

Hundreds of cohousing communities exist in Denmark and other countries in northern Europe. In Canada, there are 11 completed communities and approximately 19 in the forming or development phase. There are more than 300 cohousing communities in the Netherlands (73 mixed-generation and 231 senior cohousing), with about 60 others in planning or construction phases.[8] There are also communities in Australia (see Cohousing Australia), the United Kingdom (see the UK Cohousing Network, http://www.cohousing.org.uk; the Threshold Centre Cohousing Community, http://www.thresholdcentre.org.uk/, offers training), and other parts of the world.

Cohousing started to develop in the UK at the end of the 1990s. The movement has gradually built up momentum and there are now 14 purpose-built cohousing communities. A further 40+ cohousing groups are developing projects, and new groups are forming all the time. Cohousing communities in the UK range from around 8 households to around 30 households. Most are mixed communities with single people, couples and families, but some are only for people over 50, and one is only for women over 50. The communities themselves range from new developments built to modern eco standards to conversions of everything from farms to Jacobean mansions to former hospital buildings, and they are found in urban, rural and semi-rural locations.

Because each cohousing community is planned in its context, a key feature of this model is its flexibility to the needs and values of its residents and the characteristics of the site. Cohousing can be urban, suburban or rural. The physical form is typically compact but varies from low-rise apartments to townhouses to clustered detached houses. Communities tend to keep cars to the periphery, which promotes walking through the community and interacting with neighbors, as well as increasing safety for children at play within the community. Shared green space is another characteristic, whether for gardening, play, or places to gather. When more land is available than is needed for the physical structures, the structures are usually clustered closely together, leaving as much of the land as possible “open” for shared use. This aspect of cohousing directly addresses the growing problem of suburban sprawl.

In addition to “from-scratch” new-built communities (including those physically retrofitting/re-using existing structures), there are also “retrofit” (aka “organic”) communities in which neighbors create “intentional neighborhoods” by buying adjacent properties and removing fences. Often they create common amenities such as common houses after the fact, while living there. N Street Cohousing in Davis, CA, is the canonical example of this type; it came together before the term cohousing was popularized in the US.

Cohousing differs from some types of intentional communities in that the residents do not have a shared economy or a common set of beliefs or religion, but instead invest in creating a socially rich and interconnected community. A non-hierarchical structure employing a consensus decision-making model is common in managing cohousing. Individuals do take on leadership roles, such as being responsible for coordinating a garden or facilitating a meeting.

Cohousing communities in the U.S. currently rely on one of two existing legal forms of real estate ownership: individually titled houses with common areas owned by a homeowner association (condominium), or a housing cooperative. Condo ownership is most common because it fits financial institutions’ and cities’ models for multi-unit owner-occupied housing development. U.S. banks lend more readily on single-family homes and condominiums than on housing cooperatives. Charles Durrett points out that rental cohousing is a very likely future model, as it is already being practiced in Europe.

Cohousing differs from standard condominium development and master-planned subdivisions because the development is designed by, or with considerable input from, its future residents. The design process invariably emphasizes consciously fostering social relationships among its residents. Common facilities are based on the actual needs of the residents, rather than on what a developer thinks will help sell units. Turnover in cohousing developments is typically very low, and there is usually a waiting list for units to become available.

In Europe the term “joint building ventures” has been coined to define the form of ownership and housing characterized as cohousing. According to the European Urban Knowledge Network (EUKN): “Joint building ventures are a legal federation of persons willing to build who want to create owner-occupied housing and to participate actively in planning and building.”[9]

Go here to see the original:

Cohousing – Wikipedia

Posted in Intentional Communities | Comments Off on Cohousing – Wikipedia

Atheism – Wikipedia

Posted: December 9, 2016 at 5:50 am

Atheism is, in the broadest sense, the absence of belief in the existence of deities.[1][2][3][4] Less broadly, atheism is the rejection of belief that any deities exist.[5][6] In an even narrower sense, atheism is specifically the position that there are no deities.[1][2][7] Atheism is contrasted with theism,[8][9] which, in its most general form, is the belief that at least one deity exists.[9][10][11]

The term atheism originated from the Greek ἄθεος (atheos), meaning “without god(s)”, used as a pejorative term applied to those thought to reject the gods worshiped by the larger society.[12] With the spread of freethought, skeptical inquiry, and subsequent increase in criticism of religion, application of the term narrowed in scope. The first individuals to identify themselves using the word atheist lived in the 18th century during the Age of Enlightenment. The French Revolution, noted for its “unprecedented atheism,” witnessed the first major political movement in history to advocate for the supremacy of human reason.[14]

Arguments for atheism range from the philosophical to social and historical approaches. Rationales for not believing in deities include arguments that there is a lack of empirical evidence,[15][16] the problem of evil, the argument from inconsistent revelations, the rejection of concepts that cannot be falsified, and the argument from nonbelief.[15][17] Although some atheists have adopted secular philosophies (e.g., secular humanism),[18][19] there is no one ideology or set of behaviors to which all atheists adhere.[20] Many atheists hold that atheism is a more parsimonious worldview than theism and therefore that the burden of proof lies not on the atheist to disprove the existence of God but on the theist to provide a rationale for theism.[21]

Since conceptions of atheism vary, accurate estimations of current numbers of atheists are difficult.[22] Several comprehensive global polls on the subject have been conducted by Gallup International: their 2015 poll featured over 64,000 respondents and indicated that 11% were “convinced atheists” whereas an earlier 2012 poll found that 13% of respondents were “convinced atheists.”[23][24] An older survey by the BBC, in 2004, recorded atheists as comprising 8% of the world’s population.[25] Other older estimates have indicated that atheists comprise 2% of the world’s population, while the irreligious add a further 12%.[26] According to these polls, Europe and East Asia are the regions with the highest rates of atheism. In 2015, 61% of people in China reported that they were atheists.[27] The figures for a 2010 Eurobarometer survey in the European Union (EU) reported that 20% of the EU population claimed not to believe in “any sort of spirit, God or life force”.[28]

Writers disagree on how best to define and classify atheism,[29] contesting what supernatural entities it applies to, whether it is a philosophic position in its own right or merely the absence of one, and whether it requires a conscious, explicit rejection. Atheism has been regarded as compatible with agnosticism,[30][31][32][33][34][35][36] and has also been contrasted with it.[37][38][39] A variety of categories have been used to distinguish the different forms of atheism.

Some of the ambiguity and controversy involved in defining atheism arises from difficulty in reaching a consensus for the definitions of words like deity and god. The plurality of wildly different conceptions of God and deities leads to differing ideas regarding atheism’s applicability. The ancient Romans accused Christians of being atheists for not worshiping the pagan deities. Gradually, this view fell into disfavor as theism came to be understood as encompassing belief in any divinity.

With respect to the range of phenomena being rejected, atheism may counter anything from the existence of a deity, to the existence of any spiritual, supernatural, or transcendental concepts, such as those of Buddhism, Hinduism, Jainism, and Taoism.[41]

Definitions of atheism also vary in the degree of consideration a person must put to the idea of gods to be considered an atheist. Atheism has sometimes been defined to include the simple absence of belief that any deities exist. This broad definition would include newborns and other people who have not been exposed to theistic ideas. As far back as 1772, Baron d’Holbach said that “All children are born Atheists; they have no idea of God.”[42] Similarly, George H. Smith (1979) suggested that: “The man who is unacquainted with theism is an atheist because he does not believe in a god. This category would also include the child with the conceptual capacity to grasp the issues involved, but who is still unaware of those issues. The fact that this child does not believe in god qualifies him as an atheist.”[43] Smith coined the term implicit atheism to refer to “the absence of theistic belief without a conscious rejection of it” and explicit atheism to refer to the more common definition of conscious disbelief. Ernest Nagel contradicts Smith’s definition of atheism as merely “absence of theism”, acknowledging only explicit atheism as true “atheism”.[44]

Philosophers such as Antony Flew[45] and Michael Martin have contrasted positive (strong/hard) atheism with negative (weak/soft) atheism. Positive atheism is the explicit affirmation that gods do not exist. Negative atheism includes all other forms of non-theism. According to this categorization, anyone who is not a theist is either a negative or a positive atheist. The terms weak and strong are relatively recent, while the terms negative and positive atheism are of older origin, having been used (in slightly different ways) in the philosophical literature[45] and in Catholic apologetics.[46] Under this demarcation of atheism, most agnostics qualify as negative atheists.

While Martin, for example, asserts that agnosticism entails negative atheism,[33] many agnostics see their view as distinct from atheism,[47][48] which they may consider no more justified than theism or requiring an equal conviction.[47] The assertion of unattainability of knowledge for or against the existence of gods is sometimes seen as an indication that atheism requires a leap of faith.[49][50] Common atheist responses to this argument include that unproven religious propositions deserve as much disbelief as all other unproven propositions,[51] and that the unprovability of a god’s existence does not imply equal probability of either possibility.[52] Scottish philosopher J. J. C. Smart even argues that “sometimes a person who is really an atheist may describe herself, even passionately, as an agnostic because of unreasonable generalized philosophical skepticism which would preclude us from saying that we know anything whatever, except perhaps the truths of mathematics and formal logic.”[53] Consequently, some atheist authors such as Richard Dawkins prefer distinguishing theist, agnostic and atheist positions along a spectrum of theistic probability: the likelihood that each assigns to the statement “God exists”.

Before the 18th century, the existence of God was so accepted in the western world that even the possibility of true atheism was questioned. This is called theistic innatism: the notion that all people believe in God from birth; within this view was the connotation that atheists are simply in denial.[55]

There is also a position claiming that atheists are quick to believe in God in times of crisis, that atheists make deathbed conversions, or that “there are no atheists in foxholes”.[56] There have however been examples to the contrary, among them examples of literal “atheists in foxholes”.[57]

Some atheists have doubted the very need for the term “atheism”. In his book Letter to a Christian Nation, Sam Harris wrote:

In fact, “atheism” is a term that should not even exist. No one ever needs to identify himself as a “non-astrologer” or a “non-alchemist”. We do not have words for people who doubt that Elvis is still alive or that aliens have traversed the galaxy only to molest ranchers and their cattle. Atheism is nothing more than the noises reasonable people make in the presence of unjustified religious beliefs.

Pragmatic atheism is the view that one should reject belief in a god or gods because it is unnecessary for a pragmatic life. This view is related to apatheism and practical atheism.[59]

The source of man’s unhappiness is his ignorance of Nature. The pertinacity with which he clings to blind opinions imbibed in his infancy, which interweave themselves with his existence, the consequent prejudice that warps his mind, that prevents its expansion, that renders him the slave of fiction, appears to doom him to continual error.

Atheists have put forward arguments against the existence of gods, responding to common theistic arguments such as the argument from design or Pascal’s Wager.

Atheists have also argued that people cannot know a God or prove the existence of a God. The latter position is called agnosticism, which takes a variety of forms. In the philosophy of immanence, divinity is inseparable from the world itself, including a person’s mind, and each person’s consciousness is locked in the subject. According to this form of agnosticism, this limitation in perspective prevents any objective inference from belief in a god to assertions of its existence. The rationalistic agnosticism of Kant and the Enlightenment only accepts knowledge deduced with human rationality; this form of atheism holds that gods are not discernible as a matter of principle, and therefore cannot be known to exist. Skepticism, based on the ideas of Hume, asserts that certainty about anything is impossible, so one can never know for sure whether or not a god exists. Hume, however, held that such unobservable metaphysical concepts should be rejected as “sophistry and illusion”.[61] The allocation of agnosticism to atheism is disputed; it can also be regarded as an independent, basic worldview.[62]

Other arguments for atheism that can be classified as epistemological or ontological, including ignosticism, assert the meaninglessness or unintelligibility of basic terms such as “God” and statements such as “God is all-powerful.” Theological noncognitivism holds that the statement “God exists” does not express a proposition, but is nonsensical or cognitively meaningless. It has been argued both ways as to whether such individuals can be classified into some form of atheism or agnosticism. Philosophers A. J. Ayer and Theodore M. Drange reject both categories, stating that both camps accept “God exists” as a proposition; they instead place noncognitivism in its own category.[63][64]

Philosopher Zofia Zdybicka writes:

“Metaphysical atheism… includes all doctrines that hold to metaphysical monism (the homogeneity of reality). Metaphysical atheism may be either: a) absolute: an explicit denial of God’s existence associated with materialistic monism (all materialistic trends, both in ancient and modern times); b) relative: the implicit denial of God in all philosophies that, while they accept the existence of an absolute, conceive of the absolute as not possessing any of the attributes proper to God: transcendence, a personal character or unity. Relative atheism is associated with idealistic monism (pantheism, panentheism, deism).”[65]

Some atheists hold the view that the various conceptions of gods, such as the personal god of Christianity, are ascribed logically inconsistent qualities. Such atheists present deductive arguments against the existence of God, which assert the incompatibility between certain traits, such as perfection, creator-status, immutability, omniscience, omnipresence, omnipotence, omnibenevolence, transcendence, personhood (a personal being), nonphysicality, justice, and mercy.[15]

Theodicean atheists believe that the world as they experience it cannot be reconciled with the qualities commonly ascribed to God and gods by theologians. They argue that an omniscient, omnipotent, and omnibenevolent God is not compatible with a world where there is evil and suffering, and where divine love is hidden from many people.[17] A similar argument is attributed to Siddhartha Gautama, the founder of Buddhism.[67]

Philosopher Ludwig Feuerbach[68] and psychoanalyst Sigmund Freud have argued that God and other religious beliefs are human inventions, created to fulfill various psychological and emotional wants or needs. This is also a view of many Buddhists.[69] Karl Marx and Friedrich Engels, influenced by the work of Feuerbach, argued that belief in God and religion are social functions, used by those in power to oppress the working class. According to Mikhail Bakunin, “the idea of God implies the abdication of human reason and justice; it is the most decisive negation of human liberty, and necessarily ends in the enslavement of mankind, in theory and practice.” He reversed Voltaire’s famous aphorism that if God did not exist, it would be necessary to invent him, writing instead that “if God really existed, it would be necessary to abolish him.”[70]

Atheism is coherent with some religious and spiritual belief systems, including Hinduism, Jainism, Buddhism, Syntheism, Raëlism,[71] and Neopagan movements[72] such as Wicca.[73] Āstika schools in Hinduism hold atheism to be a valid path to moksha, but an extremely difficult one, for the atheist cannot expect any help from the divine on their journey.[74] Jainism holds that the universe is eternal and has no need for a creator deity; however, it reveres Tirthankaras, who can transcend space and time[75] and have more power than the god Indra.[76] Secular Buddhism does not advocate belief in gods. Early Buddhism was atheistic, as Gautama Buddha’s path involved no mention of gods. Later conceptions of Buddhism consider Buddha himself a god, suggest adherents can attain godhood, and revere Bodhisattvas[77] and the Eternal Buddha.

Axiological, or constructive, atheism rejects the existence of gods in favor of a “higher absolute”, such as humanity. This form of atheism favors humanity as the absolute source of ethics and values, and permits individuals to resolve moral problems without resorting to God. Marx and Freud used this argument to convey messages of liberation, full development, and unfettered happiness.[62] One of the most common criticisms of atheism has been to the contrary: that denying the existence of a god leads to moral relativism, leaving one with no moral or ethical foundation,[78] or renders life meaningless and miserable.[79] Blaise Pascal argued this view in his Pensées.[80]

French philosopher Jean-Paul Sartre identified himself as a representative of an “atheist existentialism” concerned less with denying the existence of God than with establishing that “man needs… to find himself again and to understand that nothing can save him from himself, not even a valid proof of the existence of God.” Sartre said a corollary of his atheism was that “if God does not exist, there is at least one being in whom existence precedes essence, a being who exists before he can be defined by any concept, and… this being is man.” The practical consequence of this atheism was described by Sartre as meaning that there are no a priori rules or absolute values that can be invoked to govern human conduct, and that humans are “condemned” to invent these for themselves, making “man” absolutely “responsible for everything he does”.

Sociologist Phil Zuckerman analyzed previous social science research on secularity and non-belief, and concluded that societal well-being is positively correlated with irreligion. He found that there are much lower concentrations of atheism and secularity in poorer, less developed nations (particularly in Africa and South America) than in the richer industrialized democracies.[84][85] His findings relating specifically to atheism in the US were that compared to religious people in the US, “atheists and secular people” are less nationalistic, prejudiced, antisemitic, racist, dogmatic, ethnocentric, closed-minded, and authoritarian, and in US states with the highest percentages of atheists, the murder rate is lower than average. In the most religious states, the murder rate is higher than average.[86][87]

People who self-identify as atheists are often assumed to be irreligious, but some sects within major religions reject the existence of a personal, creator deity.[89] In recent years, certain religious denominations have accumulated a number of openly atheistic followers, such as atheistic or humanistic Judaism[90][91] and Christian atheists.[92][93][94]

The strictest sense of positive atheism does not entail any specific beliefs outside of disbelief in any deity; as such, atheists can hold any number of spiritual beliefs. For the same reason, atheists can hold a wide variety of ethical beliefs, ranging from the moral universalism of humanism, which holds that a moral code should be applied consistently to all humans, to moral nihilism, which holds that morality is meaningless.[95]

Philosophers such as Slavoj Žižek,[96] Alain de Botton,[97] and Alexander Bard and Jan Söderqvist[98] have all argued that atheists should reclaim religion as an act of defiance against theism, precisely not to leave religion as an unwarranted monopoly to theists.

According to Plato’s Euthyphro dilemma, the role of the gods in determining right from wrong is either unnecessary or arbitrary. The argument that morality must be derived from God, and cannot exist without a wise creator, has been a persistent feature of political if not so much philosophical debate.[99][100][101] Moral precepts such as “murder is wrong” are seen as divine laws, requiring a divine lawmaker and judge. However, many atheists argue that treating morality legalistically involves a false analogy, and that morality does not depend on a lawmaker in the same way that laws do.[102] Friedrich Nietzsche believed in a morality independent of theistic belief, and stated that morality based upon God “has truth only if God is truth; it stands or falls with faith in God.”[103][104][105]

There exist normative ethical systems that do not require principles and rules to be given by a deity. Examples include virtue ethics, social contract theory, Kantian ethics, utilitarianism, and Objectivism. Sam Harris has proposed that moral prescription (ethical rule-making) is not just an issue to be explored by philosophy, but that we can meaningfully practice a science of morality. Any such scientific system must, nevertheless, respond to the criticism embodied in the naturalistic fallacy.[106]

Philosophers Susan Neiman[107] and Julian Baggini[108] (among others) assert that behaving ethically only because of divine mandate is not true ethical behavior but merely blind obedience. Baggini argues that atheism is a superior basis for ethics, claiming that a moral basis external to religious imperatives is necessary to evaluate the morality of the imperatives themselves (to be able to discern, for example, that “thou shalt steal” is immoral even if one’s religion instructs it), and that atheists, therefore, have the advantage of being more inclined to make such evaluations.[109] The contemporary British political philosopher Martin Cohen has offered the more historically telling example of Biblical injunctions in favor of torture and slavery as evidence of how religious injunctions follow political and social customs, rather than vice versa, but also noted that the same tendency seems to be true of supposedly dispassionate and objective philosophers.[110] Cohen extends this argument in more detail in Political Philosophy from Plato to Mao, where he argues that the Qur’an played a role in perpetuating social codes from the early 7th century despite changes in secular society.[111]

Some prominent atheists (most recently Christopher Hitchens, Daniel Dennett, Sam Harris, and Richard Dawkins, following such thinkers as Bertrand Russell, Robert G. Ingersoll, Voltaire, and novelist José Saramago) have criticized religions, citing harmful aspects of religious practices and doctrines.[112]

The 19th-century German political theorist and sociologist Karl Marx called religion “the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people”. He goes on to say, “The abolition of religion as the illusory happiness of the people is the demand for their real happiness. To call on them to give up their illusions about their condition is to call on them to give up a condition that requires illusions. The criticism of religion is, therefore, in embryo, the criticism of that vale of tears of which religion is the halo.”[113] Lenin said that “every religious idea and every idea of God is unutterable vileness… of the most dangerous kind, ‘contagion’ of the most abominable kind. Millions of sins, filthy deeds, acts of violence and physical contagions… are far less dangerous than the subtle, spiritual idea of God decked out in the smartest ideological costumes…”[114]

Sam Harris criticizes Western religion’s reliance on divine authority as lending itself to authoritarianism and dogmatism. There is a correlation between religious fundamentalism and extrinsic religion (when religion is held because it serves ulterior interests)[116] and authoritarianism, dogmatism, and prejudice.[117] These argumentscombined with historical events that are argued to demonstrate the dangers of religion, such as the Crusades, inquisitions, witch trials, and terrorist attackshave been used in response to claims of beneficial effects of belief in religion.[118] Believers counter-argue that some regimes that espouse atheism, such as the Soviet Union, have also been guilty of mass murder.[119][120] In response to those claims, atheists such as Sam Harris and Richard Dawkins have stated that Stalin’s atrocities were influenced not by atheism but by dogmatic Marxism, and that while Stalin and Mao happened to be atheists, they did not do their deeds in the name of atheism.[122]

In early ancient Greek, the adjective atheos (ἄθεος, from the privative ἀ- + θεός “god”) meant “godless”. It was first used as a term of censure roughly meaning “ungodly” or “impious”. In the 5th century BCE, the word began to indicate more deliberate and active godlessness in the sense of “severing relations with the gods” or “denying the gods”. The term ἀσεβής (asebēs) then came to be applied against those who impiously denied or disrespected the local gods, even if they believed in other gods. Modern translations of classical texts sometimes render atheos as “atheistic”. As an abstract noun, there was also ἀθεότης (atheotēs), “atheism”. Cicero transliterated the Greek word into the Latin atheos. The term found frequent use in the debate between early Christians and Hellenists, with each side attributing it, in the pejorative sense, to the other.[12]

The term atheist (from Fr. athée), in the sense of “one who… denies the existence of God or gods”,[124] predates atheism in English, being first found as early as 1566,[125] and again in 1571.[126] Atheist as a label of practical godlessness was used at least as early as 1577.[127] The term atheism was derived from the French athéisme,[128] and appears in English about 1587.[129] An earlier work, from about 1534, used the term atheonism.[130][131] Related words emerged later: deist in 1621,[132] theist in 1662,[133] deism in 1675,[134] and theism in 1678.[135] At that time “deist” and “deism” already carried their modern meaning. The term theism came to be contrasted with deism.

Karen Armstrong writes that “During the sixteenth and seventeenth centuries, the word ‘atheist’ was still reserved exclusively for polemic… The term ‘atheist’ was an insult. Nobody would have dreamed of calling himself an atheist.”

Atheism was first used to describe a self-avowed belief in late 18th-century Europe, specifically denoting disbelief in the monotheistic Abrahamic god.[136] In the 20th century, globalization contributed to the expansion of the term to refer to disbelief in all deities, though it remains common in Western society to describe atheism as simply “disbelief in God”.

While the earliest-found usage of the term atheism is in 16th-century France,[128][129] ideas that would be recognized today as atheistic are documented from the Vedic period and classical antiquity.

Atheistic schools are found in early Indian thought and have existed from the times of the historical Vedic religion.[137] Among the six orthodox schools of Hindu philosophy, Samkhya, the oldest philosophical school of thought, does not accept God, and the early Mimamsa also rejected the notion of God.[138] The thoroughly materialistic and anti-theistic philosophical Cārvāka (or Lokāyata) school that originated in India around the 6th century BCE is probably the most explicitly atheistic school of philosophy in India, similar to the Greek Cyrenaic school. This branch of Indian philosophy is classified as heterodox due to its rejection of the authority of the Vedas and hence is not considered part of the six orthodox schools of Hinduism, but it is noteworthy as evidence of a materialistic movement within Hinduism.[139] Chatterjee and Datta explain that our understanding of Cārvāka philosophy is fragmentary, based largely on criticism of the ideas by other schools, and that it is not a living tradition:

“Though materialism in some form or other has always been present in India, and occasional references are found in the Vedas, the Buddhistic literature, the Epics, as well as in the later philosophical works we do not find any systematic work on materialism, nor any organized school of followers as the other philosophical schools possess. But almost every work of the other schools states, for refutation, the materialistic views. Our knowledge of Indian materialism is chiefly based on these.”[140]

Other Indian philosophies generally regarded as atheistic include Classical Samkhya and Purva Mimamsa. The rejection of a personal creator God is also seen in Jainism and Buddhism in India.[141]

Western atheism has its roots in pre-Socratic Greek philosophy, but did not emerge as a distinct world-view until the late Enlightenment.[142] The 5th-century BCE Greek philosopher Diagoras is known as the “first atheist”,[143] and is cited as such by Cicero in his De Natura Deorum.[144] Atomists such as Democritus attempted to explain the world in a purely materialistic way, without reference to the spiritual or mystical. Critias viewed religion as a human invention used to frighten people into following moral order,[145] and Prodicus also appears to have made clear atheistic statements in his work. Philodemus reports that Prodicus believed that “the gods of popular belief do not exist nor do they know, but primitive man, [out of admiration, deified] the fruits of the earth and virtually everything that contributed to his existence”. Protagoras has sometimes been taken to be an atheist, but he espoused agnostic views instead, commenting that “Concerning the gods I am unable to discover whether they exist or not, or what they are like in form; for there are many hindrances to knowledge, the obscurity of the subject and the brevity of human life.”[146] In the 3rd century BCE, the Greek philosophers Theodorus Cyrenaicus[144][147] and Strato of Lampsacus[148] did not believe in the existence of gods.

Socrates (c. 470–399 BCE) was associated in the Athenian public mind with the trends in pre-Socratic philosophy towards naturalistic inquiry and the rejection of divine explanations for phenomena. Although such an interpretation misrepresents his thought, he was portrayed in such a way in Aristophanes’ comic play Clouds, and he was later tried and executed for impiety and corrupting the young. At his trial Socrates is reported as vehemently denying that he was an atheist, and contemporary scholarship provides little reason to doubt this claim.[149][150]

Euhemerus (c. 300 BCE) published his view that the gods were only the deified rulers, conquerors and founders of the past, and that their cults and religions were in essence the continuation of vanished kingdoms and earlier political structures.[151] Although not strictly an atheist, Euhemerus was later criticized for having “spread atheism over the whole inhabited earth by obliterating the gods”.[152]

Also important in the history of atheism was Epicurus (c. 300 BCE). Drawing on the ideas of Democritus and the Atomists, he espoused a materialistic philosophy according to which the universe was governed by the laws of chance without the need for divine intervention (see scientific determinism). Although he stated that deities existed, he believed that they were uninterested in human existence. The aim of the Epicureans was to attain peace of mind and one important way of doing this was by exposing fear of divine wrath as irrational. The Epicureans also denied the existence of an afterlife and the need to fear divine punishment after death.[153]

The Roman philosopher Sextus Empiricus held that one should suspend judgment about virtually all beliefs (a form of skepticism known as Pyrrhonism), that nothing was inherently evil, and that ataraxia (“peace of mind”) is attainable by withholding one’s judgment. His relatively large volume of surviving works had a lasting influence on later philosophers.[154]

The meaning of “atheist” changed over the course of classical antiquity. The early Christians were labeled atheists by non-Christians because of their disbelief in pagan gods.[155] During the Roman Empire, Christians were executed for their rejection of the Roman gods in general and Emperor-worship in particular. When Christianity became the state religion of Rome under Theodosius I in 381, heresy became a punishable offense.[156]

During the Early Middle Ages, the Islamic world underwent a Golden Age. With the associated advances in science and philosophy, Arab and Persian lands produced outspoken rationalists and atheists, including Muhammad al-Warraq (fl. 9th century), Ibn al-Rawandi (827–911), Al-Razi (854–925), and Al-Ma’arri (973–1058). Al-Ma’arri wrote and taught that religion itself was a “fable invented by the ancients”[157] and that humans were “of two sorts: those with brains, but no religion, and those with religion, but no brains.”[158] Although they were relatively prolific writers, nearly none of their writing survives to the modern day, most of what little remains being preserved through quotations and excerpts in later works by Muslim apologists attempting to refute them.[159] Other prominent Golden Age scholars have been associated with rationalist thought and atheism as well, although the current intellectual atmosphere in the Islamic world, and the scant evidence that survives from the era, make this point a contentious one today.

In Europe, the espousal of atheistic views was rare during the Early Middle Ages and Middle Ages (see Medieval Inquisition); metaphysics and theology were the dominant interests pertaining to religion.[160] There were, however, movements within this period that furthered heterodox conceptions of the Christian god, including differing views of the nature, transcendence, and knowability of God. Individuals and groups such as Johannes Scotus Eriugena, David of Dinant, Amalric of Bena, and the Brethren of the Free Spirit maintained Christian viewpoints with pantheistic tendencies. Nicholas of Cusa held to a form of fideism he called docta ignorantia (“learned ignorance”), asserting that God is beyond human categorization, and thus our knowledge of him is limited to conjecture. William of Ockham inspired anti-metaphysical tendencies with his nominalistic limitation of human knowledge to singular objects, and asserted that the divine essence could not be intuitively or rationally apprehended by human intellect. Followers of Ockham, such as John of Mirecourt and Nicholas of Autrecourt, furthered this view. The resulting division between faith and reason influenced later radical and reformist theologians such as John Wycliffe, Jan Hus, and Martin Luther.[160]

The Renaissance did much to expand the scope of free thought and skeptical inquiry. Individuals such as Leonardo da Vinci sought experimentation as a means of explanation, and opposed arguments from religious authority. Other critics of religion and the Church during this time included Niccolò Machiavelli, Bonaventure des Périers, Michel de Montaigne, and François Rabelais.[154]

Historian Geoffrey Blainey wrote that the Reformation had paved the way for atheists by attacking the authority of the Catholic Church, which in turn “quietly inspired other thinkers to attack the authority of the new Protestant churches”.[161] Deism gained influence in France, Prussia, and England. The philosopher Baruch Spinoza was “probably the first well known ‘semi-atheist’ to announce himself in a Christian land in the modern era”, according to Blainey. Spinoza believed that natural laws explained the workings of the universe. In 1661 he published his Short Treatise on God.[162]

Criticism of Christianity became increasingly frequent in the 17th and 18th centuries, especially in France and England, where there appears to have been a religious malaise, according to contemporary sources. Some Protestant thinkers, such as Thomas Hobbes, espoused a materialist philosophy and skepticism toward supernatural occurrences, while Spinoza rejected divine providence in favor of a panentheistic naturalism. By the late 17th century, deism came to be openly espoused by intellectuals such as John Toland, who coined the term “pantheist”.[163]

The first known explicit atheist was the German critic of religion Matthias Knutzen in his three writings of 1674.[164] He was followed by two other explicit atheist writers, the Polish ex-Jesuit philosopher Kazimierz Łyszczyński and, in the 1720s, by the French priest Jean Meslier.[165] In the course of the 18th century, other openly atheistic thinkers followed, such as Baron d’Holbach, Jacques-André Naigeon, and other French materialists.[166] John Locke, in contrast, though an advocate of tolerance, urged authorities not to tolerate atheism, believing that the denial of God’s existence would undermine the social order and lead to chaos.[167]

The philosopher David Hume developed a skeptical epistemology grounded in empiricism, and Immanuel Kant’s philosophy strongly questioned the very possibility of metaphysical knowledge. Both philosophers undermined the metaphysical basis of natural theology and criticized classical arguments for the existence of God.

Blainey notes that, although Voltaire is widely considered to have strongly contributed to atheistic thinking during the Revolution, he also considered fear of God to have discouraged further disorder, having said “If God did not exist, it would be necessary to invent him.”[168] In Reflections on the Revolution in France (1790), the philosopher Edmund Burke denounced atheism, writing of a “literary cabal” who had “some years ago formed something like a regular plan for the destruction of the Christian religion. This object they pursued with a degree of zeal which hitherto had been discovered only in the propagators of some system of piety… These atheistical fathers have a bigotry of their own…”. But, Burke asserted, “man is by his constitution a religious animal” and “atheism is against, not only our reason, but our instincts; and… it cannot prevail long”.[169]

Baron d’Holbach was a prominent figure in the French Enlightenment who is best known for his atheism and for his voluminous writings against religion, the most famous of them being The System of Nature (1770) but also Christianity Unveiled. One goal of the French Revolution was a restructuring and subordination of the clergy with respect to the state through the Civil Constitution of the Clergy. Attempts to enforce it led to anti-clerical violence and the expulsion of many clergy from France, lasting until the Thermidorian Reaction. The radical Jacobins seized power in 1793, ushering in the Reign of Terror. The Jacobins were deists and introduced the Cult of the Supreme Being as a new French state religion. Some atheists surrounding Jacques Hbert instead sought to establish a Cult of Reason, a form of atheistic pseudo-religion with a goddess personifying reason. The Napoleonic era further institutionalized the secularization of French society.

In the latter half of the 19th century, atheism rose to prominence under the influence of rationalistic and freethinking philosophers. Many prominent German philosophers of this era denied the existence of deities and were critical of religion, including Ludwig Feuerbach, Arthur Schopenhauer, Max Stirner, Karl Marx, and Friedrich Nietzsche.[170]

George Holyoake was the last person (1842) imprisoned in Great Britain due to atheist beliefs.[171] Stephen Law states that Holyoake “first coined the term ‘secularism’”.[172]

Atheism in the 20th century, particularly in the form of practical atheism, advanced in many societies. Atheistic thought found recognition in a wide variety of other, broader philosophies, such as existentialism, objectivism, secular humanism, nihilism, anarchism, logical positivism, Marxism, feminism,[173] and the general scientific and rationalist movement.

In addition, state atheism emerged in Eastern Europe and Asia during that period, particularly in the Soviet Union under Vladimir Lenin and Joseph Stalin, and in Communist China under Mao Zedong. Atheist and anti-religious policies in the Soviet Union included numerous legislative acts, the outlawing of religious instruction in the schools, and the emergence of the League of Militant Atheists.[174][175] After Mao, the Chinese Communist Party remains an atheist organization, and regulates, but does not completely forbid, the practice of religion in mainland China.[176][177][178]

While Geoffrey Blainey has written that “the most ruthless leaders in the Second World War were atheists and secularists who were intensely hostile to both Judaism and Christianity”,[179] Richard Madsen has pointed out that Hitler and Stalin each opened and closed churches as a matter of political expedience, and Stalin softened his opposition to Christianity in order to improve public acceptance of his regime during the war.[180] Blackford and Schüklenk have written that “the Soviet Union was undeniably an atheist state, and the same applies to Maoist China and Pol Pot’s fanatical Khmer Rouge regime in Cambodia in the 1970s. That does not, however, show that the atrocities committed by these totalitarian dictatorships were the result of atheist beliefs, carried out in the name of atheism, or caused primarily by the atheistic aspects of the relevant forms of communism.”[181]

Logical positivism and scientism paved the way for neopositivism, analytical philosophy, structuralism, and naturalism. Neopositivism and analytical philosophy discarded classical rationalism and metaphysics in favor of strict empiricism and epistemological nominalism. Proponents such as Bertrand Russell emphatically rejected belief in God. In his early work, Ludwig Wittgenstein attempted to separate metaphysical and supernatural language from rational discourse. A. J. Ayer asserted the unverifiability and meaninglessness of religious statements, citing his adherence to the empirical sciences. Relatedly, the applied structuralism of Lévi-Strauss traced religious language to the human subconscious, denying its transcendental meaning. J. N. Findlay and J. J. C. Smart argued that the existence of God is not logically necessary. Naturalists and materialistic monists such as John Dewey considered the natural world to be the basis of everything, denying the existence of God or immortality.[53][182]

Other leaders, like Periyar E. V. Ramasamy, a prominent atheist leader of India, fought against Hinduism and the Brahmins for discriminating against and dividing people in the name of caste and religion.[183] This was highlighted in 1956 when he arranged for the erection of a statue depicting a Hindu god in a humble representation and made antitheistic statements.[184]

Atheist Vashti McCollum was the plaintiff in a landmark 1948 Supreme Court case that struck down religious education in US public schools.[185] Madalyn Murray O’Hair was perhaps one of the most influential American atheists; she brought the 1963 Supreme Court case Murray v. Curlett, which banned compulsory prayer in public schools.[186] In 1966, Time magazine asked “Is God Dead?”[187] in response to the Death of God theological movement, citing the estimation that nearly half of all people in the world lived under an anti-religious power, and millions more in Africa, Asia, and South America seemed to lack knowledge of the Christian view of theology.[188] The Freedom From Religion Foundation was co-founded by Anne Nicol Gaylor and her daughter, Annie Laurie Gaylor, in 1976 in the United States, and incorporated nationally in 1978. It promotes the separation of church and state.[189][190]

Since the fall of the Berlin Wall, the number of actively anti-religious regimes has reduced considerably. In 2006, Timothy Shah of the Pew Forum noted “a worldwide trend across all major religious groups, in which God-based and faith-based movements in general are experiencing increasing confidence and influence vis-à-vis secular movements and ideologies.”[191] However, Gregory S. Paul and Phil Zuckerman consider this a myth and suggest that the actual situation is much more complex and nuanced.[192]

A 2010 survey found that those identifying themselves as atheists or agnostics are on average more knowledgeable about religion than followers of major faiths. Nonbelievers scored better on questions about tenets central to Protestant and Catholic faiths. Only Mormon and Jewish faithful scored as well as atheists and agnostics.[193]

In 2012, the first “Women in Secularism” conference was held in Arlington, Virginia.[194] Secular Woman was organized in 2012 as a national organization focused on nonreligious women.[195] The atheist feminist movement has also become increasingly focused on fighting sexism and sexual harassment within the atheist movement itself.[196] In August 2012, Jennifer McCreight (the organizer of Boobquake) founded a movement within atheism known as Atheism Plus, or A+, that “applies skepticism to everything, including social issues like sexism, racism, politics, poverty, and crime”.[197][198][199]

In 2013 the first atheist monument on American government property was unveiled at the Bradford County Courthouse in Florida: a 1,500-pound granite bench and plinth inscribed with quotes by Thomas Jefferson, Benjamin Franklin, and Madalyn Murray O’Hair.[200][201]

“New Atheism” is the name that has been given to a movement among some early-21st-century atheist writers who have advocated the view that “religion should not simply be tolerated but should be countered, criticized, and exposed by rational argument wherever its influence arises.”[202] The movement is commonly associated with Sam Harris, Daniel C. Dennett, Richard Dawkins, Victor J. Stenger, and Christopher Hitchens.[203] Several best-selling books by these authors, published between 2004 and 2007, form the basis for much of the discussion of “New” Atheism.

These atheists generally seek to dissociate themselves from the mass political atheism that gained ascendancy in various nations in the 20th century. In best-selling books, the religiously motivated terrorist events of 9/11 and the partially successful attempts of the Discovery Institute to change the American science curriculum to include creationist ideas, together with support for those ideas from George W. Bush in 2005, have been cited by authors such as Harris, Dennett, Dawkins, Stenger, and Hitchens as evidence of a need to move society towards atheism.[205]

It is difficult to quantify the number of atheists in the world. Respondents to religious-belief polls may define “atheism” differently or draw different distinctions between atheism, non-religious beliefs, and non-theistic religious and spiritual beliefs.[206] A Hindu atheist may declare himself a Hindu while also being an atheist.[207] A 2010 survey published in Encyclopædia Britannica found that the non-religious made up about 9.6% of the world’s population, and atheists about 2.0%, with a very large majority based in Asia. This figure did not include those who follow atheistic religions, such as some Buddhists.[208] The average annual change for atheism from 2000 to 2010 was 0.17%.[208] A broad figure estimates the number of atheists and agnostics on Earth at 1.1 billion.[209]

According to global studies done by Gallup International, 13% of respondents were “convinced atheists” in 2012 and 11% were “convinced atheists” in 2015.[24][210] As of 2012, the countries with the highest proportions of people who viewed themselves as “convinced atheists” were China (47%), Japan (31%), the Czech Republic (30%), France (29%), South Korea (15%), Germany (15%), the Netherlands (14%), Austria (10%), Iceland (10%), Australia (10%), and the Republic of Ireland (10%).[211]

According to the 2010 Eurobarometer Poll, the percentage of those polled who agreed with the statement “you don’t believe there is any sort of spirit, God or life force” ranged from highs in France (40%), the Czech Republic (37%), Sweden (34%), the Netherlands (30%), and Estonia (29%), through medium-high percentages in Germany (27%), Belgium (27%), and the UK (25%), to very low percentages in Poland (5%), Greece (4%), Cyprus (3%), Malta (2%), and Romania (1%), with the European Union as a whole at 20%.[28] In a 2012 Eurobarometer poll on discrimination in the European Union, 16% of those polled considered themselves non-believers/agnostics and 7% considered themselves atheists.[213]

According to a Pew Research Center survey in 2012, the religiously unaffiliated (including agnostics and atheists) make up about 18% of Europeans.[214] According to the same survey, the religiously unaffiliated are the majority of the population in only two European countries: the Czech Republic (75%) and Estonia (60%).[214] There are another four countries where the unaffiliated make up a majority of the population: North Korea (71%), Japan (57%), Hong Kong (56%), and China (52%).[214]

According to the Australian Bureau of Statistics, 22% of Australians have “no religion”, a category that includes atheists.[215]

In the US, self-reported atheism rose from 1% in 2005 to 5% in 2012, while the share of people who self-identified as “religious” saw a larger drop of 13 points, from 73% to 60%.[216] According to the World Values Survey, 4.4% of Americans self-identified as atheists in 2014.[217] However, the same survey showed that 11.1% of all respondents stated “no” when asked if they believed in God.[217] In 1984, these same figures were 1.1% and 2.2%, respectively. According to a 2015 report by the Pew Research Center, 3.1% of the US adult population identify as atheist, up from 1.6% in 2007, and within the religiously unaffiliated (or “no religion”) demographic, atheists made up 13.6%.[218] According to the 2015 General Social Survey, the number of atheists and agnostics in the US has remained relatively flat over the past 23 years: in 1991, only 2% identified as atheist and 4% as agnostic, while in 2014, 3% identified as atheist and 5% as agnostic.[219]

In recent years, the profile of atheism has risen substantially in the Arab world.[220] In major cities across the region, such as Cairo, atheists have been organizing in cafés and on social media, despite regular crackdowns from authoritarian governments.[220] A 2012 poll by Gallup International revealed that 5% of Saudis considered themselves to be “convinced atheists.”[220] However, very few young people in the Arab world have atheists in their circle of friends or acquaintances. According to one study, less than 1% did in Morocco, Egypt, Saudi Arabia, or Jordan; only 3% to 7% in the United Arab Emirates, Bahrain, Kuwait, and Palestine.[221] When asked whether they have “seen or heard traces of atheism in [their] locality, community, and society” only about 3% to 8% responded yes in all the countries surveyed. The only exception was the UAE, with 51%.[221]

A study noted positive correlations between levels of education and secularism, including atheism, in America.[86] According to evolutionary psychologist Nigel Barber, atheism blossoms in places where most people feel economically secure, particularly in the social democracies of Europe, as there is less uncertainty about the future with extensive social safety nets and better health care resulting in a greater quality of life and higher life expectancy. By contrast, in underdeveloped countries, there are virtually no atheists.[222] In a 2008 study, researchers found intelligence to be negatively related to religious belief in Europe and the United States. In a sample of 137 countries, the correlation between national IQ and disbelief in God was found to be 0.60.[223]
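To make the reported figure concrete: a Pearson correlation of 0.60 summarizes how strongly two paired series move together, on a scale from -1 to +1. The short Python sketch below shows how such a coefficient is computed from paired country-level values; the numbers in it are hypothetical placeholders for illustration, not data from the studies cited above.

    from math import sqrt

    def pearson_r(xs, ys):
        # Pearson product-moment correlation: the covariance of the two
        # series divided by the product of their standard deviations.
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    # Hypothetical (national mean IQ, % professing disbelief) pairs.
    iq = [90, 95, 98, 102, 105]
    disbelief = [8, 12, 20, 35, 40]

    # Values near +1 indicate a strong positive relationship.
    print(round(pearson_r(iq, disbelief), 2))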

Excerpt from:
Atheism – Wikipedia

Posted in Atheism

Canary Islands – Wikipedia

Posted: December 7, 2016 at 8:07 am

The Canary Islands (Spanish: Islas Canarias [ˈislas kaˈnaɾjas]), also known as the Canaries (Spanish: Canarias), are an archipelago and autonomous community of Spain located just off the southern coast of Morocco, 100 kilometres (62 miles) west of its southern border. The Canaries are among the outermost regions (OMR) of the European Union proper. The archipelago is also one of the eight regions with special consideration of historical nationality recognized as such by the Spanish Government.[3][4]

The main islands are (from largest to smallest) Tenerife, Fuerteventura, Gran Canaria, Lanzarote, La Palma, La Gomera and El Hierro. The archipelago also includes a number of islands and islets: La Graciosa, Alegranza, Isla de Lobos, Montaña Clara, Roque del Oeste and Roque del Este. In ancient times, the island chain was often referred to as “the Fortunate Isles”.[5] The Canary Islands is the most southerly region of Spain. The Canary Islands is the largest and most populated archipelago of the Macaronesia region.[6]

The archipelago’s beaches, climate and important natural attractions, especially Maspalomas in Gran Canaria and Teide National Park and Mount Teide (a World Heritage Site) in Tenerife (the third tallest volcano in the world measured from its base on the ocean floor), make it a major tourist destination with over 12 million visitors per year, especially Gran Canaria, Tenerife, Fuerteventura and Lanzarote.[7][8] The islands have a subtropical climate, with long warm summers and moderately warm winters.[9] The precipitation levels and the level of maritime moderation vary depending on location and elevation. Green areas as well as desert exist on the archipelago. Due to their location above the temperature inversion layer, the high mountains of these islands are ideal for astronomical observation. For this reason, two professional observatories, Teide Observatory on the island of Tenerife and Roque de los Muchachos Observatory on the island of La Palma, have been built on the islands.

The capital of the Autonomous Community is shared by the cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria,[10][11] which in turn are the capitals of the provinces of Santa Cruz de Tenerife and Las Palmas. Las Palmas de Gran Canaria has been the largest city in the Canaries since 1768, except for a brief period in the 1910s.[12] Between the 1833 territorial division of Spain and 1927, Santa Cruz de Tenerife was the sole capital of the Canary Islands. In 1927 a decree ordered that the capital of the Canary Islands be shared, as it remains at present.[13][14] The third largest city of the Canary Islands is San Cristóbal de La Laguna (a World Heritage Site) on Tenerife.[15][16][17] This city is also home to the Consejo Consultivo de Canarias, which is the supreme consultative body of the Canary Islands.[18]

During the era of the Spanish Empire, the Canaries were the main stopover for Spanish galleons on their way to the Americas, which came south to catch the prevailing north-east trade winds.[19][20]

The name Islas Canarias is likely derived from the Latin name Canariae Insulae, meaning “Islands of the Dogs”, a name applied originally only to Gran Canaria. According to the historian Pliny the Elder, the Mauretanian king Juba II named the island Canaria because it contained “vast multitudes of dogs of very large size”.[21]

Another speculation is that the so-called dogs were actually a species of monk seal (canis marinus or “sea dog” was a Latin term for “seal”[22]), critically endangered and no longer present in the Canary Islands.[23] The dense population of seals may have been the characteristic that most struck the few ancient Romans who established contact with these islands by sea.

Alternatively, it is said that the original inhabitants of the island, the Guanches, used to worship dogs, mummified them, and treated dogs generally as holy animals.[24] The ancient Greeks also knew of a people, living far to the west, the “dog-headed ones”, who worshipped dogs on an island.[24] Some hypothesize that the Canary Islands’ dog-worship and the ancient Egyptian cult of the dog-headed god Anubis are closely connected,[25] but there is no explanation given as to which one came first.

Other theories speculate that the name comes from the Nukkari Berber tribe living in the Moroccan Atlas, named in Roman sources as Canarii, though Pliny again mentions the relation of this term with dogs.[citation needed]

The connection to dogs is retained in their depiction on the islands’ coat-of-arms.

It is considered that the aborigines of Gran Canaria called themselves “Canarii”. It is possible that, after the conquest, the Spanish plural suffix -as was extended to the name to refer to all of the islands, giving Canarias.

What is certain is that the name of the islands does not derive from the canary bird; rather, the birds are named after the islands.

Tenerife is the most populous island, and also the largest island of the archipelago. Gran Canaria, with 865,070 inhabitants, is both the Canary Islands’ second most populous island, and the third most populous one in Spain after Majorca. The island of Fuerteventura is the second largest in the archipelago and located 100 km (62 mi) from the African coast.

The islands form the Macaronesia ecoregion with the Azores, Cape Verde, Madeira, and the Savage Isles. The Canary Islands is the largest and most populated archipelago of the Macaronesia region.[6] The archipelago consists of seven large and several smaller islands, all of which are volcanic in origin.[26] The Teide volcano on Tenerife is the highest mountain in Spain, and the third tallest volcano on Earth on a volcanic ocean island. All the islands except La Gomera have been active in the last million years; four of them (Lanzarote, Tenerife, La Palma and El Hierro) have historical records of eruptions since European discovery. The islands rise from Jurassic oceanic crust associated with the opening of the Atlantic. Underwater magmatism commenced during the Cretaceous, and reached the ocean’s surface during the Miocene. The islands are considered a distinct physiographic section of the Atlas Mountains province, which in turn is part of the larger African Alpine System division.

In the summer of 2011, a series of low-magnitude earthquakes occurred beneath El Hierro. These had a linear trend of northeast-southwest. In October, a submarine eruption occurred about 2 km (1.25 mi) south of Restinga. This eruption produced gases and pumice, but no explosive activity was reported.

Depending on the position of the islands with respect to the north-east trade winds, the climate can be mild and wet or very dry. Several native species form laurisilva forests.

As a consequence, the individual islands in the Canary archipelago tend to have distinct microclimates. Those islands such as El Hierro, La Palma and La Gomera lying to the west of the archipelago have a climate which is influenced by the moist Gulf Stream. They are well vegetated even at low levels and have extensive tracts of sub-tropical laurisilva forest. As one travels east toward the African coast, the influence of the Gulf Stream diminishes, and the islands become increasingly arid. Fuerteventura and Lanzarote, the islands closest to the African mainland, are effectively desert or semi-desert. Gran Canaria is known as a “continent in miniature” for its diverse landscapes, such as Maspalomas and Roque Nublo. In terms of its climate, Tenerife is particularly interesting. The north of the island lies under the influence of the moist Atlantic winds and is well vegetated, while the south of the island around the tourist resorts of Playa de las Américas and Los Cristianos is arid. The island rises to almost 4,000 m (13,000 ft) above sea level, and at altitude, in the cool, relatively wet climate, forests of the endemic pine Pinus canariensis thrive. Many of the plant species in the Canary Islands, like the Canary Island pine and the dragon tree, Dracaena draco, are endemic, as noted by Sabin Berthelot and Philip Barker Webb in their epic work, L’Histoire Naturelle des Îles Canaries (1835–50).

Four of Spain’s thirteen national parks are located in the Canary Islands, more than any other autonomous community. Teide National Park is the most visited in Spain, and the oldest and largest within the Canary Islands. The parks are Teide National Park on Tenerife, Caldera de Taburiente National Park on La Palma, Garajonay National Park on La Gomera, and Timanfaya National Park on Lanzarote.

The following table shows the highest mountains in each of the islands:

The climate is subtropical and desert-like, moderated by the sea and, in summer, by the trade winds. There are a number of microclimates, and the classifications range mainly from semi-arid to desert. According to the Köppen climate classification,[27] the majority of the Canary Islands have a hot desert climate, represented as BWh. A subtropical humid climate, strongly influenced by the ocean, also exists in the middle of the islands of La Gomera, Tenerife and La Palma, where the laurisilva forests grow.

The seven major islands, one minor island, and several small islets were originally volcanic islands, formed by the Canary hotspot. The Canary Islands is the only place in Spain where volcanic eruptions have been recorded during the Modern Era, with some volcanoes still active (El Hierro, 2011).[35] Volcanic islands such as those in the Canary chain often have steep ocean cliffs caused by catastrophic debris avalanches and landslides.[36]

The Autonomous Community of the Canary Islands consists of two provinces, Las Palmas and Santa Cruz de Tenerife, whose capitals (Las Palmas de Gran Canaria and Santa Cruz de Tenerife) are capitals of the autonomous community. Each of the seven major islands is ruled by an island council named Cabildo Insular.

The international boundary of the Canaries is the subject of dispute between Spain and Morocco. Morocco’s official position is that international laws regarding territorial limits do not authorise Spain to claim seabed boundaries based on the territory of the Canaries, since the Canary Islands enjoy a high degree of autonomy. In fact, the islands do not enjoy any special degree of autonomy, as each one of the Spanish regions is considered an autonomous community. Under the Law of the Sea, the only islands not granted territorial waters or an Exclusive Economic Zone (EEZ) are those that are not fit for human habitation or do not have an economic life of their own, which is clearly not the case for the Canary Islands.[citation needed]

The boundary determines the ownership of seabed oil deposits and other ocean resources. Morocco and Spain have therefore been unable to agree on a compromise regarding the territorial boundary, since neither nation wants to cede its claimed right to the vast resources whose ownership depends upon the boundary. In 2002, for example, Morocco rejected a unilateral Spanish proposal.[37]

The Islands have 13 seats in the Spanish Senate. Of these, 11 seats are directly elected: 3 for Gran Canaria, 3 for Tenerife, and 1 for each of the other islands; the remaining 2 seats are indirectly elected by the regional Autonomous Government. The local government is presided over by Fernando Clavijo, the current President of the Canary Islands.[38]

Before the arrival of humans, the Canaries were inhabited by prehistoric animals; for example, the giant lizard (Gallotia goliath) and the Tenerife and Gran Canaria giant rats.[39]

The islands were visited by the Phoenicians, the Greeks, and the Carthaginians. According to the first-century Roman author and philosopher Pliny the Elder, the archipelago was found to be uninhabited when visited by the Carthaginians under Hanno the Navigator, although they saw ruins of great buildings.[40] This story may suggest that the islands were inhabited by other peoples prior to the Guanches. King Juba II, Augustus’s Numidian protégé, is credited with discovering the islands for the Western world. He dispatched a naval contingent to re-open the dye production facility at Mogador, in what is now western Morocco, in the early first century CE.[41] That same naval force was subsequently sent on an exploration of the Canary Islands, using Mogador as their mission base.

The Romans named the islands Ninguaria or Nivaria (Tenerife), Canaria (Gran Canaria), Pluvialia or Invale (Lanzarote), Ombrion (La Palma), Planasia (Fuerteventura), Iunonia or Junonia (El Hierro) and Capraria (La Gomera).

When the Europeans began to explore the islands in the late Middle Ages, they encountered several indigenous peoples living at a Neolithic level of technology. Although the prehistory of the settlement of the Canary Islands is still unclear, linguistic and genetic analyses seem to indicate that at least some of these inhabitants shared a common origin with the Berbers of the Maghreb.[42] The pre-colonial inhabitants came to be known collectively as the Guanches, although Guanches was originally the name for only the indigenous inhabitants of Tenerife.[43] From the 14th century onward, numerous visits were made by sailors from Majorca, Portugal and Genoa. Lancelotto Malocello settled on Lanzarote in 1312. The Majorcans established a mission with a bishop in the islands that lasted from 1350 to 1400.

There may have been a Portuguese expedition that attempted to colonise the islands as early as 1336, but there is not enough hard evidence to support this. In 1402, the Castilian conquest of the islands began, with the expedition of French explorers Jean de Béthencourt and Gadifer de la Salle, nobles and vassals of Henry III of Castile, to Lanzarote. From there, they conquered Fuerteventura (1405) and El Hierro. Béthencourt received the title King of the Canary Islands, but still recognised King Henry III as his overlord.

Béthencourt also established a base on the island of La Gomera, but it would be many years before the island was truly conquered. The natives of La Gomera, and of Gran Canaria, Tenerife, and La Palma, resisted the Castilian invaders for almost a century. In 1448 Maciot de Béthencourt sold the lordship of Lanzarote to Portugal’s Prince Henry the Navigator, an action that was accepted by neither the natives nor the Castilians. Despite Pope Nicholas V ruling that the Canary Islands were under Portuguese control, the crisis swelled to a revolt which lasted until 1459 with the final expulsion of the Portuguese. In 1479, Portugal and Castile signed the Treaty of Alcáçovas. The treaty settled disputes between Castile and Portugal over the control of the Atlantic, in which Castilian control of the Canary Islands was recognised but which also confirmed Portuguese possession of the Azores, Madeira, and the Cape Verde islands, and gave the Portuguese rights to lands “discovered and to be discovered … and any other island which might be found and conquered from the Canary islands beyond toward Guinea”.

The Castilians continued to dominate the islands, but due to the topography and the resistance of the native Guanches, complete pacification was not achieved until 1495, when Tenerife and La Palma were finally subdued by Alonso Fernández de Lugo. After that, the Canaries were incorporated into the Kingdom of Castile.

After the conquest, the Castilians imposed a new economic model, based on single-crop cultivation: first sugarcane; then wine, an important item of trade with England. In this era, the first institutions of colonial government were founded. Both Gran Canaria, a colony of the Crown of Castile since March 6, 1480 (from 1556, of Spain), and Tenerife, a Spanish colony since 1495, had separate governors.

The cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria became a stopping point for the Spanish conquerors, traders, and missionaries on their way to the New World. This trade route brought great prosperity to some of the social sectors of the islands. The islands became quite wealthy and soon were attracting merchants and adventurers from all over Europe. Magnificent palaces and churches were built on La Palma during this busy, prosperous period. The Church of El Salvador survives as one of the island’s finest examples of the architecture of the 16th century.

The Canaries’ wealth invited attacks by pirates and privateers. Ottoman Turkish admiral and privateer Kemal Reis ventured into the Canaries in 1501, while Murat Reis the Elder captured Lanzarote in 1585.

The most severe attack took place in 1599, during the Dutch Revolt. A Dutch fleet of 74 ships and 12,000 men, commanded by Pieter van der Does, attacked the capital Las Palmas de Gran Canaria (the city had 3,500 of Gran Canaria’s 8,545 inhabitants). The Dutch attacked the Castillo de la Luz, which guarded the harbor. The Canarians evacuated civilians from the city, and the Castillo surrendered (but not the city). The Dutch moved inland, but Canarian cavalry drove them back to Tamaraceite, near the city.

The Dutch then laid siege to the city, demanding the surrender of all its wealth. They received 12 sheep and 3 calves. Furious, the Dutch sent 4,000 soldiers to attack the Council of the Canaries, who were sheltering in the village of Santa Brgida. 300 Canarian soldiers ambushed the Dutch in the village of Monte Lentiscal, killing 150 and forcing the rest to retreat. The Dutch concentrated on Las Palmas de Gran Canaria, attempting to burn it down. The Dutch pillaged Maspalomas, on the southern coast of Gran Canaria, San Sebastin on La Gomera, and Santa Cruz on La Palma, but eventually gave up the siege of Las Palmas and withdrew.

In 1618 the Barbary pirates attacked Lanzarote and La Gomera, taking 1,000 captives to be sold as slaves.[44] Another noteworthy attack occurred in 1797, when Santa Cruz de Tenerife was attacked by a British fleet under Horatio Nelson on 25 July. The British were repulsed, losing almost 400 men. It was during this battle that Nelson lost his right arm.

The sugar-based economy of the islands faced stiff competition from Spain’s American colonies. Low prices in the sugar market in the 19th century caused severe recessions on the islands. A new cash crop, cochineal (cochinilla), came into cultivation during this time, saving the islands’ economy.

By the end of the 18th century, Canary Islanders had already emigrated to Spanish American territories, such as Havana, Veracruz, Santo Domingo,[45] San Antonio, Texas,[46] and St. Bernard Parish, Louisiana.[47][48] These economic difficulties spurred mass emigration, primarily to the Americas, during the 19th and first half of the 20th century. Between 1840 and 1890 as many as 40,000 Canary Islanders emigrated to Venezuela. Thousands of Canarians also moved to Puerto Rico, where the Spanish monarchy felt that Canarians would adapt to island life better than other immigrants from mainland Spain. Deeply entrenched traditions, such as the Máscaras Festival in the town of Hatillo, Puerto Rico, are an example of Canarian culture still preserved there. Similarly, many thousands of Canarians emigrated to the shores of Cuba.[49] During the Spanish–American War of 1898, the Spanish fortified the islands against a possible American attack, but an attack never came.

Sirera and Renn (2004)[50] distinguish two different types of expeditions, or voyages, during the period 1770–1830, which they term “the Romantic period”:

First are “expeditions financed by the States, closely related with the official scientific Institutions … characterised by having strict scientific objectives (and inspired by) the spirit of Illustration and progress”. Sirera and Renn include several travellers in this category.

The second type of expedition identified by Sirera and Renn arose from more or less private initiatives, with several key exponents among them.

Sirera and Renn identify the period 1770–1830 as one in which “In a panorama dominated until that moment by France and England enters with strength and brio Germany of the Romantic period whose presence in the islands will increase”.

At the beginning of the 20th century, the British introduced a new cash crop, the banana, the export of which was controlled by companies such as Fyffes.

The rivalry between the elites of Las Palmas de Gran Canaria and Santa Cruz de Tenerife over which city should be the capital of the islands led to the division of the archipelago into two provinces in 1927. The division has not laid the rivalry to rest, and it continues to this day.

During the time of the Second Spanish Republic, Marxist and anarchist workers’ movements began to develop, led by figures such as José Miguel Pérez and Guillermo Ascanio. However, outside of a few municipalities, these organisations were a minority and fell easily to Nationalist forces during the Spanish Civil War.

In 1936, Francisco Franco was appointed General Commandant of the Canaries. He joined the military revolt of July 17 which began the Spanish Civil War. Franco quickly took control of the archipelago, except for a few points of resistance on La Palma and in the town of Vallehermoso, on La Gomera. Though there was never a proper war in the islands, the post-war suppression of political dissent on the Canaries was especially severe.[citation needed]

During the Second World War, Winston Churchill prepared plans for the British seizure of the Canary Islands as a naval base, in the event of Gibraltar being invaded from the Spanish mainland.

Opposition to Franco’s regime did not begin to organise until the late 1950s, which saw the emergence of parties such as the Communist Party of Spain and the formation of various nationalist, leftist parties.

After the death of Franco, there was a pro-independence armed movement based in Algeria, the Movement for the Independence and Self-determination of the Canaries Archipelago (MAIAC). In 1968, the Organisation of African Unity recognized the MAIAC as a legitimate African independence movement, and declared the Canary Islands an African territory still under foreign rule.[51]

Currently, there are some pro-independence political parties, like the CNC and the Popular Front of the Canary Islands, but these parties are non-violent, and their popular support is almost insignificant, with no presence in either the autonomous parliament or the cabildos insulares.

After the establishment of a democratic constitutional monarchy in Spain, autonomy was granted to the Canaries via a law passed in 1982. In 1983, the first autonomous elections were held. The Spanish Socialist Workers’ Party (PSOE) won. In the 2007 elections, the PSOE gained a plurality of seats, but the nationalist Canarian Coalition and the conservative Partido Popular (PP) formed a ruling coalition government.[52]

According to the “Centro de Investigaciones Sociológicas” (Sociological Research Centre), in 2010 43.5% of the population of the Canary Islands felt more Canarian than Spanish (37.6%) or only Canarian (7.6%), compared to 5.4% who felt more Spanish than Canarian (2.4%) or only Spanish (3%). The most popular choice was feeling equally Spanish and Canarian, with 49.9%. On these figures, the Canaries recorded one of the highest levels of identification with their own autonomous community in Spain.

The Canary Islands have a population of 2,117,519 inhabitants (2011), making them the eighth most populous of Spain’s autonomous communities, with a density of 282.6 inhabitants per square kilometre. The total area of the archipelago is 7,493 km² (2,893 sq mi).[57]
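
The quoted density follows directly from the two figures in this paragraph; a trivial sanity check in Python (the variable names are ours, not from the source):

```python
# Population density of the archipelago from the figures quoted above.
population = 2_117_519   # inhabitants (2011)
area_km2 = 7_493         # total area in square kilometres

print(f"{population / area_km2:.1f} inhabitants per km²")  # -> 282.6
```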

The Canarian population includes long-tenured residents and new waves of mainland Spanish immigrants, as well as Portuguese, Italians, Flemings and Britons. Of the total Canarian population in 2009 (2,098,593), 1,799,373 were Spanish and 299,220 foreigners. Of the foreigners, the majority are Europeans (55%), including Germans (39,505), British (37,937) and Italians (24,177). There are also 86,287 inhabitants from the Americas, mainly Colombians (21,798), Venezuelans (11,958), Cubans (11,098) and Argentines (10,159), and 28,136 African residents, mostly Moroccans (16,240).[60]

The population of the islands according to the 2010 data are:[61]

The Roman Catholic branch of Christianity has been the majority religion in the archipelago for more than five centuries, ever since the Conquest of the Canary Islands. However, there are other religious communities.

The overwhelming majority of native Canarians are Roman Catholic with various smaller foreign-born populations of other Christian beliefs such as Protestants from northern Europe.

The appearance of the Virgin of Candelaria (patron of the Canary Islands) was credited with moving the Canary Islands toward Christianity. Two Catholic saints were born in the Canary Islands: Peter of Saint Joseph de Betancur[62] and José de Anchieta.[63] Both born on the island of Tenerife, they were missionaries in Guatemala and Brazil respectively.

The Canary Islands are divided into two Catholic dioceses, each governed by a bishop:

Separate from the overwhelming Christian majority are a minority of Muslims.[64] Other religious faiths represented include Jehovah’s Witnesses, The Church of Jesus Christ of Latter-day Saints and Hinduism.[64] Minority religions are also present, such as the Church of the Guanche People, which is classified as a neo-pagan native religion,[64] as well as Buddhism,[64] Judaism,[64] the Bahá’í Faith,[64] Chinese religions[64] and Afro-American religion.[64]

Among the followers of Islam, the Islamic Federation of the Canary Islands exists to represent the Islamic community in the Canary Islands as well as to provide practical support to members of the Islamic community.[65]

The distribution of beliefs in 2012 according to the CIS Barometer Autonomy was as follows:[66]

El Hierro, the westernmost island, covers 268.71 km² (103.75 sq mi), making it the smallest of the major islands, and the least populous, with 10,753 inhabitants. The whole island was declared a biosphere reserve in 2000. Its capital is Valverde. Also known historically as Ferro, it was once believed to be the westernmost land in the world.

Fuerteventura, with an area of 1,660 km² (640 sq mi), is the second-largest island of the archipelago. It has been declared a biosphere reserve by UNESCO. It has a population of 100,929. As the most ancient of the islands, it is also the most eroded: its highest point, Pico de la Zarza (“Peak of the Bramble”), stands at 807 metres (2,648 feet). Its capital is Puerto del Rosario.

Gran Canaria has 845,676 inhabitants. The capital, Las Palmas de Gran Canaria (377,203 inhabitants), is the most populous city of the archipelago and shares the status of capital of the Canaries with Santa Cruz de Tenerife. Gran Canaria’s surface area is 1,560 km² (600 sq mi). In the centre of the island lie Roque Nublo, at 1,813 metres (5,948 feet), and Pico de las Nieves (“Peak of Snow”), at 1,949 metres (6,394 feet). In the south of the island are the Maspalomas Dunes, one of its biggest tourist attractions.

La Gomera has an area of 369.76 km² (142.77 sq mi) and is the second least populous island, with 22,622 inhabitants. Geologically it is one of the oldest of the archipelago. The insular capital is San Sebastián de La Gomera. Garajonay National Park is located on the island.

Lanzarote is the easternmost island and one of the most ancient of the archipelago, and it has shown evidence of recent volcanic activity. It has an area of 845.94 km² (326.62 sq mi) and a population of 139,506 inhabitants, including those of the adjacent islets of the Chinijo Archipelago. The capital is Arrecife, with 56,834 inhabitants.

The Chinijo Archipelago includes the islands of La Graciosa, Alegranza, Montaña Clara, Roque del Este and Roque del Oeste. It has an area of 40.8 km² (15.8 sq mi) and a population of 658 inhabitants, all of them on La Graciosa. With 29 km² (11 sq mi), La Graciosa is the smallest inhabited island of the Canaries and the largest island of the Chinijo Archipelago.

La Palma, with 86,528 inhabitants covering an area of 708.32 km² (273.48 sq mi), is in its entirety a biosphere reserve. It shows no recent signs of volcanic activity; the volcano Teneguía last erupted in 1971. It is the second-highest island of the Canaries, with Roque de los Muchachos, at 2,423 metres (7,949 feet), as its highest point. Santa Cruz de La Palma (known to those on the island simply as “Santa Cruz”) is its capital.

Tenerife is, with its area of 2,034 km² (785 sq mi), the largest island of the Canary Islands. With 906,854 inhabitants, it is also the most populous island of the archipelago and of Spain. Two of the islands’ principal cities are located on it: the capital, Santa Cruz de Tenerife, and San Cristóbal de La Laguna (a World Heritage Site). San Cristóbal de La Laguna, the second city of the island, is home to the oldest university in the Canary Islands, the University of La Laguna. Mount Teide, at 3,718 metres (12,198 feet), is the highest peak in Spain and also a World Heritage Site. Tenerife was the site of the worst air disaster in the history of aviation, in which 583 people were killed in the collision of two Boeing 747s on March 27, 1977.

The economy is based primarily on tourism, which makes up 32% of the GDP. The Canaries receive about 12 million tourists per year. Construction makes up nearly 20% of the GDP, and tropical crops, primarily bananas and tobacco, are grown for export to Europe and the Americas. Ecologists are concerned that these resources, especially on the more arid islands, are being overexploited, but many agricultural resources remain, including tomatoes, potatoes, onions, cochineal, sugarcane, grapes, vines, dates, oranges, lemons, figs, wheat, barley, maize, apricots, peaches and almonds.

The economy amounted to €25 billion (2001 GDP figures). The islands experienced continuous growth over the 20-year period up to 2001, at a rate of approximately 5% annually. This growth was fueled mainly by large amounts of foreign direct investment, mostly to develop tourism real estate (hotels and apartments), and by European funds (nearly €11 billion in the period from 2000 to 2007), since the Canary Islands are designated an Objective 1 region (eligible for European structural funds).[citation needed] Additionally, the EU allows the Canary Islands Government to offer special tax concessions to investors who incorporate under the Zona Especial Canaria (ZEC) regime and create more than five jobs.[citation needed]

Spain gave permission in August 2014 for Repsol and its partners to explore oil and gas prospects off the Canary Islands, involving an investment of €7.5 billion over four years, commencing at the end of 2016. Repsol said at the time that the area could ultimately produce 100,000 barrels of oil a day, which would meet 10 percent of Spain’s energy needs.[68]

The Canary Islands’ great natural attractions, climate and beaches make the islands a major tourist destination, visited each year by about 12 million people (11,986,059 in 2007, of whom 29% were British, 22% mainland Spanish and 21% German). Among the islands, Tenerife receives the largest number of tourists annually, followed by Gran Canaria and Lanzarote.[7][8] The archipelago’s principal tourist attraction is Teide National Park (on Tenerife), where Mount Teide, the highest mountain in Spain and the third-largest volcano in the world, receives over 2.8 million visitors annually.[69]

The combination of high mountains, proximity to Europe, and clean air has made the Roque de los Muchachos peak (on the island of La Palma) a leading location for telescopes such as the Gran Telescopio Canarias (GranTeCan).

The islands are outside the European Union customs territory and VAT area, though politically within the EU. Instead of VAT there is a local sales tax (IGIC) with a general rate of 7%, an increased rate of 13.5%, a reduced rate of 3% and a zero rate for certain basic-need products and services. Consequently, some products are subject to import tax and VAT if exported from the islands into mainland Spain or the rest of the EU.
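
As a rough illustration of how these bands combine with a net price, here is a minimal Python sketch; the rate table simply mirrors the percentages above, and the function name is ours (real IGIC classification of goods and services is far more detailed):

```python
# Hypothetical helper applying the IGIC rate bands described above.
IGIC_RATES = {
    "zero": 0.00,        # certain basic-need products and services
    "reduced": 0.03,     # reduced rate
    "general": 0.07,     # general rate
    "increased": 0.135,  # increased rate
}

def price_with_igic(net_price: float, band: str = "general") -> float:
    """Return the gross price after applying the given IGIC band."""
    return round(net_price * (1 + IGIC_RATES[band]), 2)

print(price_with_igic(100.00))             # 107.0 at the 7% general rate
print(price_with_igic(100.00, "reduced"))  # 103.0 at the 3% reduced rate
```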

Canarian time is Western European Time (WET), the same as GMT (one hour ahead of GMT in summer). Canarian time is thus one hour behind that of mainland Spain and the same as that of the UK, Ireland and Portugal all year round.
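
The offsets can be checked with the standard IANA zone names (Atlantic/Canary for the islands, Europe/Madrid for the mainland); a small sketch assuming Python 3.9+ for the zoneinfo module:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A winter instant: the Canaries sit on UTC+0 while mainland Spain is UTC+1.
moment = datetime(2017, 1, 18, 12, 0, tzinfo=ZoneInfo("UTC"))
for zone in ("Atlantic/Canary", "Europe/Madrid", "Europe/London"):
    print(zone, moment.astimezone(ZoneInfo(zone)).strftime("%H:%M UTC%z"))
# Atlantic/Canary matches Europe/London year-round and trails Europe/Madrid
# by one hour, in both winter and summer.
```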

The Canary Islands have eight airports altogether, two of the main ports of Spain, and an extensive network of autopistas (highways) and other roads.

There are large ferries that link the islands, as well as fast ferries linking most of them. Both types can transport large numbers of passengers and cargo (including vehicles). Fast ferries are made of aluminium and powered by modern, efficient diesel engines, while conventional ferries have a steel hull and are powered by heavy oil. Fast ferries travel relatively quickly (in excess of 30 knots), making them a faster method of transportation than the conventional ferry (some 20 knots). A typical conventional ferry ride between La Palma and Tenerife may take eight hours or more, while a fast ferry takes about two and a half hours; between Tenerife and Gran Canaria, a fast ferry crossing can take about one hour.
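
The travel times quoted are consistent with simple distance-over-speed arithmetic; in the sketch below the route length is an illustrative assumption (roughly a Tenerife–Gran Canaria crossing), not a figure from the text:

```python
def crossing_hours(distance_nm: float, speed_knots: float) -> float:
    """Hours needed to cover a distance in nautical miles at a speed in knots."""
    return distance_nm / speed_knots

route_nm = 30  # assumed length of a Tenerife-Gran Canaria fast-ferry route
print(f"fast ferry (~30 kn):         {crossing_hours(route_nm, 30):.1f} h")  # 1.0 h
print(f"conventional ferry (~20 kn): {crossing_hours(route_nm, 20):.1f} h")  # 1.5 h
```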

The largest airport is Gran Canaria Airport, the fifth-largest airport in Spain. The biggest port is in Las Palmas de Gran Canaria, an important port for commerce with Europe, Africa and the Americas. It is the fourth-biggest commercial port in Spain, handling more than 1,400,000 TEUs, and the largest shipping companies in the world, including MSC and Maersk, operate there. The port also hosts an international Red Cross post, one of only four such points in the world. Tenerife has two airports, Tenerife North Airport and Tenerife South Airport.[71]

The two main islands (Tenerife and Gran Canaria) receive the greatest number of passengers.[72]

The port of Las Palmas is first in freight traffic in the islands,[73] while the port of Santa Cruz de Tenerife is the leading fishing port, with approximately 7,500 tons of fish landed, according to the Spanish government publication Statistical Yearbook of State Ports. Santa Cruz is also the second port in Spain by ship traffic, surpassed only by the Port of Algeciras Bay.[74] The port’s facilities include a border inspection post (BIP) approved by the European Union, which is responsible for inspecting all types of imports from third countries and exports to countries outside the European Economic Area. The port of Los Cristianos (Tenerife) records the greatest number of passengers in the Canary Islands, followed by the port of Santa Cruz de Tenerife.[75] The Port of Las Palmas is third in the islands in passengers and first in the number of vehicles transported.[75]

The Tenerife Tram, which opened in 2007, is the only tram system in the Canary Islands, travelling between the cities of Santa Cruz de Tenerife and San Cristóbal de La Laguna. There are currently plans for three lines in the Canary Islands (two on Tenerife and one on Gran Canaria); the planned Gran Canaria tram route would run from Las Palmas de Gran Canaria south to Maspalomas.[76]

The official natural symbols associated with the Canary Islands are the bird Serinus canaria (the canary) and the palm Phoenix canariensis.[78]

Before the arrival of the aboriginal inhabitants, the Canary Islands were home to endemic animals, some of them now extinct: giant lizards (Gallotia goliath), giant rats (Canariomys bravoi and Canariomys tamarani)[79] and giant tortoises (Geochelone burchardi and Geochelone vulcanica),[80] among others.

With a range of habitats, the Canary Islands exhibit diverse plant species. The bird life includes European and African species, such as the black-bellied sandgrouse, as well as a rich variety of endemic (local) taxa.

Terrestrial fauna includes geckos, wall lizards, and three endemic species of recently rediscovered and critically endangered giant lizard: the El Hierro giant lizard (or Roque Chico de Salmor giant lizard), La Gomera giant lizard, and La Palma giant lizard. Mammals include the Canarian shrew, Canary big-eared bat, the Algerian hedgehog (which may have been introduced) and the more recently introduced mouflon. Some endemic mammals, the lava mouse, Tenerife giant rat and Gran Canaria giant rat, are extinct, as are the Canary Islands quail, long-legged bunting, and the eastern Canary Islands chiffchaff.

The marine life found in the Canary Islands is also varied, a combination of North Atlantic, Mediterranean and endemic species. In recent years, the increasing popularity of both scuba diving and underwater photography has provided biologists with much new information on the marine life of the islands.

Fish species found in the islands include many species of shark, ray, moray eel, bream, jack, grunt, scorpionfish, triggerfish, grouper, goby, and blenny. In addition, there are many invertebrate species, including sponge, jellyfish, anemone, crab, mollusc, sea urchin, starfish, sea cucumber and coral.

A total of five species of marine turtle are sighted periodically in the islands, the most common being the endangered loggerhead sea turtle.[81] The other four are the green sea turtle, hawksbill sea turtle, leatherback sea turtle and Kemp’s ridley sea turtle. Currently, there are no signs that any of these species breed in the islands, so those seen in the water are usually migrating. However, it is believed that some of these species may have bred in the islands in the past, and there are records of several sightings of leatherback sea turtles on beaches in Fuerteventura, adding credibility to the theory.

Marine mammals include a wide variety of cetaceans, including rare and little-known species (see Marine life of the Canary Islands). Hooded seals[82] have also been known to appear as vagrants in the Canary Islands from time to time. The Canary Islands were also formerly home to a population of the rarest pinniped in the world, the Mediterranean monk seal.

The Canary Islands officially have four national parks, of which two have been declared World Heritage Sites by UNESCO and the other two World Biosphere Reserves.[83]

A unique form of wrestling known as Canarian wrestling (lucha canaria) has opponents stand in a special area called a “terrero” and try to throw each other to the ground using strength and quick movements.[85]

Another sport is the “game of the sticks” (juego del palo), in which opponents fence with long sticks. This may have come about from the shepherds of the islands, who would challenge each other using their long walking sticks.[85]

Another sport is called the shepherd’s jump (salto del pastor). This involves using a long stick to vault over an open area. This sport possibly evolved from the shepherd’s need to occasionally get over an open area in the hills as they were tending their sheep.[85]

The two main football teams in the archipelago are CD Tenerife (founded in 1912) and UD Las Palmas (founded in 1949). Tenerife currently play in the Liga Adelante and Las Palmas in La Liga.

The Carnival of Santa Cruz de Tenerife and the Carnival of Las Palmas are among the most famous carnivals in Spain. They are celebrated in the streets between February and March.

See original here:

Canary Islands – Wikipedia
