The Evolutionary Perspective
Posted: February 20, 2017 at 7:18 pm
Image source: Getty Images.
Consumer-level virtual reality (VR) received its first big push in 2016 with major headset launches from Facebook (NASDAQ:FB), HTC (NASDAQOTH:HTCCY), and Sony (NYSE:SNE), but uptake for the technology fell short of many analysts’ expectations, and there are a range of challenges that threaten to limit future adoption. Sales for Facebook’s Oculus Rift and HTC’s Vive headsets dried up after their respective launches, and SuperData cut its 2016 sales estimate for Sony’s PlayStation VR from 2.6 million units to 745,000 units — potentially worrying signs for the future of head-mounted displays.
VR still has compelling prospects, but it’s also clear that the technology will have to overcome certain roadblocks before it’s ready for prime time. In order to better understand the potential growth trajectory for virtual reality in 2017 and beyond, let’s take a look at some of the factors that are shaping the progression of the technology.
The fact that Samsung’s Gear VR — which uses compatible cellphones for its display and retails at $99 — is the top-selling headset suggests that price will continue to be a key hurdle for higher-end virtual reality adoption. The Oculus Rift still sells for $599, while the Vive is priced at $799, and the PlayStation VR retails at $399. In addition to a growing list of compatible phones for Gear VR and Alphabet’s Google Daydream platform, more headsets will hit the market in 2017 and fill in the gaps between high- and low-end experiences. Increased competition should put pressure on Facebook, HTC, and Sony to lower the prices for their devices or improve value propositions through bundling and other promotions.
Lenovo is expected to release a headset this year that delivers higher resolution than the Rift or the Vive, a lighter weight, and augmented reality (AR) capabilities — all at a sub-$400 price. Lenovo’s device will be part of Microsoft’s (NASDAQ:MSFT) Windows Holographic virtual reality ecosystem, and make use of a dual-camera internal tracking system (as opposed to the external systems used by the Vive, Oculus Rift, and PS VR) that could be instrumental in the emergence of more affordable headsets. Windows Holographic headsets will reportedly start in the $300 price range and are being designed to be compatible with mid-range computers — moves that should make virtual reality more accessible and build Microsoft’s position in the space. Companies including Asus, Acer, HP, and Dell are also developing entries for the Windows Holographic virtual reality platform, though it’s not clear which, if any, will launch this year.
Even with new entrants, the cost of high-end VR will likely continue to be prohibitive to mass-market adoption, but reports that Facebook is closing 200 of its 500 Oculus Rift demo stations at Best Buy locations due to low engagement suggest other obstacles to VR going mainstream this year.
While new competition means the cost of entry for mid-level and high-end virtual devices is likely to fall this year, a growing selection of headsets will contribute to the trend of fragmentation that threatens to limit the progression of VR. Early competition to establish leadership in the space and technological differences between high-end and low-end devices have created a situation where many software offerings are not compatible across devices. Fragmentation even exists within individual platforms, with Oculus Rift developers needing to account for segmentation created by the introduction of the device’s touch-based controllers.
For now, VR hardware lacks a “killer app” to justify the cost of entry, and the dynamics of the current market present barriers to the arrival of breakthrough software. With small and fractured installed bases for VR headsets, developing big-budget virtual reality experiences still doesn’t make sense for most developers, and that issue is likely to persist through 2017. Even Sony, a platform holder with a wide range of video game development studios, seems to have few projects on the horizon to support its headset. Without standout software experiences to hook users and encourage engagement with the new display mediums, the high cost of entry will remain prohibitive to the mass-market audience.
While early uptake for VR has been disappointing compared to initial projections, it’s important to remember just how young this technology is. The overly optimistic forecasts for VR adoption in 2016 give cause for some skepticism when looking at future targets, but expectations for huge growth in the category persist, with a study from Citigroup estimating that the combined market for VR and AR will reach $2.16 trillion by 2035.
Despite initial roadblocks, the immersive potential offered by VR and AR and improvements to hardware and software make it likely that the technology will eventually achieve mass adoption. Last year marked the beginning of the consumer VR push, and, while it doesn’t look like 2017 will deliver the confluence of factors needed to propel the medium into the mainstream, the long-term outlook remains very bright.
The early adopter market is mostly buying VR for video games, but the technology will eventually be bridged to online shopping and other uses, and the immersive qualities of AR and VR should open up huge advertising opportunities that help build support for the new mediums.
Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool’s Board of Directors. LinkedIn is owned by Microsoft. Keith Noonan has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Facebook. The Motley Fool has a disclosure policy.
Posted: at 11:07 am
In honor of February being Black History Month, this column is about myths of the Underground Railroad, a term for the system of networks used by slaves to escape.
Slaves were so valuable that slave owners often had large mortgages on them. Owners even carried insurance on them. Thus, when slaves escaped, owners risked great financial loss. To recover their escaped slaves, owners hired bounty hunters and placed ads in newspapers.
In 1793, President George Washington signed a fugitive slave law that gave slave owners the right to recover escaped slaves. Another law, passed in 1850, required governments and residents in free states to enforce the return of escaped slaves. Severe penalties were given to those who helped the escapees.
Numerous people helped slaves escape bondage. Most did so quietly and in secrecy. Had their names and sentiments become public, bounty hunters would have arrived at their doorsteps with arrest warrants.
Some people who helped were known as abolitionists. They wrote articles and gave speeches expressing their anti-slavery feelings in attempts to raise money for clothing, food and transportation that slaves needed as they fled to freedom. Although abolitionists raised money, they were not a direct part of the Underground Railroad. Out of necessity, people who lived along the Underground Railroad and supplied help were only able to do so by not bringing attention to themselves.
A misconception is that the Underground Railroad consisted of specific trails along which escaped slaves traveled. Instead, the pathways were corridors that constantly shifted. While on the journey to freedom, slaves needed clothing, food and a place to stay, and they needed money for transportation. They also needed directions to the next safe site. Many people along the corridors supplied these types of help. Had escaped slaves used the same trails, they would have been quickly captured by bounty hunters.
While on their freedom journey, escaped slaves slept in churches, barns, homes, caves and tunnels. One misconception is that those who helped often placed lights in their windows and placed quilts with unique designs on their wash lines or porches. That rarely happened. Bounty hunters soon learned about such techniques. Then, too, neighbors watched each other and were aware of unusual people coming and going night after night. Neighbors could collect bounties, too.
Another misconception is about the types of people who helped. Although Quaker families sometimes helped, aid also came from a variety of non-Quaker whites and free blacks, as well as escaped slaves.
Another misconception is that all escaped slaves followed the north star to Canada. Although several traveled there, some went to Mexico, some went out west and a few went to Florida, where the Seminole Tribe allowed them to live in freedom. Sometimes, escapees went to Liberia. Between 1822 and the start of the Civil War, more than 15,000 black Americans relocated to that area of Africa.
Suggestions or queries? Send to Frankie Meyer, 509 N. Center St., Plainfield, IN 46168, or contact: email@example.com.
Posted: February 15, 2017 at 12:17 am
Many experts in the field firmly believe 2017 will be a breakout year for both artificial intelligence and robotics, since the two often go together. Spoiler alert: it’s all good.
AI Makes Robots Smarter
Robots use an increasing number of sensing modalities including taste, smell, sonar, IR, haptic feedback, tactile sensors, and range of motion sensors. They are also becoming better at picking up on facial expressions and gestures, so their interactions with humans become more natural, said Kevin Curran, IEEE senior member and professor of cyber security at Ulster University.
“Basically, AI is crucial for all their learning and adaptive behavior so they can adapt existing capabilities to cope with environmental changes. AI is key to helping them learn new tasks on the fly by sequencing existing behaviors,” he said.
Karsten Schmidt, head of technology at the Innovation Center Silicon Valley for SAP Labs echoed this sentiment. “In 2017, we will see AI gain greater acceptance and momentum as humans come to increasingly rely, trust and depend more on AI-driven decisions and question them less. This will happen as a direct result of improved AI learning due to more usage and a broader user base, and as the quality and usefulness of AI software in turn improves,” he said.
Meet Your AI Co-Worker
Many people fear losing their jobs to robots, but more than likely you will have a robot for a co-worker. Then again, if you’ve been in the workforce long enough, you’ve probably already had a robot for a co-worker, just in human form.
“In 2017, we are seeing a growing emergence of robots designed to operate alongside people in everyday human environments. Autonomous service robots that assist workers in warehouses, deliver supplies in hospitals, and maintain inventory of items in grocery stores are emerging onto the market,” said Sonia Chernova, assistant professor at Georgia Tech College of Computing.
These systems need humans because one thing robotics researchers are still struggling with is robotic arms. There’s no substitute for the human arm to pick things up and manipulate objects. “[Robot arms] have of course been used successfully for decades in manufacturing, but current techniques work reliably only in controlled factory environments, and are not yet robust enough for the real world,” said Chernova.
This could lead to the rise of “AI Supervisors,” said Tomer Naveh, CTO of Adgorithms, an AI-based digital marketing platform. Robots have already taken on many labor-intensive, manual (read: boring) tasks we do in our everyday life, but robots will get smarter, and they need AI to do it, he said.
“AI systems will get better at communicating their decisions and reasoning to their operators, and those operators will respond with new rules, business logic, and feedback that make it more and more useful in practice over time. As a result we will see people shifting from doing tasks by themselves, to supervising AI software on how to do it for them,” he said.
That’s actually a disturbing thought.
AI and robotics will slowly move into another area where human error is common: retail. To some degree there is already automation in optical scanners and retail tracking used by stores to manage inventory, but it will be considerably improved.
The retail industry, for example, has been unable to address the problem of non-scanned items at checkout, which accounts for 30% of retailers’ annual losses. They only discover the loss in inventory well after the fact.
“AI is stepping in to address issues of this caliber across industries, and as a result, it’s often gathering just as much data as it’s processing. This resulting data is becoming a secondary benefit to businesses that use AI. AI apps created to detect these non-scans are now also providing retailers with information about their origins, whether they’re fraudulent or accidental, and how customers and cashiers are gaming the system,” said Alan O’Herlihy, CEO of Everseen, developer of AI products for point of sale systems.
And as consumers have positive experiences with drone deliveries, public opinion may go a long way towards opening up regulations for further drone use, said Jake Rheude, director of business development for Red Stag Fulfillment, an eCommerce fulfillment provider.
“Consumers are already fully on board with the concept of drone delivery. According to The Walker Sands Future of Retail 2016 Study, 79% of US consumers said they would be ‘very likely’ or ‘somewhat likely’ to request drone delivery if their package could be delivered within an hour. And 73% of respondents said that they would pay up to $10 for a drone delivery. This is an unprecedented level of acceptance for new technology with so little real-world experience from consumers,” he said.
AI in Your Home
Another prediction made by umpteen science fiction movies, usually with an alarmist tone, is that AI will come into the home in a big way. It already has if you have an iPhone, with Siri, or use Windows 10 and Cortana. Gradually it will move into other devices, the experts predict.
“Alexa, Cortana and Siri are great, but they still lack the sophistication and accuracy to be relied upon as a utility. In 2017, advances in natural language processing and natural language generation will transform what digital assistants understand and how they analyze and respond with legitimately useful information. The era of just opening a related Wikipedia page is over,” said Matt Gould, AI expert and co-founder of Arria NLG, which develops technology that translates data into language.
To make these devices work optimally, they need to develop an emotional quotient, or an EQ, predicts Dr. Rana el Kaliouby, CEO and co-founder, Affectiva, which develops facial recognition software. “We expect to see Emotion AI really come to the fore this year, and once AI systems develop social skills and rapport, AI interfaces will be more engaging and sticky, and less frustrating for their users, driving even wider adoption of the technology,” she said.
She predicts that in the future, all of our devices will be equipped with a chip that can adapt our experiences to our emotions in real time, by reading facial expressions, analyzing tone of voice and possessing built-in emotion awareness. “The ability of technology to adapt to our mood and preferences could enhance experiences ranging from driving a car to ordering a pizza,” she said.
And this should mean less typing, said Scott Webb, president of Avionos. “Physical interaction with hand-to-keyboard commands will give way to more organic input methods like voice and physical response as we move forward,” he said.
It’s been said before but is worth repeating that AI will improve security because, as in so many other cases, security AI won’t be prone to the human failings of boredom, fatigue, illness and disinterest that often cause a security lapse. It will also have much faster reaction times and much better recognition of unusual patterns.
“Machine learning and the models generated through processes around machine learning are helping enterprises analyze massive amounts of data and identify trends, anomalies, and things not detectable through standard modeling. Machine learning algorithms are helping security researchers dynamically identify threats, helping airlines improve maintenance and reliability of their aircraft, and providing the backbone for self-driving cars to analyze data in real time to make decisions,” said David Dufour, senior director of engineering at antimalware vendor WebRoot.
That immediacy is needed with catching data breaches, as well. The average time to discover a network attacker is about five months, giving attackers plenty of time to achieve their goals, said Peter Nguyen, director of technical services at LightCyber, which does behavior based security software.
“Finding signs of an attacker is difficult and demands the use of AI. Instead of trying to encounter, identify and block threats by their known characteristics, the way to find an active attacker is through their operational activities. Using machine learning, it’s possible to learn the good behavior of all users and devices and then find anomalies. Then, AI can be focused to find those anomalies that are truly indicative of an active attack,” he said.
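The baseline-then-anomaly approach Nguyen describes can be sketched in a few lines of Python. This is only an illustration of the idea, not anything from LightCyber’s product; the z-score threshold and the per-user hourly login counts are assumed for the example:

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    # Learn a baseline from historical measurements (e.g. logins per
    # hour for one user), then flag a new value that sits more than
    # `threshold` standard deviations away from that baseline.
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold
```

Real products model many behaviors per user and device at once, but the principle is the same: learn "good," then look for deviation.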
Posted: February 14, 2017 at 11:17 am
In my last installment, I discussed a few different areas where data center monitoring automation can not only make life in the data center more convenient but also become a force multiplier. I ran out of space, however, before I ran out of ideas (the story of my life). The one thing I didn’t cover was the automation you can implement in response to an alert.
As a data center professional, you probably have a solid understanding of monitoring and alerting already, but to truly appreciate how automation can relieve an enormous burden, it may be helpful to review a few examples.
What follows are some clippings from my garden of automation: alert responses that have had a huge impact on the environments where they were implemented.
Example 1: Disk Full
Disk-full alerting is a simple concept with a deceptively large number of moving parts. So, I want to break it down into specifics. First, get the alert right. As my fellow SolarWinds Head Geek Thomas LaRock and I discussed in a recent episode of SolarWinds Lab, simplistic disk alerts help nobody. If you have a 2TB disk, alerting when it’s 90 percent used translates to having 204.8GB of disk space remaining.
A good solution to this problem is to check for both percent used and remaining space. A better solution is to include logic in the alert that tests for the total space of the drive, so that drives with less than 1TB of space have one set of criteria and drives with greater than 1TB have another. These tests should all be in the same alert, if possible, because who wants to manage hundreds of alert rules? Nevertheless, you want to ensure you are monitoring disk space in a way that is reasonable for the volumes in question, and only create necessary alerts.
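That tiered logic might look like the following Python sketch. The 10 percent and 100GB thresholds are illustrative choices, not recommendations:

```python
# Tiered disk-full check: a flat "90 percent used" alert on a 2TB
# drive fires with 204.8GB still free, so large volumes get an
# absolute threshold instead. The 10%/100GB values are illustrative.
TB = 1024 ** 4
GB = 1024 ** 3

def disk_alert(total_bytes, free_bytes):
    # Drives of 1TB or more: alert on absolute free space remaining.
    if total_bytes >= TB:
        return free_bytes < 100 * GB
    # Smaller drives: a percentage threshold is still reasonable.
    return free_bytes / total_bytes < 0.10
```

Both branches live in one function, which mirrors the goal of keeping the tests in a single alert rule rather than hundreds.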
Next, clear unnecessary disk files out of various directories. For the purpose of this article, I’ll just say that all systems have a temporary directory and that you can delete all files out of that folder with impunity. The challenge in doing so comes down to a problem of impersonation. Many monitoring solutions run on the server as the system account. As a result, performing certain actions requires the script to impersonate a privileged user account. There are a variety of ways to do so, which is why I’ll leave the problem here for you to solve in a way that best fits your individual environment.
Once the impersonation issue is resolved, there’s another challenge specific to the disk-full alert: knowing that the correct directories for the specific server are being targeted. The best approach is to use a common shared folder that maps to all servers and place a script file there. That script can be set up to first detect the proper directories and then clear them out, with all the necessary safeguards and checks in place to avoid accidental damage.
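A minimal sketch of such a cleanup script in Python, leaving the impersonation problem aside as discussed above; the seven-day age cutoff is an assumed safeguard, and detecting the proper directories per server is left to the caller:

```python
import os
import time

def clear_temp_files(directory, max_age_days=7):
    # Delete regular files older than max_age_days from a temp
    # directory and return the paths removed. The age cutoff is a
    # crude safeguard against deleting files still in active use.
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(path)
    return removed
```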
Example 2: Restart an IIS Application Pool
Sadly, restarting application pools is often the easiest and best fix for website-related issues. I’m not saying that running appcmd stop… and then appcmd start… from the server command line is a quick kludge that ignores the bigger issues. I’m saying that often, resetting the application pool is the fix.
If your web team finds itself in this situation, waking a human being to do the honors is absolutely your most expensive option. But automatically restarting the application pool becomes slightly more challenging because one server could be running multiple websites, which in turn have multiple application pools. Or you could have one big application pool controlling multiple websites. It all depends on how the server and websites were configured, and you have no way of knowing.
If your monitoring solution can monitor the application pool, it will provide the name for you. Most mature monitoring solutions do so already. Once you have the name, you can do the following:
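Assuming appcmd’s usual stop/start syntax and its default install path, the restart might be scripted like this; the pool name used in the test is only an example:

```python
import subprocess

# Default location of appcmd on a standard IIS install; adjust if
# your inetsrv directory lives elsewhere.
APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

def apppool_restart_commands(pool_name):
    # Build the stop/start pair for the named application pool.
    return [
        [APPCMD, "stop", "apppool", f"/apppool.name:{pool_name}"],
        [APPCMD, "start", "apppool", f"/apppool.name:{pool_name}"],
    ]

def restart_app_pool(pool_name):
    # check=True raises CalledProcessError if appcmd reports failure,
    # so the alert action can escalate instead of silently succeeding.
    for cmd in apppool_restart_commands(pool_name):
        subprocess.run(cmd, check=True)
```

Wiring the monitoring solution’s pool-name variable into the `pool_name` argument is what makes this safe across servers with different pool layouts.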
Example 3: Restart IIS
Running a close second behind restarting application pools is resetting IIS. Doing so is, of course, the nuclear option of website fixes, since you are bouncing all websites and all connections. Even though it’s drastic, it’s a necessary step in some cases.
As with restarting application pools, getting a human involved in this incredibly simple action is a waste of everyone’s time and the company’s money. It’s far better to automatically restart and then recheck the website a minute or two later. If all is well, the server logs can be investigated in the morning as part of a postmortem. If the website is still down, it’s time to send in the troops.
You can restart the IIS web server in a number of ways:
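One such approach in Python pairs the iisreset command with the recheck loop described above; the retry count and delay are illustrative:

```python
import subprocess
import time

def restart_iis():
    # iisreset bounces every site and every connection on the box:
    # the nuclear option, so use it only when the app-pool-level
    # restart hasn't helped.
    subprocess.run(["iisreset", "/restart"], check=True)

def site_recovered(check, retries=3, delay=60):
    # Re-check the website after the restart; `check` is any
    # zero-argument callable (e.g. an HTTP probe) that returns True
    # when the site responds. Escalate to a human only if this fails.
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False
```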
Example 4: Restart a Server
If restarting the IIS service is the nuclear option, restarting the entire server is akin to nuclear Armageddon. Yet we all know there are times when restarting the server is the best option, given a certain set of conditions that you can monitor. Assuming your monitoring solution doesn’t support a built-in capability for this function, some options include the following:
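One simple option is to issue the operating system’s own delayed-reboot command. This sketch only builds the command; the 60-second delay is an arbitrary safety margin, not a requirement:

```python
import platform

def reboot_command(delay_seconds=60):
    # Build the OS-appropriate delayed-reboot command; the delay
    # gives the monitoring agent time to log the action first.
    if platform.system() == "Windows":
        return ["shutdown", "/r", "/t", str(delay_seconds)]
    # The Unix shutdown command takes its delay in minutes.
    return ["sudo", "shutdown", "-r", f"+{max(1, delay_seconds // 60)}"]
```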
Example 5: Restart a Service
Occasionally, services stop. They are sometimes even services that you, as a data center professional who needs to monitor your infrastructure, care about, such as SNMP. So, you are cutting dozens of service-down alerts. Have you thought about restarting them? In some cases, a restart doesn’t really help much. But in far more situations it does. Computers are funny things. After all, “Screws fall out all the time. The world is an imperfect place.” (From The Breakfast Club.)
Sometimes, they just need a gentle nudge. If this is the case, you can do the following:
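A hedged sketch of that nudge; systemctl is assumed for the Linux side, and older init systems will differ:

```python
import platform

def service_restart_commands(service_name):
    # Windows services restart via `net stop` / `net start`; most
    # modern Linux distros use systemctl. Treat this as a template
    # rather than a universal answer.
    if platform.system() == "Windows":
        return [["net", "stop", service_name],
                ["net", "start", service_name]]
    return [["sudo", "systemctl", "restart", service_name]]
```

Feed the service name from the alert itself (e.g. the SNMP daemon that stopped) so one alert rule covers every monitored service.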
Example 6: Backup a Network-Device Configuration
Everything I’ve gone over so far covers direct remediation-type actions. But in some cases, automation can be defensive and informational. Network-device configuration backups are a good example, in that they don’t fix anything, but instead gather additional information to help you fix the issue faster.
It’s important to note that between 40 and 80 percent of all corporate-network downtime is the result of unauthorized or uncontrolled changes to network devices. These changes aren’t always malicious. Often, the change simply went unreviewed by another set of eyes, or an otherwise simple error slipped past the team.
So, having the ability to spontaneously pull a device configuration based on an event trigger is super helpful. To do so, you can use the following approach:
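One possible approach, assuming SSH access to the device and a Cisco-style CLI (other vendors use different show commands, and key-based SSH authentication is assumed):

```python
import datetime
import pathlib
import subprocess

def config_path(host, outdir="configs"):
    # Timestamped destination, e.g. configs/sw01-20170213-220301.cfg
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    return pathlib.Path(outdir) / f"{host}-{stamp}.cfg"

def backup_config(host, user):
    # "show running-config" is the Cisco IOS form; substitute the
    # equivalent for your vendor. timeout matters here because the
    # trigger may fire while the device is already wobbly.
    result = subprocess.run(
        ["ssh", f"{user}@{host}", "show running-config"],
        capture_output=True, text=True, check=True, timeout=30,
    )
    path = config_path(host)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(result.stdout)
    return path
```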
There are two general cases when you may want to execute this automatic action. The first is when your monitoring solution receives a config change trap. Although the details of SNMP traps are beyond the scope of this article, you can configure your network devices to send spontaneous alerts on the basis of certain events. One of these events is a configuration change. The second is when the behavior of a device changes drastically, such as when ping success drops below 75 percent or ping latency increases. In either case, often the device is in the process of becoming unavailable. But in some situations, it’s wobbly, and there’s a chance to grab the configuration before it drops completely.
In both of those situations, having the latest configuration provides valuable forensic information that can help troubleshoot the issue. It also gives you a chance to restore the last-known-good configuration, if necessary. And if it leads you to think, “Well, if I have the last known good configuration, why can’t I just push that one back?” then you, my friend, have caught the automation bug! Run with it.
Example 7: Reset a User Session
Somewhere in the murky past, the first computer went online and became Node 1 in the vast network we now call the Internet. The next thing that probably happened, mere seconds later, was that the first user forgot to log off their session and left it hanging.
For any system that supports remote connections (whether in the form of telnet/SSH, drive mappings, or RDP sessions), having the ability to monitor and manage remote-connection user sessions can make running weekly, if not daily, restarts unnecessary. Or at least much smoother.
For Linux, use the who command to discover current sessions, or get greater granularity by remotely running netstat -tnpa | grep ‘ESTABLISHED.*sshd’. Once you have the process ID, you can kill it. For Windows, you get the active sessions on a system using the query session command.
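The Windows side might be scripted like this: parse the query session output into active sessions, each of which can then be ended with the logoff command. The column layout here is assumed from typical query session output:

```python
def active_sessions(query_output):
    # Parse `query session` output into (username, session_id) pairs
    # for sessions in the Active state; each ID can then be ended
    # with `logoff <id>`.
    sessions = []
    for line in query_output.splitlines()[1:]:  # skip the header row
        parts = line.lstrip(">").split()        # ">" marks own session
        if len(parts) >= 4 and parts[3] == "Active":
            sessions.append((parts[1], int(parts[2])))
    return sessions
```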
Example 8: Clear DNS Cache
At times, a server and/or application will misbehave because it can’t contact an external system. This misbehavior occurs either because the DNS cache (the list of known systems and their IP addresses) is corrupt, or because the remote system has moved. In either case, a really easy fix is to clear the DNS cache and let the server attempt to contact the system at its new location.
In Windows, use the command ipconfig /flushdns. In Linux, the command varies from one distribution to another, so it’s possible that sudo /etc/init.d/nscd restart will do the trick, or /etc/init.d/dns-clean, or perhaps another command. Research may be necessary for this one.
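A small wrapper can pick the right command per platform; as noted above, the nscd script is only one Linux possibility, so treat that branch as a placeholder for whatever your distribution uses:

```python
import platform

def flushdns_command():
    # The Windows command is standard; the Linux side varies by
    # distribution, so nscd here is only one possibility.
    if platform.system() == "Windows":
        return ["ipconfig", "/flushdns"]
    return ["sudo", "/etc/init.d/nscd", "restart"]
```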
Hopefully, at least a few of the things I’ve shared here and in this series on automation as a whole have inspired you to give automation a try in your data center. If so, or if you’re already well on your way to automating all the things, I’d love to hear about your experiences and perspective in the comments section.
Leading article image courtesy of Leonardo Rizzi under a Creative Commons license
Leon Adato, SolarWinds Head Geek and long-time IT systems management and monitoring expert, discusses all things data center in this ongoing series.
Automation’s Impact on Data Center Monitoring Alerts was last modified: February 13th, 2017 by Leon Adato
Posted: at 11:13 am
Valentine’s Day is traditionally a time when you can act on your secret crushes and let them know how you feel about them.
Anyone who cares about security and technology has an app or a platform or a programming language or something that might not be very cool or very glamorous but which they love, trust and rely on. So this year we’ve decided to ask Naked Security writers what their secret crushes are.
Mark Stockley, our web technologies guru, has had a long, slightly dysfunctional love-hate relationship with Perl. He says:
My secret tech crush is Perl.
It’s not for looks, mind. In a bad light, Perl looks like the contents of the Unix toolchain after a heavy fall down some stairs.
It’s not because Perl loves me and nobody else, either. When I first met Perl (in its prime in the late 90s), it had caught everyone’s eye and was living it up at the heart of things on seemingly every server and every website.
And it’s not because Perl was nice to me, either. Back then, we didn’t have well-lit safe spaces like Stack Overflow to get to know a programming language that had caught our eye. We had to use Usenet, and meeting Perl meant risking the piranha-infested waters of comp.lang.perl.misc, a Usenet group so fierce and elitist that suitors with questions were publicly eviscerated for sport.
Perl is complex, difficult, moody, even. On the rare occasions that things go well, working with Perl can be like painting with oils or dancing with Darcey Bussell. But when they don’t (and they frequently don’t), it can feel like wrestling socks on to an octopus.
In fact, there are a hundred reasons to choose something else, but for me there is no doubt that it’s Perl. For all its faults, it was my gateway drug, the red pill that led me to late-night Slackware installs, unfathomable man pages and scratching my head for two weeks as I looked in the wrong place for Apache’s “it works!” page.
Here at Naked Security, we’re upfront about our love for password managers and multifactor authentication. But Naked Security stalwart Lisa Vaas fell out of love with hers recently. She says:
I don’t know if you’d call this a secret crush. The feelings I have for my password manager are more along the lines of master-sub, with a dash of Stockholm syndrome. The strength of the bondage became clear recently when I lost my phone during a trip. Got off the Metro, but somehow, the phone did not.
After a good deal of hand-wringing and fruitless searching, I gave up and ordered a replacement phone courtesy of my insurance company. That’s when the fun really began.
The lost phone had my multifactor authentication (MFA) app on it, Google Authenticator, and without it, I couldn’t get into any email accounts. The lost-password hoops Google made me jump through were recursive and failed every time.
Using a friend’s laptop, I tried to reach my password manager vendor (LastPass) to help me out. I could get one toe into LastPass, given that I’ve memorized that one password, but losing my Google Authenticator app on the phone meant that I couldn’t verify my login with the second factor: the one-time password Authenticator produces.
Turns out that LastPass has no phones. None. OK, so I’ll write to customer support, I thought. Explain the situation, see what they can do to ascertain I’m not a hacker trying to hijack my account. Automatic LastPass responses kept telling me I’d get a faster response if I upgraded to premium, and I kept wailing that I am a premium user. Days later, I finally got a response: we’ll send you the instructions to download a new Authenticator instance, they said. To your email address on file, which I couldn’t get into.
I’ll stop there. Suffice it to say that I was rather impressed with the locks and chains set up around my accounts by MFA and that crazy, frustrating password manager. One lesson I learned quite well, after about a week of writhing in those bonds: I need to set up a safe word. What does that extended metaphor translate into? Well, I’m not going to give it away, but let’s just say that it’s along the lines of writing down a password and then locking that physical token safely (hopefully!) away, not putting it on a sticky note on my monitor!
Sometimes the old loves are the best, and Naked Security writer Maria Varmazis remains devoted to Notepad++. She tells us:
As someone who dabbles in code but primarily writes for a living, my indispensable but slightly unsexy tool is a text editor. For my PCs, I'm a Notepad++ fiend. For my Macs, I'm devoted to Sublime Text. (Linux text editing is a sore subject in my household. I cling to emacs, which I picked up in college, while my husband is a vi die-hard. Somehow we're still married.)
The simplicity of these editors is what makes them so beautiful and so useful. When you just want to write without distraction or frills, there's nothing better than opening a simple text editor and getting to work. Text editors let me type without worrying about font and format, or being interrupted by grammatical suggestions, and when you're on deadline, interruption-free writing is precisely what you need. Once I've written what I need and start editing, the built-in line numbering and contextual highlighting many of these text editors come with (handy for folks who are deep in code all day) make my life a lot easier as well.
Perhaps my devotion to these humble text editors comes from habit: back in the '90s, when so many of my peers and I were learning rudimentary HTML, we went to work with just Notepad. I still remember the humble "Made with Notepad" buttons some of us would put on our sites as our nerdy badge of honor. Notepad was still my editor of choice in the years following, when working on professional website development, Dreamweaver and others be damned.
I know a text editor isn't the first thing people think of when they need to write, but if you find it hard to get started and the thought of firing up Word makes your blood run cold, open a text editor instead. They provide minimal distractions and render no judgments, so you may write freely. And for that, they will always have my devotion.
Google may be dominant on the search scene, but not everyone is comfortable with the amount of data it scavenges about users. So Danny Bradbury, our man in British Columbia, tells us why he's quietly in love with DuckDuckGo:
Google is great at delivering the results you want, in an attractive style. Half the time, thanks to voice search and Google Assistant, you don't even have to type anything. But I don't like searching for things using a tool run by a company that makes money by selling my data, especially when my work causes me to search for a lot of strange things. Evidence suggests that while Google enables users to switch off the search history that it shows them, it's still collecting a lot behind the scenes. DuckDuckGo isn't as polished as Google, but I'm becoming increasingly paranoid about giving my data to large companies, especially given the political uncertainties facing us over the next few years. Perhaps I'm not the only one, given that DuckDuckGo racked up 4bn searches last year.
Love is wide-ranging, and it's not just software and applications that Naked Security writers are secretly in love with. Freelancer Bill Camarda has been faithful to a much-loved headset for many years. He tells us:
I'm jaded. I've been disappointed too often. My idea of lovable tech is something that just works, doesn't demand a lot, didn't cost a lot, and stays out of my way the rest of the time. That'd be my old Logitech ClearChat Comfort USB Headset H390.
I mean, this is seriously mature technology. Introduced a decade ago this coming August, you can still buy one new at Amazon, where you're informed that it'll "Elevate the Power of Windows Vista". Hey, marketers, I love the thing, but please: nothing could do that.
Here's what it does do: whatever I plug it into (Windows 7, 8.x, 10, Mac), it goes right to work. No waiting for drivers to fail to install. Never crashes the system. Good sound. Good mic that's easy to adjust (and moves neatly up out of the way when I'm only listening). Handy mute button. Well-made USB cable. Fairly, if not perfectly, comfy adjustable padded earphones, for today's endless Hangouts, Skype video calls, et al. Not sexy: stable, reliable, there for me. If that's not love, what is?
Meanwhile, Naked Security freelancer John E Dunn also has a hardware love: it's the privacy- and security-focused Blackphone. He says:
From the femtosecond I first saw version 1 in 2014, I've wanted one. If they ever get around to making Men in Black 4, this is the smartphone they'd use. But how to justify paying nearly 600 for an uneventful Android smartphone? One answer is that in an age obsessed with features and looks, the Blackphone strips away all that nonsense and just does the important thing, privacy, well.
Granted, a lot of people think that privacy is just another feature, but a lot of people are wrong. Security and privacy are the future of everything, the destiny of the world. Finding all of this in a slim black device that can trace its software lineage back to the genesis of popular encryption with Phil Zimmermann's PGP just adds to its desirability. It's old, but new with it.
And what about me? I've only been editing Naked Security for a few months, but I've been writing about technology and security for many years, and so I've had plenty of time to fall in love with any number of flighty suitors. But the technology I still love, even though it's almost as old and uncool as Donny Osmond (whom I saw performing in London earlier this month; I still love him, too), is Windows Phone.
I've been using Windows devices since back when the platform was known as Windows CE, and I only reluctantly moved to Android after smashing the screen of my beloved Nokia Lumia 1520 and discovering it would cost 250 to fix (I'm now rocking a Pixel XL).
I love Windows Phone for its elegant design language: instead of dozens of multicoloured icons splattered across several pages, there's a homescreen of tiles displaying all the information you need at a glance. On my homescreen I could see how many emails were waiting for me, if I'd missed any calls, which of my key contacts had tried to reach me, if I had any Twitter mentions or DMs, when my next train was, and so on.
I also love that it remains a pretty secure platform: there's been almost no malware spotted in the wild. And finally, while other manufacturers made Windows Phones, the Lumia range had (and to some, still has) the very best cameras a cellphone could sport: the 1020's camera, amazing in its day, is still one to beat.
What's your secret technology crush? We'd love to hear about your first and current loves.
Posted: at 6:53 am
Novice cryptocurrency users often worry about how best to store their bitcoin balance. Keeping money in an exchange wallet should be avoided at all costs. Using a desktop bitcoin wallet makes a lot more sense, as the user is in full control of their funds at all times. Below are some of the most convenient desktop bitcoin wallets for novice users, all of which are well worth checking out.
One of the oldest desktop bitcoin wallet solutions available today goes by the name of Armory. On paper, the desktop wallet is rather easy to set up, as most of the work is done through the installation procedure. Do keep in mind that Armory requires users to download the entire blockchain to their computer. This process will take a few hours or longer, as the blockchain is roughly 100GB in size right now.
What makes Armory so appealing are some of its more extensive features, even though they may not necessarily appeal to novice cryptocurrency users right away. Taking security seriously is of the utmost importance when it comes to bitcoin, and Armory offers multi-signature and cold storage support, which provides more security for both novice and experienced users. Armory is available for Windows, Linux, and MacOS, and is certainly worth checking out. It also provides a bit more privacy, as the wallet does not reveal the IP address linked to your bitcoin wallet.
It may seem surprising to see the Bitcoin-QT wallet ranked number 3 on this list, but there is a good reason for that. Not only does the QT client require users to download the entire blockchain, it is also one of the more bland wallets for novice users. At the same time, Bitcoin-QT works quite well and receives regular updates, which makes it worth checking out. Users will need to encrypt their wallet themselves, though, which may be considered a daunting task for novice users. In the end, Bitcoin-QT is a bit hit-and-miss among novice cryptocurrency users, so your mileage may vary.
Multibit has always advertised itself as the go-to wallet for desktop bitcoin users. It takes mere seconds to set up a bitcoin wallet, as everything is done through a setup wizard. Moreover, the wallet is available in several dozen languages, which makes it more appealing to non-English speakers as well. The only downside is that Multibit is only available on Windows right now, which leaves Linux and MacOS users out in the cold. Then again, most novice cryptocurrency users seem to be using a Windows computer, which makes Multibit a more than solid pick.
From a convenience point of view, Electrum is the best desktop Bitcoin wallet client hands down. It is very lightweight, easy to set up, and does not require the whole bitcoin blockchain to be downloaded. Electrum has been around since 2011 and still receives regular updates to improve stability and add a few new features as time progresses. Users can also choose between various user interfaces, tweaking the Electrum wallet look and feel to their liking.
Similarly to most other wallet services, Electrum relies on specific servers and nodes to maintain its connection to the bitcoin network. However, the Electrum servers are decentralized and redundant, which means users will always be able to access their bitcoin wallet without interruption. Last but not least, the wallet supports various add-ons and plugins, allowing for even more customization.
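The reason Electrum can skip the 100GB download is that, like other lightweight ("SPV"-style) wallets, it asks a server for a Merkle proof that a transaction is included in a block, then verifies that proof locally against the block header's Merkle root. A simplified sketch of the verification step (the transaction data here is illustrative; real Bitcoin hashing runs double SHA-256 over raw serialized transactions):

```python
import hashlib

def h(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(txid, proof, merkle_root):
    """Walk from a transaction hash up to the root using sibling hashes.

    `proof` is a list of (sibling_hash, sibling_is_left) pairs supplied by
    the server; the wallet only needs the block header's merkle_root.
    """
    node = txid
    for sibling, sibling_is_left in proof:
        pair = sibling + node if sibling_is_left else node + sibling
        node = h(pair)
    return node == merkle_root

# Tiny two-transaction "block": root = H(tx_a || tx_b)
tx_a, tx_b = h(b"tx-a"), h(b"tx-b")
root = h(tx_a + tx_b)
print(verify_merkle_proof(tx_a, [(tx_b, False)], root))  # True
```

A proof like this is a few hundred bytes per transaction, which is why the wallet stays lightweight while still being able to reject data a malicious server has tampered with.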
If you liked this article, follow us on Twitter @themerklenews and make sure to subscribe to our newsletter to receive the latest bitcoin, cryptocurrency, and technology news.
Posted: at 6:41 am
Ben Goertzel: Some people are gravely worried about the uncertainty and the negative potential associated with transhuman, superhuman AGI. And indeed we are stepping into a great unknown realm.
It's almost like a Rorschach type of thing, really. I mean, we fundamentally don't know what a superhuman AI is going to do, and that's the truth of it, right. And then if you tend to be an optimist, you will focus on the good possibilities. If you tend to be a worried person who's pessimistic, you'll focus on the bad possibilities. If you tend to be a Hollywood movie maker, you focus on scary possibilities, maybe with a happy ending, because that's what sells movies. We don't know what's going to happen.
I do think, however, this is the situation humanity has been in for a very long time. When the cavemen stepped out of their caves and began agriculture, we really had no idea that was going to lead to cities and space flight and so forth. And when the first early humans created language to carry out simple communication about the moose they had just killed over there, they did not envision Facebook, differential calculus and MC Hammer and all the rest, right. I mean, there's so much that has come about out of early inventions which humans couldn't have ever foreseen. And I think we're just in the same situation. I mean, the invention of language or civilization could have led to everyone's death, right. And in a way it still could. And the creation of superhuman AI, it could kill everyone, and I don't want it to. Almost none of us do.
Of course, the way we got to this point as a species and a culture has been to keep doing amazing new things that we didn't fully understand. And that's what we're going to keep on doing. Nick Bostrom's book was influential, but I felt that in some ways it was a bit deceptive, the way he phrased things. If you read his precise philosophical arguments, which are very logically drawn, what Bostrom says in his book, Superintelligence, is that we cannot rule out the possibility that a superintelligence will do some very bad things. And that's true. On the other hand, some of the associated rhetoric makes it sound like it's very likely a superintelligence will do these bad things. And if you follow his philosophical arguments closely, he doesn't show that. What he shows is that you can't rule it out, and we don't know what's going on.
I don't think Nick Bostrom or anyone else is going to stop the human race from developing advanced AI, because it's a source of tremendous intellectual curiosity but also of tremendous economic advantage. So let's say President Trump decided to ban artificial intelligence research. I don't think he's going to, but suppose he did. China will keep doing artificial intelligence research. If the U.S. and China ban it, you know, Africa will do it. Everywhere around the world has AI textbooks and computers. And everyone now knows you can make people's lives better and make money from developing more advanced AI. So there's no possibility in practice to halt AI development. What we can do is try to direct it in the most beneficial direction according to our best judgment. And that's part of what leads me to pursue AGI via an open source project such as OpenCog. I respect very much what Google, Baidu, Facebook, Microsoft and these other big companies are doing in AI. There are many good people there doing good research with good-hearted motivations. But I guess I'm enough of an old leftist, raised by socialists, that I'm skeptical that a company whose main motive is to maximize shareholder value is really going to do the best thing for the human race if it creates a human-level AI.
I mean, they might. On the other hand, there are a lot of other motivations there, and a public company in the end has a fiduciary responsibility to its shareholders. All in all, I think the odds are better if AI is developed in a way that is owned by the whole human race and can be developed by all of humanity for its own good. And open source software is sort of the closest approximation that we have to that now. So our aspiration is to grow OpenCog into sort of the Linux of AGI and have people all around the world developing it to serve their own local needs and putting their own values and understanding into it as it becomes more and more intelligent.
Certainly this doesn't give us any guarantee. We can observe things like: Linux has fewer bugs than Windows or OSX, and it's open source. So more eyeballs on something sometimes can make it more reliable. But there's no solid guarantee that making an AGI open source will make the singularity come out well. My gut feeling, though, is that there are enough hard problems with creating a superhuman AI, having it respect human values, and having it keep a relationship of empathy with people as it grows. There are enough problems there without the young AGI getting wrapped up in competition of country versus country and company versus company, and internal politics within companies or militaries. I feel like we don't want to add these problems of, sort of, human slash primate social-status competition dynamics into the challenges that are faced in AGI development.
Source: It's Already Too Late to Stop the Singularity – Big Think
Posted: February 10, 2017 at 3:08 am
Not so long ago, when automation suppliers talked about the future of manufacturing, cloud computing was central to nearly every conversation. Though the cloud remains poised to play a significant role in manufacturing's future, a great deal more attention is being focused on edge computing today.
If you're unsure about the difference between the two, the simplest way to understand edge computing is to realize that it is simply the placement of servers, or other computing devices (even a microcomputer), on or near a plant floor device for data collection, analysis and storage. Cloud computing, on the other hand, involves sending plant floor device data to an offsite server for storage and analysis.
At this year's ARC Forum, edge computing had a high profile in several automation suppliers' exhibits and was central to announcements by Inductive Automation, Bedrock Automation and Stratus.
SCADA/HMI at the Edge
Last year at the ARC Forum, Inductive Automation announced its partnership with Cirrus Link Solutions around the release of MQTT modules for Inductive Automation's Ignition product. Those modules were designed to decouple applications, such as HMI and SCADA, from plant floor devices and send the devices' data to an MQTT server, which could then be connected to various applications. By taking this step, Inductive Automation and Cirrus Link addressed the growing network traffic issues and negative impacts of too much direct data polling of plant floor devices.
Now, Inductive Automation and Cirrus Link are planning to release IgnitionEdge, a set of three products designed for plant floor edge computing applications. The products include: IgnitionEdge Panel, which creates local HMIs for field devices; IgnitionEdge Enterprise, for synchronizing data collected from an edge device to a centralized server; and IgnitionEdge MQTT, to publish field device data through MQTT.
IgnitionEdge products can handle up to 500 tags from PLCs and come with OPC-UA, Modbus, Siemens and Allen-Bradley drivers. The products are also cross-platform, meaning they can work on any platform from Windows and OSX to Linux and even Raspberry Pis.
Though the IgnitionEdge Panel is a straightforward product for creating local HMIs, an added benefit is its ability to buffer data, enabling one week's worth of data to be stored on the device in the event of a failed network connection.
IgnitionEdge Enterprise allows for the creation of a hub-and-spoke architecture so that it can act as a remote server to synchronize data from an edge device to a central Ignition server via the Ignition Enterprise Administration Module. In addition to its remote backup, restoration management, centralized monitoring of performance and health metrics, and remote alarm notification, IgnitionEdge Enterprise has store-and-forward capabilities, meaning that, like the IgnitionEdge Panel, it can handle local data buffering to collect historical data for up to one week if the connection to the central server goes down. Once connections are restored, data will synchronize back to the central server.
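The store-and-forward behaviour described here can be sketched in a few lines: readings queue up locally while the uplink is down and flush in order once it returns. (This sketch buffers in memory; a real edge node persists the buffer to disk so a week of data survives a reboot, and all names below are illustrative.)

```python
from collections import deque

class StoreAndForward:
    """Buffer readings locally while offline; flush in order on reconnect."""
    def __init__(self, send):
        self.send = send          # callable that pushes one reading upstream
        self.buffer = deque()
        self.online = False

    def record(self, reading):
        if self.online:
            self.send(reading)
        else:
            self.buffer.append(reading)   # hold until the link returns

    def reconnect(self):
        self.online = True
        while self.buffer:                # drain oldest-first
            self.send(self.buffer.popleft())

sent = []
edge = StoreAndForward(sent.append)
edge.record({"tag": "pump1.flow", "value": 42.0})   # link down: buffered
edge.reconnect()                                    # flushes the backlog
edge.record({"tag": "pump1.flow", "value": 43.5})   # link up: sent directly
print(sent)
```

The key property is ordering: buffered history is delivered before new live readings, so the central historian sees a gap-free, in-order record after an outage.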
IgnitionEdge MQTT essentially enables any device to become an edge gateway by converting the device's data into MQTT and publishing it to an MQTT broker, which can then be accessed by the MQTT Engine Module.
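Conceptually, that gateway step is just mapping each device tag onto an MQTT topic and serializing the reading as a payload for the broker. The topic scheme and JSON layout below are illustrative only; real deployments built on Cirrus Link's modules typically follow the Sparkplug specification rather than ad hoc JSON:

```python
import json
import time

def tag_to_mqtt(device, tag, value, quality="good"):
    """Convert one device tag reading into a (topic, payload) pair."""
    topic = f"plant/{device}/{tag}"            # illustrative topic scheme
    payload = json.dumps({
        "value": value,
        "quality": quality,
        "timestamp": int(time.time() * 1000),  # milliseconds since epoch
    })
    return topic, payload

topic, payload = tag_to_mqtt("plc7", "tank_level", 68.4)
print(topic)  # plant/plc7/tank_level
```

Because subscribers listen to topics rather than polling the PLC directly, any number of applications can consume the same reading without adding load on the plant floor device, which is the decoupling benefit described above.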
Arlen Nipper, president and CTO of Cirrus Link Solutions, noted that a key aspect of IgnitionEdge is its ability to enable devices to deliver the root authority on tag information. With the tag itself becoming the root authority for information about the device, "this means that human tagging can become a thing of the past," he said, adding that if a tag is manually changed, that change will be automatically reflected all the way back to the central server.
"With IgnitionEdge, people can stop talking about how to adopt IoT and get on with doing it," said Don Pearson, chief strategy officer of Inductive Automation. "IgnitionEdge takes any field device and turns it into a lightweight IoT-enabled device."
Cybersecurity at the Edge
Bedrock Automation, which made a surprising entry into the automation market just two years ago with a unique approach to designing controllers, I/O and even the backplane, extended its embedded cybersecurity capabilities with the release of Bedrock Cybershield 2.0. A key addition to this upgrade is the incorporation of a certification authority into Bedrock's hardware root of trust.
Certification authority is a critical aspect for interconnected automation systems, particularly as operations technology (OT) and IT systems converge. Adding this capability into Bedrock Automations root of trust means that applications and developers can now receive certificates of authority (CAs) to incorporate Bedrock encryption keys into their software, giving their programs secure access to Bedrock controllers.
Software providers working with Bedrock Automation on this include 3S, which is using its IEC 61131 configuration and runtime engines running over TLS (transport layer security) with authentication to the Bedrock system root of trust, and M&M Software with its Field Device Tool (FDT) for HART configuration. Albert Rooyakkers, founder and CTO of Bedrock Automation, noted that Inductive Automation and other SCADA partners will begin working with Bedrock Automation's CAs later this year.
Explaining the benefits of adding CAs to Cybershield, Rooyakkers said it extends Bedrock Automation's embedded security from the controller to the networks, applications and edge devices connected to it. At the ARC event, Rooyakkers provided insight into how this CA approach to cybersecurity will extend even to the people accessing the system, via multi-factor authentication with smart cards, biometrics and role-based access management authenticated to the root of trust inside the machine. The biometric and smart card features will be available in subsequent Cybershield releases later this year.
"With this approach, the person operating the workstation has certification authority to access the automation system and so does the workstation itself," said Rooyakkers. "And with OPC UA, we deploy an open communications standard for Ethernet networks at the control and I/O. The OPC UA server runs in the Bedrock Secure Power and UPS products, with the client running in the Ethernet I/O module."
Certification authority adds to the layers of intrinsic security designed into Bedrock Automation's electronic components and modules, which include strong cryptography, secure components, component anti-tamper, secure firmware, secure communications and module anti-tamper. "From embedded cryptography to physical tamper resistance, the design of Bedrock Automation's products addresses industrial security concerns with the objective of a nation-state defense posture," said Rooyakkers.
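To make the "secure firmware" layer concrete: a controller with a hardware root of trust boots only images whose signature verifies against a key provisioned at manufacture. The sketch below uses HMAC as a stand-in for the asymmetric signature scheme a real device would use, and every name in it is illustrative rather than anything from Bedrock's actual implementation:

```python
import hashlib
import hmac

ROOT_OF_TRUST_KEY = b"factory-provisioned-key"   # illustrative; burned in at manufacture

def sign_firmware(image, key):
    """Produce an integrity tag over the firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def boot(image, signature):
    """Refuse to boot any image whose signature does not verify."""
    expected = sign_firmware(image, ROOT_OF_TRUST_KEY)
    return hmac.compare_digest(expected, signature)

firmware = b"\x7fELF...controller-app-v2"
good_sig = sign_firmware(firmware, ROOT_OF_TRUST_KEY)
print(boot(firmware, good_sig))                 # True
print(boot(firmware + b"tampered", good_sig))   # False
```

The same verify-before-trust pattern, applied to certificates instead of firmware, is what lets partner software authenticate itself to a Bedrock controller.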
Companies can also personalize their own unique root keys with Bedrock Automation's SCC.X controller, which allows customer-specific root keys to be placed within the controller in the Bedrock factory at the time of order. Rooyakkers said these unique root keys not only provide an additional layer of protection for user IP, but also allow the system modules and applications to be defined by company, plant or other designations desired by the user.
Bedrock Automation also unveiled its new 20-channel discrete output (DO) module, the SIO8.20.
Servers at the Edge
One of the most frequently asked questions about the Industrial Internet of Things (IIoT) is: Where do I start? And while there are plenty of entry points to IIoT, one of the most basic approaches involves shoring up your edge computing capabilities.
With a long history in the financial and telecom sectors, Stratus has been turning its attention toward industrial automation and is positioning its fault-tolerant servers and high-availability software for use across industry. Evidence of this can be seen in Stratus achieving a 40 percent year-over-year increase in revenue from industrial companies in the Americas.
Jason Andersen, vice president at Stratus, said that most of the business Stratus has done in industry comes from the process side, specifically oil and gas, water/wastewater, electricity, food and beverage, and pharmaceuticals. He also noted that Stratus's primary customer in industry is someone in operations technology, not IT. "We support the whole stack, so it avoids any finger pointing by IT," he said.
Explaining why off-the-shelf, general business servers are not the best choice for industrial automation applications, Andersen said that Stratus is often brought in to work with industrial companies because "something broke [with a general business server] and it was painful for the company," or because they're looking to upgrade their operating software to enable failsafe operation and remote management of edge servers.
Another key aspect of Stratus's offering for industry, one which holds particular appeal for its OT clients, is Stratus's ability to perform predictive maintenance on its servers and software.
Andersen said that most industrial computing today involves providing a platform for HMI and SCADA. "But as companies look to do more with IIoT, they'll need more software at the edge and it needs to be protected. That's where we come in," he said. "We provide a smart connected hub for industry. Like Google Home or Amazon Echo for consumer use, we connect devices to the cloud. We're essentially selling an onramp to the future of IIoT."
In terms of its use in industry, Andersen said Stratus servers and software are application transparent, meaning that they can support any industrial software application. Current industrial automation partners include Rockwell Automation, Wonderware by Schneider Electric, GE Digital and Siemens.